AI Book 9 Part B Answer Key Updated 1
Artificial Intelligence
(Answer Key-Part B)
Class 9
2. The increasing automation of jobs is causing concern that AI will replace individuals in the workforce.
This will cause a large amount of job displacement and result in loss of livelihood.
3. AI technology in the wrong hands can lead to its misuse, which could prove dangerous for humans. It can
be used to create autonomous weapons, hack into sensitive databases, and carry out fraudulent activities.
AI can be used to spread misinformation by creating fake videos and images, called deep fakes, that can
cause harm and confusion.
4. Project deployment is the process of implementing an AI model in a real-world scenario. The model is
integrated into the desired software or system and packaged in such a way that it can be used for
practical applications.
5. The people who are directly or indirectly affected by a problem are referred to as Stakeholders.
Stakeholders are involved in the problem and benefit from the solution arrived at for the problem.
The "Who" block of the 4Ws problem-solving canvas in the "Problem-scoping and Goal Setting" stage
of the AI project cycle helps us identify the stakeholders.
6. Sensors are devices that detect and measure environmental conditions, such as temperature,
pressure, light, sound, and motion. They convert these physical parameters into electrical signals or
digital data that can be processed and analysed by AI systems.
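The conversion a sensor performs can be sketched in code. The following is a minimal illustration of how a reading might be digitised; the 10-bit depth and 3.3 V reference voltage are illustrative assumptions, not values from the text:

```python
# Sketch: how a sensor reading might be digitised by a 10-bit
# analog-to-digital converter (ADC). The 3.3 V reference voltage
# and 10-bit depth are illustrative assumptions.

def to_digital(voltage, v_ref=3.3, bits=10):
    """Map an analog voltage in [0, v_ref] to an integer in [0, 2**bits - 1]."""
    voltage = max(0.0, min(voltage, v_ref))  # clamp out-of-range readings
    return round(voltage / v_ref * (2 ** bits - 1))

print(to_digital(1.65))  # mid-scale reading -> 512
```

Real sensor hardware works with fixed reference voltages and bit depths in exactly this proportional way, which is why the digital value can then be processed by an AI system.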
7. Machine learning application: It is used for predicting the weather forecast for the next seven days
based on data from the previous year and the previous week.
Deep learning application: It is used in driverless cars to identify a person crossing the road.
8. System Maps are visual diagrams that help us to see the different parts or elements of our AI project
and how they are connected or related to each other. They can be used to understand the system's
boundaries and how it interacts with elements in the surroundings.
• The data has been unethically manipulated
• It is based on historic data which itself is biased.
Accountability: One of the biggest problems brought about by AI decision-making is who should be
blamed or held accountable when an AI causes harm. For example, if a self-driving car makes an
autonomous decision to leave a highway at high speed to avoid an obstacle and crashes into another
vehicle, we cannot put the self-driving car before a court to face justice. Even if we did, there are
no legal rules that can be applied to the case.
Job Displacement: The increasing automation of jobs is causing concern that AI will replace individuals
in the workforce.
Threat to Human Rights: AI should not replace jobs that require empathy, emotional connection, care,
and concern for other people. Such jobs include teachers, nurses, social workers, lawyers, judges,
defence personnel, HR managers, etc. These jobs require empathy and a human touch, as well as
providing emotional support and understanding, which makes the concerned people feel valued.
Human Interaction: Increased interaction with AI may affect the relationships that people have with other humans, as people may begin to prefer interacting with AI over real human connections.
4. Increased efficiency and consistency: AI can analyse large amounts of data much faster and more
accurately than humans. For example, AI-powered chatbots can provide customer support 24/7 on
sites, like Amazon and Flipkart, without getting tired or making mistakes.
Error-free work: Humans are likely to make errors while carrying out tasks due to differences in the
abilities of individuals or their emotional state. AI machines are accurately programmed to carry out
specific tasks and help reduce unnecessary errors and losses. An example is space exploration
programs where there is no scope for errors since the AI-enabled devices must carry out tasks on their
own without instructions from humans. Even a simple error can result in huge losses.
5. A system map shows the cause-and-effect relationships between elements in a system with the
help of arrows. The arrowheads depict the direction of the effect and the sign (+ or -) shows their
relationship. If the arrow goes from X to Y with a +ve sign, it means that both are directly related to
each other. That is, if X increases, Y also increases and vice versa. On the other hand, if the arrow goes
from X to Y with a –ve sign, it means that both the elements are inversely related to each other which
means if X increases, Y will decrease and vice versa.
• Problem scoping: The first step is to understand and define the problem that we want AI to solve.
Problem scoping is the stage where we set clear goals and outline the objectives of the AI project.
• Training Data: Training data is the initial dataset used to train an AI module. It is a set of examples that helps the AI model learn and identify patterns or perform particular tasks. We must ensure that the data used to train the AI model is aligned with the problem statement scoped and is sufficient, relevant, accurate, and wide-ranging.
• Testing Data: Testing data is used to evaluate the performance of the AI module. It is data that the AI algorithm has not seen before and allows us to check the accuracy of the AI module. The testing data should represent the information that the AI model will encounter practically in real-world situations.
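The division of data into training and testing sets can be sketched in plain Python; an 80/20 split is a common convention (the dataset and ratio here are illustrative, and libraries such as scikit-learn provide helpers for the same purpose):

```python
# Sketch: splitting a dataset into training and testing sets.
# The 100-item dataset and the 80/20 ratio are illustrative.
import random

data = list(range(100))   # stand-in for 100 labelled examples
random.seed(42)           # fixed seed so the split is repeatable
random.shuffle(data)      # shuffle so both sets are representative

split = int(0.8 * len(data))
train_data, test_data = data[:split], data[split:]

print(len(train_data), len(test_data))  # 80 20
```

The model would be trained only on `train_data`, and its accuracy checked on `test_data`, which it has never seen.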
8. The data features to classify images of animals into different species could be:
• Color: Distribution and intensity of colors in the images.
• Shape: Shape of the animals.
• Texture: Surface texture like fur, feathers, or scales.
• Size: Proportions and dimensions of animals in the images.
• Patterns: Patterns unique to different species.
9. a. Surveys: A survey is a method of gathering specific information from a group of people by asking
them questions. This enables us to collect valuable data quickly and efficiently. Surveys can be
conducted on paper, through face-to-face or telephone interviews, or through online forms. For
example, population census surveys are conducted once every ten years for population analysis.
b. APIs: APIs are programs used by developers to acquire data from other programs, services, or databases to extract relevant data required for the AI project. For example, if there is an AI project involving sentiment analysis, developers can use a social media API to access user posts or comments from Twitter or Facebook. Here, data acquisition is done automatically through special programs.
11. Differences between the rule-based approach and the learning-based approach of AI modelling are:
Rule-based approach:
• The machine follows the rules defined by the developer.
• AI is achieved through the rule-based technique.
• Typically uses labeled data.
• May require less training time.
Learning-based approach:
• The machine learns on its own from data.
• This is achieved through the learning technique.
• Can handle both labeled and unlabeled data.
• Requires more training time.
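The contrast between the two approaches can be sketched on a toy task: deciding whether a fruit is ripe from a colour value between 0 and 100. The threshold, the data, and the midpoint rule below are illustrative assumptions (the midpoint is a simplified stand-in for a real learning algorithm):

```python
# Sketch: rule-based vs learning-based on the same toy task.
# All values and the "learning" rule are illustrative.

# Rule-based: the developer writes the rule explicitly.
def rule_based(colour):
    return "ripe" if colour >= 60 else "unripe"

# Learning-based: a boundary is derived from labelled examples.
examples = [(30, "unripe"), (45, "unripe"), (70, "ripe"), (90, "ripe")]

def learn_threshold(examples):
    ripe = [c for c, label in examples if label == "ripe"]
    unripe = [c for c, label in examples if label == "unripe"]
    # Midpoint between the two groups serves as the learned boundary.
    return (min(ripe) + max(unripe)) / 2

threshold = learn_threshold(examples)
print(threshold)       # 57.5
print(rule_based(80))  # ripe
```

In the first function the developer supplies the rule; in the second, the boundary comes from the data, so different training data would produce a different boundary.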
12. Line chart: A line chart is a chart that is created by plotting a series of points that are connected with
the help of a line and is used to track changes in values over a period of time.
Bar chart: A bar chart is a chart that presents categorical or grouped data with rectangular bars
where the height or length of the bars is proportional to the values that they represent.
13. Pie Chart shows proportions of a whole with sectors proportional to data quantities. It is suitable for
comparing categories within a single dataset.
Area Chart displays trends over time with shaded areas below lines representing data quantities. It is
ideal for visualising changes in data over continuous time intervals or comparing multiple datasets.
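The idea that a bar's length is proportional to the value it represents can be illustrated even without a charting library; the sales figures below are made up for the example:

```python
# Sketch: a text-only bar chart. Bar length is proportional to the
# value, just as in a real bar chart. The data is illustrative.

sales = {"Mon": 12, "Tue": 7, "Wed": 15, "Thu": 9}

for day, value in sales.items():
    print(f"{day} | {'#' * value} {value}")
```

Tools like Tableau or matplotlib draw the same proportional bars graphically instead of with characters.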
1. The data pyramid is a hierarchical structure used in data literacy to represent the progress of data
from its raw form to actionable insights. The Data Pyramid is made up of the different stages in the process of working with data: data, information, knowledge, and wisdom (from bottom to top).
2. Quantitative data interpretation is made on numeric data. It helps us answer questions involving
‘when’, ‘how many’, and ‘how often’. For example, (how many) number of likes on the Instagram
post. It can be expressed using finite numbers.
3. Textual data interpretation involves analysing and drawing conclusions from non-numeric data, such
as written text from a variety of sources (social media posts, surveys, polls).
4. The process of continuously acquiring, developing, and improving the ability to understand, interpret, and use data effectively is called cultivating data literacy. Data literacy gives you the ability to analyse and get valuable insights from the massive amount of data that surrounds us in our daily lives.
5. Data acquisition is the process of acquiring or collecting accurate and reliable data from relevant
sources. The collected data is used for decision-making, analysis, forecasting, and visualisation.
6. Data visualisation is a technique that provides a better understanding of data and helps in gaining
insights from it. Data visualisation is a broad term that includes any graphic that helps you
understand or gain new insights from data.
7. Tableau is a popular data visualisation tool. It transforms the way you use data to solve problems.
It is used to create charts, graphs, and dashboards, making data more comprehensible and
actionable.
• Reduce Cost: Identifying needs can lead to a reduction in cost. For example, a restaurant
owner could decide to drop/modify some dishes on the menu that are not popular or have
got bad reviews.
• Identifying Needs: You can identify the needs of people by data interpretation. For example,
Veg Farmhouse Pizza is a popular choice among the age group 8-10.
6. The data pyramid is a hierarchical structure used in data literacy to represent the progress of data
from its raw form to actionable insights. Different stages of a data pyramid, starting from bottom
are data, information, knowledge and wisdom. Let us move from bottom to top to understand the
different stages of Data Pyramid:
• Initially, data exists in its raw form, which is not very useful. Example: It is like scattered pieces of a puzzle or a pile of ingredients before cooking a meal.
Data is processed through various methods, like analysing and organising raw data, to
provide meaningful information. Processing data makes it easier to understand and
interpret. Example: It is like arranging the scattered pieces of a puzzle or using the
ingredients to make a delicious dish.
• The processed information is further transformed into knowledge. It helps us understand how things
are happening in the world around us. Example: It is like understanding how joining the
scattered pieces of a puzzle reveals the complete image, or understanding what
ingredients and steps are involved in making a dish.
7. Independent features: These variables are the input to the model. They are the information you
provide to make predictions. These variables are also called the predictor or input variables. These
features are not influenced by other variables, but they are used to determine the outcome. There can be more than one independent feature in a model.
Dependent features: These variables are the outputs or results of the model. They are what you
are trying to predict. These variables are also called the response or outcome variables. The value
of the dependent feature is influenced by the independent feature. Typically, there is only one
dependent feature available, though you can have more than one in some cases. For example,
considering the same example of predicting the health of a patient, the dependent features will be
based on the input variables. In this case, the dependent feature will decide whether the patient
has the disease or not.
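The split between independent and dependent features can be sketched for the patient-health example mentioned above; the feature names and values here are invented for illustration:

```python
# Sketch: independent vs dependent features for a hypothetical
# patient-health dataset. Names and values are illustrative.

patients = [
    {"age": 25, "blood_pressure": 118, "has_disease": False},
    {"age": 62, "blood_pressure": 150, "has_disease": True},
]

# Independent features (inputs): age and blood pressure.
X = [(p["age"], p["blood_pressure"]) for p in patients]
# Dependent feature (the output we try to predict): has_disease.
y = [p["has_disease"] for p in patients]

print(X)  # [(25, 118), (62, 150)]
print(y)  # [False, True]
```

By convention the inputs are collected into `X` and the single output into `y`, mirroring how most machine learning libraries expect data to be arranged.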
8. Graphical data interpretation involves the analysis of data represented in the form of graphs, like
bar graphs, line graphs, pie graphs, etc.
Bar Graph: In a bar graph, data is represented using vertical or horizontal bars.
Pie Charts: Pie charts have the shape of a pie, and each slice of the pie represents a portion of the
entire pie, allocated to each category. It is a circular chart divided into various sections (think of a
cake cut into slices). Each section of the pie chart is proportional to the corresponding value.
2. Patterns are the ordered sequence, relation, or arrangement between objects. Patterns exist all around us. Example: the multiplication table of a number, such as 2: 2, 4, 6, 8, 10, 12, 14.
3. Calculus helps in the training and optimisation of machine learning models. Calculus provides
different minimisation functions, which are used in AI algorithms, to find the best solutions to
problems.
4. Certain Event: An event for which you are sure that it will happen.
5. Statistics has a wide range of applications across various fields, like healthcare and medicine,
business and economics, social science, engineering and technology, and so on. Statistics plays a
crucial role in predicting the performance of the sports team, analysing the opinions of the voters,
understanding the reading levels of students, etc.
Disease Prediction: A very good example of disease prediction through statistics is the Covid-19
pandemic. The government analysed the areas where COVID cases were increasing or where the
vaccination drive needed to be improved.
Weather Forecasting: Weather forecasting is also done with the help of statistical analysis. The
future weather conditions are predicted with the help of data of past weather conditions.
7. Tally marks are used to represent data in graphical form in statistics. They help in observing or exploring data. One vertical line is drawn for each of the first four counts, and the fifth count is represented by a diagonal line drawn across the previous four.
8. Probability is a way to tell how likely something is to happen. The following equation defines the
likelihood of an event happening:
Probability of an Event = Number of Favourable Outcomes / Total Number of Possible Outcomes
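The formula above can be applied directly, for example, to a standard six-sided die:

```python
# Applying the formula: favourable outcomes / total possible outcomes.

def probability(favourable, total):
    return favourable / total

# Rolling an even number on a six-sided die: 3 favourable out of 6.
print(probability(3, 6))  # 0.5
```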
B. Long answer type questions.
1. Statistics in disaster management are used as follows:
Risk assessment: By analysing historical data, statistical models can assess risks of hazard
occurrence and alert the people who might be affected by them.
Early warning: By using weather forecasts, seismic activity, hydrological data, and satellite
imagery, statistical models can be used to predict the chances of the occurrence of hazards, which
may help authorities take preventive decisions.
Resource allocation and logistics: By analysing demographic data, population density,
infrastructure maps, and transportation networks, statistics can help in taking decisions regarding
the need for emergency supplies, equipment, and facilities.
Public health monitoring: Statistical modeling helps forecast disease transmission, assess the
effectiveness of treatments, and guide public health response strategies.
2.
No. of books borrowed | Tally marks | Frequency Count
1 | IIII | 4
2 | IIII | 4
3 | III | 3
4 | II | 2
5 | II | 2
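A frequency table like the one above can be built from raw data with Python's collections.Counter; the list of borrow counts below is reconstructed to match the table:

```python
# Sketch: building the frequency table from raw data using
# collections.Counter. The raw list is reconstructed to match
# the table above (1:4, 2:4, 3:3, 4:2, 5:2).
from collections import Counter

borrowed = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5]
freq = Counter(borrowed)

for books in sorted(freq):
    print(books, "I" * freq[books], freq[books])
```

Counter does the same job as tallying by hand: it counts how often each value occurs.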
3.
• 30% chance of rain means there is a 30% probability that at least 0.01 inches of rain will fall somewhere in the forecast area.
• A forecast might state there is a 70% chance of temperatures falling between 65°F and 75°F.
• Probability is used to predict extreme weather events, such as storms, tornadoes, and floods, providing probabilities for the occurrence and potential impact of these events. For example, a forecast might predict a 10% chance of a tornado forming in a specific region.
• Long-term climate prediction can also be done with the help of probability, for example, the probability of a rise in temperature or the probability of rainfall in a particular season in a particular area.
• An advisory message can be circulated if there is an 80% probability that temperatures will exceed a certain threshold.
• Reuse of existing data: Generative AI cannot create entirely new concepts or ideas from
scratch. It remixes and rearranges existing information to generate output.
6. Runway ML is a creative platform that offers a collection of pre-trained machine learning models
that can be used for many tasks, like image and video generation, style transfer, object detection,
and more. It provides an easy-to-use interface for building, training, and deploying various types of
generative models, including GANs, VAEs, and image classifiers. It enables users to incorporate AI
into their creative workflows without needing extensive programming knowledge.
7. Example of Supervised Learning: Suppose you have trained a machine (model) with the help of a
machine learning algorithm and images of animals, with each image labeled to indicate the animal it
depicts: 'Elephant', 'Camel', and 'Cow'. This means the model was trained using the supervised
machine learning method. After training, this model will be able to identify animals in new images
that you provide as input. Notice the given figure, where the provided input image allows the model
to correctly identify the 'Elephant', 'Camel', and 'Cow'.
9. Variational Autoencoders (VAEs) are another class of generative models. VAEs learn the
distribution of data and then generate new data with it. They take data (images, text, etc.) and
condense it into a smaller, more manageable form. This compressed version captures the essence
of the data. Then, the VAEs can use this compressed form to generate entirely new data points that
resemble the originals, but with a twist of creativity. This makes VAEs useful for generating new
content that looks like the original data on which they were trained.
10. Autoencoders are neural networks that are trained to learn a compressed representation of data.
First, the data is compressed and then decompressed to restore it to its original form. They
generate highly realistic samples.
B. Long answer type questions.
1. Supervised learning works on labelled datasets. It is a task-driven process where models learn to
make predictions or decisions based on input data and corresponding output labels. In a supervised
learning model, we divide the data into a training dataset and a test dataset. The training dataset is
used to train the model, while the test dataset is used to evaluate the model's accuracy in
predicting output. Unsupervised learning works on unlabelled datasets. It is a data-driven process
that allows the algorithm to explore the data and find its own insights by identifying relationships,
patterns, and trends. Its goal is to find similarities and differences between data points.
2. The benefits of generative AI are as follows:
Creativity: Generative AI tools can generate different types of content in a very attractive and
creative manner. For example, content can be generated in the form of an image, video, or audio,
which is a better medium of communication as compared to textual information.
Efficiency: AI tools can generate summarised and precise information, which may be very useful
and save a lot of time.
Personalisation: Generative AI tools can generate content in the form of text, image, or video as per
the user's requirements. For example, if you imagine a scene with a red rose floating in a river, you can generate that scene.
Exploration: Generative AI can be used to explore new design spaces or optimise complex
systems, such as designing new medicines or improving industrial processes. It helps in developing
and strengthening an idea, because it can visualise your imagination. Also, generative AI tools can be
used to generate huge datasets for training and testing other AI models.
Accessibility: Many generative AI tools are freely available and easy to use. By entering a prompt
into the tool, you get whatever you want. For example, tools like GPT-3 can write articles, generate
marketing copy, and draft emails.
Reuse of existing data: Generative AI cannot create entirely new concepts or ideas from
scratch. It remixes and rearranges existing information to generate output.
Limited understanding of context: It cannot understand long contexts. The question that you
asked two hours ago may not be linked to the question that you are asking right now.
Ethical issues: The ability of generative AI to generate realistic content raises ethical concerns.
High computational demands: It is easy to use generative AI models. But it is very complex to
develop generative AI models because they require a huge amount of data and processing
capacity.
6. While generative AI offers many benefits, several ethical considerations should be kept in mind when using this technology.
Ownership: It is always a concern to determine who will be the owner of the content generated
by generative AI. This is particularly relevant in creative fields, such as music, literature, or art,
where generative AI can create original work, although it may be based on some existing
content.
Human autonomy: Uncontrolled use of generative AI may call human autonomy into question. As technology becomes more sophisticated, it may become increasingly difficult to
distinguish between content generated by humans and machines.
Biased outcome: If the data used for training is biased, then the generative AI model will also
generate biased outcomes. This may lead to inaccurate results.
Misuse of tools: There are many generative AI tools, like Deepfake, that can be used to generate
fake images, videos, and content that can be misused for criminal activity.
Privacy: Generative AI can potentially be used to generate sensitive personal information, such
as credit card numbers, social security numbers, or medical records. This could be used for
malicious purposes.
7. Generative Adversarial Networks (GANs): A Generative Adversarial Network uses two neural networks that compete with each other to generate new data. These two neural networks are called the generator network and the discriminator network. The task of the generator network is
to produce new data, while the task of the discriminator network is to analyse the generated data
and provide feedback. The generator network generates the data until it receives feedback from
the discriminator network that the generated data is like real data. Examples: Generating fake
images, creating portraits of non-existing people, converting images from day to night, generating
images based on textual descriptions, generating realistic videos, etc.
Variational Autoencoders (VAEs): Another class of generative models is VAEs. VAEs learn the distribution of data and then generate new data with it. They take data (images, text, etc.) and condense it into a smaller, more manageable form, which they can then use to generate new data points that resemble the originals.
8. Architecture: Generative AI can create innovative and complex architectural designs that may
not be easily conceived by human designers. AI can generate realistic 3D models and
visualizations of architectural designs, helping architects and clients better understand the
outcome.
Coding: Generative AI can analyze code and detect potential bugs or vulnerabilities, enhancing
the overall quality and security of software. AI-powered coding assistants can help learners
understand programming concepts and provide real-time feedback on their code.
Music: Generative AI is also being used to create new music and original compositions, and artists can even collaborate with AI to produce innovative music. AI can suggest instrumentation and orchestration arrangements for a specific song.