
Artificial Intelligence
(Answer Key-Part B)
Class 9

© Kips Learning Pvt. Ltd 2024


Unit 1: Introduction to AI
A. Short answer type questions.
1. Artificial intelligence is the ability of machines to perform tasks that typically require human-like
intelligence, such as visual perception, speech recognition, decision-making, and language translation.

2. The increasing automation of jobs is causing concern that AI will replace individuals in the workforce.
This will cause a large amount of job displacement and result in loss of livelihood.

3. AI technology in the wrong hands can lead to its misuse, which could prove dangerous for humans. It can
be used to create autonomous weapons, hack into sensitive databases, and carry out fraudulent activities.
AI can be used to spread misinformation by creating fake videos and images, called deep fakes, that can
cause harm and confusion.

4. Project deployment is the process of implementing an AI model in a real-world scenario. The model is
integrated into the desired software or system and packaged in such a way that it can be used for
practical applications.

5. The people who are directly or indirectly affected by a problem are referred to as stakeholders.
Stakeholders are involved in the problem and benefit from the solution arrived at for the problem.
The "Who" block of the 4Ws problem-solving canvas in the "Problem-scoping and Goal Setting" stage
of the AI project cycle helps us identify the stakeholders.
6. Sensors are devices that detect and measure environmental conditions, such as temperature,
pressure, light, sound, and motion. They convert these physical parameters into electrical signals or
digital data that can be processed and analysed by AI systems.

7. Machine learning application: It is used for predicting the weather for the next seven days
based on data from the previous year and the previous week.
Deep learning application: It is used in driverless cars to identify a person crossing the road.

8. System Maps are visual diagrams that help us to see the different parts or elements of our AI project
and how they are connected or related to each other. They can be used to understand the system's
boundaries and how it interacts with elements in the surroundings.

9. Data visualisation is important as it helps us in the following ways:


• Simplifies complex data, thus making it easier to comprehend.
• Helps gain a deeper understanding of the trends, relationships and patterns present within the
data.
• Uncovers hidden relationships or anomalies (odd behaviour) that may not be immediately
apparent.
• Helps us in selecting models for the subsequent AI Project Cycle stage.
• Makes it easy to communicate insights to others, even to non-technical persons.

B. Long answer type questions.


1. a. Privacy issue: AI is dependent on vast amounts of data. At times, personal data may be accessed by
AI algorithms without the knowledge of the concerned individual. There are chances that this data may
be misused for fraudulent activities.
b. There is a concern that automation may replace human jobs, leading to job displacement. This may
cause workers to lose their jobs and lead to loss of livelihood.
c. AI bias occurs when algorithms produce biased results due to biased training data, posing risks in
decision-making processes. Types of bias include data bias, sampling bias, gender bias and historical
bias. The data fed into an AI algorithm could cause bias because of the following three reasons:
• The data does not reflect the main population.
• The data has been unethically manipulated.
• It is based on historical data, which itself is biased.

2. Historical Development of AI:


1940s-1950s: The origins of AI can be traced back to the work of pioneers, like Alan Turing, John
McCarthy, and Marvin Minsky. They laid the foundations for AI research, exploring concepts, like
machine learning, logic, and reasoning.
1960s-1970s: This period is often referred to as the 'AI winter', as progress in AI research slowed down
due to limited computing power and funding. However, researchers continued to work on expert
systems that could simulate the knowledge and reasoning of human experts.
1980s-1990s: This era saw a revival of AI research, driven by advances in machine learning and neural
networks. Applications like speech recognition and computer vision emerged.
2000s-2010s: The rise of big data and cloud computing enabled new forms of AI, such as natural
language processing and recommendation systems. Companies like Google, Facebook, and Amazon
invested heavily in AI research and development, leading to advancements in areas like robotics and
autonomous vehicles.
2020s and beyond: AI continues to evolve and impact various industries from healthcare and finance
to entertainment and education.

3. Impact of increasing human dependence on AI


Increasing human dependence on AI can raise some serious concerns. Some of these are discussed as
follows:
Privacy and Security: We avail of many free services on the internet, leaving behind a trail of data, but
we are often not made aware of it. An example of a privacy concern is face recognition. Face
recognition in photos and videos allows identification, profiling, and searching for individuals, which is
against an individual’s right to privacy and freedom. It can be misused if it falls into the wrong hands.

Accountability: One of the biggest problems brought about by AI decision-making is who should be
blamed or held accountable when an AI causes harm. For example, if a self-driving car makes an
autonomous decision to leave a highway at high speed to avoid an obstacle and crashes into another
vehicle, we cannot take the self-driving car in front of a court to face justice. Even if we did, there are
no legal rules that can be applied to the case.

Job Displacement: The increasing automation of jobs is causing concern that AI will replace individuals
in the workforce.

Threat to Human Rights: AI should not replace jobs that require empathy, emotional connection, care,
and concern for other people. Such jobs include teachers, nurses, social workers, lawyers, judges,
defence personnel, HR managers, etc. These jobs require empathy and a human touch, as well as
providing emotional support and understanding, which makes the concerned people feel valued.

Human Interaction: Increased interaction with AI may affect the relationships that people have with
other humans, as people may come to prefer interacting with AI over real human connections.

4. Increased efficiency and consistency: AI can analyse large amounts of data much faster and more
accurately than humans. For example, AI-powered chatbots can provide customer support 24/7 on
sites, like Amazon and Flipkart, without getting tired or making mistakes.
Error-free work: Humans are likely to make errors while carrying out tasks due to differences in the
abilities of individuals or their emotional state. AI machines are accurately programmed to carry out
specific tasks and help reduce unnecessary errors and losses. An example is space exploration
programs where there is no scope for errors since the AI-enabled devices must carry out tasks on their
own without instructions from humans. Even a simple error can result in huge losses.

5. A system map shows the cause-and-effect relationships of elements with each other in a system with
the help of arrows. The arrowhead depicts the direction of the effect, and the sign (+ or -) shows the
relationship. If the arrow goes from X to Y with a +ve sign, it means that the two are directly related to
each other; that is, if X increases, Y also increases, and vice versa. On the other hand, if the arrow goes
from X to Y with a -ve sign, it means that the two elements are inversely related to each other, which
means if X increases, Y will decrease, and vice versa.

6. The five stages in the AI project cycle are:

• Problem scoping: The first step is to understand and define the problem that we want AI to solve.
Problem scoping is the stage where we set clear goals and outline the objectives of the AI project.

• Data Acquisition: This stage focuses on collecting the relevant data required for the AI system.
Since this data forms the base of your project, care must be taken that the data is collected from
reliable and authentic sources.
• Data Exploration: This stage involves exploration and analysis of the collected data to interpret
patterns, trends and relationships. The data is in large quantities, so in order to understand the
patterns easily, you can use different visual representations such as graphs, databases, flow charts
and maps.
• Modelling: After exploring the patterns, you need to select the appropriate AI model to achieve the
goal. This model should be able to learn from the data and make predictions.
• Evaluation: The selected AI model now needs to be tested and the results need to be compared
with the expected outcome. This helps in evaluating the accuracy and reliability of the model and
improving it.

7. The two types of data used in the AI project cycle are:

• Training Data: Training data is the initial dataset used to train an AI module. It is a set of examples
that helps the AI model learn and identify patterns or perform particular tasks. We must ensure that
the data used to train the AI model is aligned with the problem statement scoped and is sufficient,
relevant, accurate, and wide-ranging.
Testing Data: Testing data is used to evaluate the performance of the AI module. It is data that the
AI algorithm has not seen before and allows us to check the accuracy of the AI module. The testing
data should represent the information that the AI model will encounter practically in real-world
situations.
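
To make the training/testing split concrete, here is a minimal Python sketch; the use of scikit-learn's train_test_split, the 80/20 ratio, and the tiny invented dataset are illustrative assumptions, not part of the text.

    from sklearn.model_selection import train_test_split

    # Invented dataset: X holds the examples, y holds their labels.
    X = [[1], [2], [3], [4], [5], [6], [7], [8], [9], [10]]
    y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

    # 80% of the examples train the model; the held-out 20% acts as
    # testing data the model has never seen, used to check its accuracy.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    print(len(X_train), "training examples,", len(X_test), "testing examples")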

8. The data features to classify images of animals into different species could be:
• Color: Distribution and intensity of colors in the images.
• Shape: Shape of the animals.
• Texture: Surface texture like fur, feathers, or scales.
• Size: Proportions and dimensions of animals in the images.
• Patterns: Patterns unique to different species.

9. a. Surveys: A survey is a method of gathering specific information from a group of people by asking
them questions. This enables us to collect valuable data quickly and efficiently. Surveys can be
conducted on paper, through face-to-face or telephone interviews, or through online forms. For
example, population census surveys are conducted once every ten years for population analysis.
b. APIs: APIs are programs used by developers to acquire data from other programs, services, or
databases to extract relevant data required for the AI project. For example, if there is an AI project
involving sentiment analysis, developers can use a social media API to access user posts or comments
from Twitter or Facebook. Here, data acquisition is done automatically through special programs.
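
As a sketch of how API-based data acquisition might look in Python, consider the snippet below; the endpoint URL and query parameters are purely hypothetical, and a real social media API would also require authentication keys.

    import requests

    # Hypothetical endpoint and query parameters, for illustration only.
    API_URL = "https://api.example.com/v1/posts"
    params = {"query": "my_brand", "limit": 100}

    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()      # stop if the request failed
    posts = response.json()          # parsed JSON: a list of post records
    for post in posts:
        print(post.get("text", ""))  # text fields would feed sentiment analysis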

10. Differences between AI, ML and DL are:

Artificial Intelligence (AI):
• The aim of AI is to mimic human intelligence to create intelligent machines and programs.
• It is a broad field of computer science that simulates intelligent behaviour.

Machine Learning (ML):
• The aim of ML is to create machines that can learn on their own using data, improving over time without being programmed for it.
• It is a subset of AI that involves algorithms that learn patterns from data.

Deep Learning (DL):
• The aim of DL is to build neural networks to mimic the working of the human brain and use complex algorithms and large volumes of data to enable an AI model to learn.
• It is a subset of AI and ML that focuses on training deep neural networks.
11. Differences between rule-based approach and learning-based approach of AI modelling are:

Rule-based approach:
• The machine follows the rules defined by the developer.
• AI is achieved through the rule-based technique.
• Typically uses labelled data.
• May require less training time.

Learning-based approach:
• The machine learns on its own from data.
• This is achieved through the learning technique.
• Can handle both labelled and unlabelled data.
• Requires more training time.

12. Line chart: A line chart is a chart that is created by plotting a series of points that are connected with
the help of a line and is used to track changes in values over a period of time.
Bar chart: A bar chart is a chart that presents categorical or grouped data with rectangular bars
where the height or length of the bars is proportional to the values that they represent.

13. Pie Chart shows proportions of a whole with sectors proportional to data quantities. It is suitable for
comparing categories within a single dataset.
Area Chart displays trends over time with shaded areas below lines representing data quantities. It is
ideal for visualising changes in data over continuous time intervals or comparing multiple datasets.
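
The chart types described in answers 12 and 13 can be drawn with a few lines of Python; the sketch below assumes matplotlib and uses made-up monthly sales figures purely for illustration.

    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr"]
    sales = [120, 150, 90, 180]               # made-up values

    fig, axes = plt.subplots(2, 2, figsize=(8, 6))
    axes[0, 0].plot(months, sales)            # line chart: change over time
    axes[0, 0].set_title("Line chart")
    axes[0, 1].bar(months, sales)             # bar chart: compare categories
    axes[0, 1].set_title("Bar chart")
    axes[1, 0].pie(sales, labels=months)      # pie chart: parts of a whole
    axes[1, 0].set_title("Pie chart")
    axes[1, 1].fill_between(range(4), sales)  # area chart: shaded trend
    axes[1, 1].set_title("Area chart")
    plt.tight_layout()
    plt.show()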

Unit 2: Data Literacy
A. Short answer type questions.

1. The data pyramid is a hierarchical structure used in data literacy to represent the progress of data
from its raw form to actionable insights. The Data Pyramid is made up of the different stages in the
process of working with data: data, information, knowledge, and wisdom (from bottom to top).

2. Quantitative data interpretation is made on numeric data. It helps us answer questions involving
‘when’, ‘how many’, and ‘how often’. For example, the number of likes on an Instagram post (how
many). It can be expressed using finite numbers.

3. Textual data interpretation involves analysing and drawing conclusions from non-numeric data, such
as written text from a variety of sources (social media posts, surveys, polls).

4. The process of continuously acquiring, developing, and improving the ability to understand, interpret,
and use data effectively is called cultivating data literacy. Data literacy gives you the ability to analyse
and get valuable insights from the massive amount of data that surrounds us in our daily lives.

5. Data acquisition is the process of acquiring or collecting accurate and reliable data from relevant
sources. The collected data is used for decision-making, analysis, forecasting, and visualisation.

6. Data visualisation is a technique that provides a better understanding of data and helps in gaining
insights from it. Data visualisation is a broad term that includes any graphic that helps you
understand or gain new insights from data.

7. Tableau is a popular data visualisation tool. It transforms the way you use data to solve problems.
It is used to create charts, graphs, and dashboards, making data more comprehensible and
actionable.

8. Data features are the characteristics or properties of the data. They describe each piece of
information in a dataset. They are also called variables. For example, in a table of student records,
features could include things like the student’s name, age, or grade.
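
To make this concrete, here is a small sketch using pandas with invented student records; each column of the table is one feature (variable).

    import pandas as pd

    # Invented records: each column (name, age, grade) is a data feature.
    students = pd.DataFrame({
        "name":  ["Asha", "Ravi", "Meera"],
        "age":   [14, 15, 14],
        "grade": ["A", "B", "A"],
    })
    print(students.columns.tolist())   # ['name', 'age', 'grade'] -> features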

B. Long answer type questions.


1. Data security means securing or protecting data or digital information from any unauthorised
access or misuse. Data security is important because, if a data breach occurs, an organisation can
face a court case, fines, and reputational damage. Data security provides protection against
monetary losses and interruptions to operations. It ensures that data remains accurate and
accessible to authorised users.

2. Data acquisition comprises three key steps:

Data Discovery: Searching for new datasets
Data Augmentation: Adding more data to the existing data
Data Generation: Generating data if data is not available

3. Quantitative Data Interpretation: Quantitative data interpretation is made on numeric data.


Quantitative data interpretation helps us answer questions involving ‘when’, ‘how many’, and
‘how often’. For example, the number of likes on an Instagram post (how many). Methods used for
analysis involve assessments, tests, polls, surveys, etc.
Qualitative Data Interpretation: Qualitative data is uncountable; it tells about the emotions
and feelings of people. Qualitative data interpretation is focused on the insights and motivations of
people. It helps us answer questions involving ‘how’ and ‘why’. For example, why do students like
attending online classes? Methods used for analysis involve interviews, focus groups, etc.

4. Data interpretation is important for several reasons:


• Informed Decision-making: Interpreting data allows you to make informed decisions, i.e.,
when you have the correct knowledge, you can make the right decision. For example, by
knowing the average height of students, the school can custom design the chairs and tables
according to the requirements of the class.

• Reduce Cost: Identifying needs can lead to a reduction in cost. For example, a restaurant
owner could decide to drop/modify some dishes on the menu that are not popular or have
got bad reviews.

• Identifying Needs: You can identify the needs of people by data interpretation. For example,
Veg Farmhouse Pizza is a popular choice among the age group 8-10.

5. The data literacy framework includes the following key components:
• Plan: Planning is like building the foundation of a house. It is the first step in any program.
Setting clear goals and objectives gives direction and purpose to the program.
• Communicate: Good communication is important to ensure that everyone understands
what the data literacy program is about. It helps stakeholders know the program's goals and
what is expected of them.
• Assess: Assessing how comfortable people are with data and data tools is crucial. It helps us
understand where they are starting from and what they need to learn.
• Develop culture: Building a data-literate culture means making data skills a natural part of
how things work in an organisation. It is about encouraging and supporting everyone to get
better at using data.
• Prescriptive learning: Offering different ways of learning helps everyone learn better. It
provides resources and activities that suit different learning styles and preferences.
• Evaluate: Evaluating the effectiveness of the data literacy program is essential for measuring
progress, identifying areas for improvement, and demonstrating the program's impact.

6. The data pyramid is a hierarchical structure used in data literacy to represent the progress of data
from its raw form to actionable insights. Different stages of a data pyramid, starting from bottom
are data, information, knowledge and wisdom. Let us move from bottom to top to understand the
different stages of Data Pyramid:
• Initially, data exists in its raw form, which is not very useful. Example: It is like scattered
pieces of a puzzle or a pile of ingredients before cooking a meal.
• Data is processed through various methods, like analysing and organising raw data, to
provide meaningful information. Processing data makes it easier to understand and
interpret. Example: It is like arranging the scattered pieces of a puzzle or using the
ingredients to make a delicious dish.
• The processed information is transformed into knowledge, which helps us understand how things
are happening in the world around us. Example: It is like understanding how joining the
scattered pieces of a puzzle reveals the complete image, or understanding what
ingredients and steps are involved in making a dish.
• Wisdom takes us a step forward by providing an understanding of why things happen in a
particular way. In short, wisdom allows you to understand the important reasons or causes
behind the trends or patterns that you observe. Example: It is like understanding the
strategy behind joining the pieces of a puzzle in a specific order to get the complete image.
This deep understanding helps you not just see the picture but also understand the method
and reasoning behind putting the pieces together.

7. Independent features: These variables are the input to the model. They are the information you
provide to make predictions. These variables are also called the predictor or input variables. These
features are not influenced by other variables, but they are used to determine the outcome. There
can be one or many independent features available in a dataset. For example, in predicting the
health of a patient, the independent features could be weight, age, blood pressure,
cholesterol, etc.

Dependent features: These variables are the outputs or results of the model. They are what you
are trying to predict. These variables are also called the response or outcome variables. The value
of the dependent feature is influenced by the independent feature. Typically, there is only one
dependent feature available, though you can have more than one in some cases. For example,
considering the same example of predicting the health of a patient, the dependent features will be
based on the input variables. In this case, the dependent feature will decide whether the patient
has the disease or not.
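
Continuing the patient-health example, here is a minimal sketch of separating the independent features (inputs, conventionally called X) from the dependent feature (output, y); the column names and values are invented for illustration.

    import pandas as pd

    # Invented patient records.
    data = pd.DataFrame({
        "weight":      [70, 85, 60, 95],
        "age":         [34, 51, 29, 62],
        "bp":          [120, 140, 110, 150],
        "cholesterol": [180, 240, 160, 260],
        "has_disease": [0, 1, 0, 1],   # the outcome we try to predict
    })

    X = data[["weight", "age", "bp", "cholesterol"]]  # independent features
    y = data["has_disease"]                           # dependent feature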

8. Graphical data interpretation involves the analysis of data represented in the form of graphs, like
bar graphs, line graphs, pie graphs, etc.

Bar Graph: In a bar graph, data is represented using vertical or horizontal bars.

Pie Charts: Pie charts have the shape of a pie, and each slice of the pie represents a portion of the
entire pie, allocated to each category. It is a circular chart divided into various sections (think of a
cake cut into slices). Each section of the pie chart is proportional to the corresponding value.


Unit 3: Math for AI
A. Short answer type questions.

1. Mathematics in AI is important for the following reasons:


Algorithms: Every AI algorithm uses some mathematical concepts to learn from data and predict
events based on historical data.
Data: An important pillar of AI models is data. All data on a computer is represented in the form of
numbers.

2. Patterns are the ordered sequence, relation, or arrangement between objects. Patterns exist all
around us. Examples: the multiplication table of any number, such as 2: 2, 4, 6, 8, 10, 12, 14.

3. Calculus helps in the training and optimisation of machine learning models. Calculus provides
different minimisation functions, which are used in AI algorithms, to find the best solutions to
problems.

4. Certain Event: An event for which you are sure that it will happen.

For example, the rising of the sun and moon is certain.


If a bag contains only red balls, then the event of drawing a red ball from the bag is certain.

5. Statistics has a wide range of applications across various fields, like healthcare and medicine,
business and economics, social science, engineering and technology, and so on. Statistics plays a
crucial role in predicting the performance of the sports team, analysing the opinions of the voters,
understanding the reading levels of students, etc.

6. Some of the applications of statistics are:

Disease Prediction: A very good example of disease prediction through statistics is the Covid-19
pandemic. The government analysed the areas where COVID cases were increasing or where the
vaccination drive needed to be improved.

Weather Forecasting: Weather forecasting is also done with the help of statistical analysis. The
future weather conditions are predicted with the help of data of past weather conditions.

7. Tally marks are used to represent data in graphical form in statistics. They help in observing or
exploring data. In a tally table, one vertical line is made for each of the first four counts, and the
fifth count is represented by a diagonal line across the previous four.

8. Probability is a way to tell how likely something is to happen. The following equation defines the
likelihood of an event happening:
Probability of an Event = Number of Favourable Outcomes / Total Number of Possible Outcomes
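
A quick worked example of this formula: when rolling a fair six-sided die, the favourable outcomes for 'an even number' are 2, 4, and 6, so the probability is 3/6 = 0.5. The same arithmetic in Python:

    # Rolling a fair die: favourable outcomes for "even number" are 2, 4, 6.
    favourable = len([2, 4, 6])
    total = 6
    print(favourable / total)   # 0.5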

9. Application of probability is found in a variety of fields. Some of these are:
• Probability can be used to determine the travel patterns and demand between different
origins and destinations.
• Probability can be used to estimate the demand for public transportation based on
population demographics, land use, and existing transport networks.

10. Role of probability in traffic estimation


Some instances of traffic estimation where probability plays an important role are as follows:
• People often use probability when they decide to drive to someplace.
• Based on the time of day, location in the city, weather conditions, etc., people tend to
make predictions about how bad traffic will be during a certain time.
• For example, if you think there is a 90% probability that traffic will be heavy from 6 pm to
7:30 pm in your vicinity, then you may decide to wait during that time.
• Probability can be used to estimate traffic volume at a particular road segment.

B. Long answer type questions.
1. Statistics in disaster management are used as follows:
Risk assessment: By analysing historical data, statistical models can assess risks of hazard
occurrence and alert the people who might be affected by them.

Early warning: By using weather forecasts, seismic activity, hydrological data, and satellite
imagery, statistical models can be used to predict the chances of the occurrence of hazards, which
may help authorities take preventive decisions.
Resource allocation and logistics: By analysing demographic data, population density,
infrastructure maps, and transportation networks, statistics can help in taking decisions regarding
the need for emergency supplies, equipment, and facilities.

Public health monitoring: Statistical modelling helps forecast disease transmission, assess the
effectiveness of treatments, and guide public health response strategies.
2.
No. of books borrowed Tally marks Frequency Count

1 IIII 4

2 IIII 4

3 III 3

4 II 2

5 II 2
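
A frequency table like the one above can also be computed in Python; the sketch below uses collections.Counter on an invented list of borrow records (groups of five are written as IIII/ since the diagonal stroke cannot be drawn in plain text).

    from collections import Counter

    # Invented data: number of books borrowed by each student.
    borrowed = [1, 2, 1, 3, 2, 4, 1, 5, 2, 3, 1, 2, 3, 4, 5]

    counts = Counter(borrowed)
    for books in sorted(counts):
        n = counts[books]
        tally = "IIII/ " * (n // 5) + "I" * (n % 5)  # five = four plus a stroke
        print(books, tally.strip(), n)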

3. Probability is used for weather forecasting to show the likelihood of rain, snowfall, clouds,
temperature, humidity, etc. Some important applications of probability in weather forecasting are
as follows:

• A 30% chance of rain means there is a 30% probability that at least 0.01 inches of rain will fall
somewhere in the forecast area.
• A forecast might state there is a 70% chance of temperatures falling between 65°F and
75°F.
• Probability is used to predict extreme weather events such as storms, tornadoes, and
floods, providing probabilities for the occurrence and potential impact of these events.
• For example, a forecast might predict a 10% chance of a tornado forming in a specific
region.
• Long-term climate prediction can be done with the help of probability, for example, the
probability of a rise in temperature or the probability of rainfall in a particular season
in a particular area.
• An advisory message can be circulated if there is an 80% probability that temperatures
will exceed a certain threshold.

4.
1. The Sun rises in the east: Certain
2. It will rain during the monsoon season: Likely
3. A person will live for 1000 years: Impossible
4. When I toss a coin, I will get head or tail: Equal Probability


Unit 4: Generative AI
A. Short answer type questions.
1. Generative AI is an advanced form of AI that creates new content in the form of images, text, and
audio, based on the data it has learned from.

2. Two benefits of Generative AI are:


• Creativity: Generative AI tools can generate different types of content in a very attractive and
creative manner.
• Efficiency: AI tools can generate summarised and precise information, which may be very
useful and save a lot of time.

3. The limitations of generative AI are as follows:


• Data bias: AI models are trained with the help of a huge amount of data. If the data used for
training is biased, then the results produced by the model may not be of any use or may be
inaccurate.

• Reuse of existing data: Generative AI cannot create entirely new concepts or ideas from
scratch. It remixes and rearranges existing information to generate output.

4. Use of ChatGPT and Gemini:


These are popular generative AI tools. They allow you to enter a prompt for content creation.
Nowadays, they are everyday AI companions. They let you search for specific information, generate
new text (such as emails and summaries), or create images based on text prompts you write. For
example, you can ask these tools to summarise a memo in two sentences, and they will provide a
concise written summary.
5. Artbreeder is an AI-powered platform that enables users to generate new images by combining
different generative adversarial network (GAN) models. Users can select and combine different GAN
models to create new and unique images. This platform enables users to create artwork, character
designs, landscapes, and more by combining and tweaking different elements.

6. Runway ML is a creative platform that offers a collection of pre-trained machine learning models
that can be used for many tasks, like image and video generation, style transfer, object detection,
and more. It provides an easy-to-use interface for building, training, and deploying various types of
generative models, including GANs, VAEs, and image classifiers. It enables users to incorporate AI
into their creative workflows without needing extensive programming knowledge.

7. Example of Supervised Learning: Suppose you have trained a machine (model) with the help of a
machine learning algorithm and images of animals, with each image labeled to indicate the animal it
depicts: 'Elephant', 'Camel', and 'Cow'. This means the model was trained using the supervised
machine learning method. After training, this model will be able to identify the animals in new images
that you provide as input, correctly labelling each one as 'Elephant', 'Camel', or 'Cow'.

8. A Generative Adversarial Network uses two neural networks that work in collaboration to generate
new data. These two neural networks are called the generator network and the discriminator
network. The task of the generator network is to produce new data, while the task of the
discriminator network is to analyse the generated data and provide feedback.

9. Variational Autoencoders (VAEs) are another class of generative models. VAEs learn the
distribution of data and then generate new data with it. They take data (images, text, etc.) and
condense it into a smaller, more manageable form. This compressed version captures the essence
of the data. Then, the VAEs can use this compressed form to generate entirely new data points that
resemble the originals, but with a twist of creativity. This makes VAEs useful for generating new
content that looks like the original data on which they were trained.

10. Autoencoders are neural networks that are trained to learn a compressed representation of data.
First, the data is compressed and then decompressed to restore it to its original form. They
generate highly realistic samples.
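
A minimal autoencoder sketch in Keras is shown below; the layer sizes and the synthetic data are assumptions for illustration only. The first Dense layer compresses each 64-value input down to 8 numbers, and the second reconstructs the original from that compressed form.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Synthetic data standing in for real inputs (e.g., flattened images).
    x = np.random.rand(1000, 64).astype("float32")

    autoencoder = keras.Sequential([
        layers.Input(shape=(64,)),
        layers.Dense(8, activation="relu"),      # encoder: compress
        layers.Dense(64, activation="sigmoid"),  # decoder: reconstruct
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)  # target = input

    reconstructed = autoencoder.predict(x[:1])   # decompressed version of x[0]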

B. Long answer type questions.

1. Supervised learning works on labelled datasets. It is a task-driven process where models learn to
make predictions or decisions based on input data and corresponding output labels. In a supervised
learning model, we divide the data into a training dataset and a test dataset. The training dataset is
used to train the model, while the test dataset is used to evaluate the model's accuracy in
predicting output.
Unsupervised learning works on unlabelled datasets. It is a data-driven process
that allows the algorithm to explore the data and find its own insights by identifying relationships,
patterns, and trends. Its goal is to find similarities and differences between data points.
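
A compact sketch contrasting the two approaches with scikit-learn; the tiny dataset and the choice of models (a decision tree for supervised learning, k-means for unsupervised learning) are assumptions for illustration.

    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    # Supervised: inputs with corresponding output labels.
    X = [[1, 1], [1, 2], [8, 8], [9, 8]]
    y = [0, 0, 1, 1]                          # labels guide the learning
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[9, 9]]))              # -> [1]

    # Unsupervised: same inputs, no labels; the algorithm groups them itself.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)                         # cluster assigned to each point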
2. The benefits of generative AI are as follows:
Creativity: Generative AI tools can generate different types of content in a very attractive and
creative manner. For example, content can be generated in the form of an image, video, or audio,
which is a better medium of communication as compared to textual information.
Efficiency: AI tools can generate summarised and precise information, which may be very useful
and save a lot of time.
Personalisation: Generative AI tools can generate content in the form of text, image, or video as per
the user's requirements. For example, if you imagine a scene with a red rose floating in a river, you
can generate an image of that scene.
Exploration: Generative AI can be used to explore new design spaces or optimise complex
systems, such as designing new medicines or improving industrial processes. It helps in developing
and strengthening an idea because it can draw what you imagine. Also, generative AI tools can be
used to generate huge datasets for training and testing other AI models.
Accessibility: Many generative AI tools are freely available and easy to use. By entering a prompt
into the tool, you get the content you ask for. For example, tools like GPT-3 can write articles,
generate marketing copy, and draft emails.

Scalability: Generative AI can quickly and efficiently produce large amounts of content, providing a
scalable solution for businesses and organisations that need high-volume content production.

3. Differences between generative AI and traditional AI are as follows:


Goal: Generative AI creates new content that resembles the training data, whereas conventional AI
analyses, processes, and classifies data.

Training: Generalised generative AI models, like ChatGPT, are prepared using a huge amount of data
and multiple complex Python libraries; a large amount of processing capacity is also required because
the data is very large. Conventional AI models can be prepared using less data, limited Python
libraries (TensorFlow, Keras, scikit-learn), and minimal processing capacity.

Output: Generative AI can produce fresh, innovative, and often unexpected output, while
conventional AI produces more predictable output based on existing data.

Applications: Applications of generative AI include generating music, literature, code, design,
images, videos, etc. Conventional AI is used for personalised applications such as assessing students'
performance in education, predicting product prices in business, and identifying diseases in the
medical field.

4. The following are the different applications of generative AI in music generation:


• Generative AI can be used to create melodies and harmonies.
• Generative AI can help songwriters by generating lyrics based on specific themes,
keywords, or phrases.
• AI can suggest instrumentation and orchestration arrangements for a specific song.
• Generative AI can be used during live performances to create real-time music.
• AI can create realistic environmental sounds.

5. Some of the major limitations of generative AI are:


Data bias: AI models are trained with the help of a huge amount of data. If the data used for
training is biased, then the results produced by the model may not be of any use or may be
inaccurate.

Uncertainty: Sometimes the results produced by generative AI models are unpredictable and
uncertain.

Reuse of existing data: Generative AI cannot create entirely new concepts or ideas from
scratch. It remixes and rearranges existing information to generate output.

Limited understanding of context: It cannot understand long contexts. The question that you
asked two hours ago may not be linked to the question that you are asking right now.

Ethical issues: The ability of generative AI to generate realistic content raises ethical concerns.

High computational demands: It is easy to use generative AI models, but it is very complex to
develop them because they require a huge amount of data and processing capacity.

6. While generative AI offers many benefits, several ethical considerations should be kept in mind
when using this technology.
Ownership: It is always a concern to determine who will be the owner of the content generated
by generative AI. This is particularly relevant in creative fields, such as music, literature, or art,
where generative AI can create original work, although it may be based on some existing
content.
Human autonomy: Uncontrolled use of generative AI may create a question mark on human
autonomy. As technology becomes more sophisticated, it may become increasingly difficult to
distinguish between content generated by humans and machines.
Biased outcome: If the data used for training is biased, then the generative AI model will also
generate biased outcomes. This may lead to inaccurate results.
Misuse of tools: There are many generative AI tools, like Deepfake, that can be used to generate
fake images, videos, and content that can be misused for criminal activity.
Privacy: Generative AI can potentially be used to generate sensitive personal information, such
as credit card numbers, social security numbers, or medical records. This could be used for
malicious purposes.

7. Generative Adversarial Networks (GANs): A Generative Adversarial Network uses two neural
networks that work in collaboration to generate new data: the generator network and the
discriminator network. The task of the generator network is to produce new data, while the task of
the discriminator network is to analyse the generated data and provide feedback. The generator
network keeps generating data until the discriminator network's feedback indicates that the
generated data resembles real data. Examples: Generating fake images, creating portraits of
non-existing people, converting images from day to night, generating images based on textual
descriptions, generating realistic videos, etc.
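
As a very small illustration of this generator/discriminator loop, here is a sketch in PyTorch that trains a toy GAN to produce numbers resembling samples from a normal distribution centred at 4; every choice here (layer sizes, learning rate, the data itself) is an assumption for demonstration, not a prescribed design.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                      nn.Linear(16, 1), nn.Sigmoid())                 # discriminator
    loss_fn = nn.BCELoss()
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

    for step in range(2000):
        real = torch.randn(32, 1) + 4.0        # "real" data samples
        fake = G(torch.randn(32, 8))           # generator's attempt

        # Discriminator feedback: real should score 1, generated data 0.
        d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
                  loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # Generator: adjust until its output is scored as real.
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()

    print(G(torch.randn(5, 8)).detach().flatten())  # values should be near 4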

Variational Autoencoders (VAEs): Another class of generative models is VAEs. VAEs learn the
distribution of data and then generate new data with it. They take data (images, text, etc.) and
condense it into a smaller, more manageable form. This compressed version captures the essence
of the data. Then, the VAEs can use this compressed form to generate entirely new data points
that resemble the originals, but with a twist of creativity. This makes VAEs useful for generating
new content that looks like the original data on which they were trained. Examples: Generating
new images like the given training set, image reconstruction, generating drafts for writers,
generating new sounds and music composition, etc.

8. Architecture: Generative AI can create innovative and complex architectural designs that may
not be easily conceived by human designers. AI can generate realistic 3D models and
visualisations of architectural designs, helping architects and clients better understand the
outcome.
Coding: Generative AI can analyse code and detect potential bugs or vulnerabilities, enhancing
the overall quality and security of software. AI-powered coding assistants can help learners
understand programming concepts and provide real-time feedback on their code.
Music: Generative AI is also being used to create new music and original compositions, and artists
can even collaborate with AI to produce innovative music. AI can suggest instrumentation and
orchestration arrangements for a specific song.