ML Lab File Vijay Kumar

The document outlines two experiments focused on Python programming, covering fundamental concepts such as classes, functions, data structures, loops, and exception handling in Experiment 1. Experiment 2 emphasizes the use of Python libraries like NumPy and Pandas for data analysis, including array operations, statistical measures, and DataFrame manipulations. The document provides code examples and explanations for various functionalities, demonstrating a comprehensive understanding of Python's capabilities.


EXPERIMENT - 1

AIM - To understand and demonstrate the fundamental functionalities of the Python programming language.

THEORY - Python is a high-level, interpreted programming language known for its simplicity
and readability. It is widely used in various fields such as web development, data analysis,
machine learning, automation, and more. Python's syntax is designed to be easy to read and
write, making it an excellent choice for beginners and experienced programmers alike.

1. Classes and Objects: Python is an object-oriented programming (OOP) language, which means it supports the concepts of classes and objects. A class is a blueprint for creating objects, which are instances of the class. Classes encapsulate data and the functions that operate on that data. Objects are the instances that hold the actual data and can use the class's methods.

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def greet(self):
        return f'Hello, my name is {self.name} and I am {self.age} years old.'

# Create an instance of the class
person = Person('Vijay', 19)
print(person.greet())

2. Functions: Functions are blocks of reusable code that perform a specific task. They
allow for modular and organized code, making it easier to manage and debug. Python
functions are defined using the def keyword followed by the function name and
parameters.

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# Using the functions
print(add(5, 3))       # Output: 8
print(subtract(5, 3))  # Output: 2

3. Data Structures: Python provides several built-in data structures to store and
manipulate data efficiently:
● Lists: Ordered collections of items that can be of different types. Lists are mutable,
meaning their elements can be changed.

# Creating a list
numbers = [1, 2, 3, 4, 5]

# Looping through the list
for number in numbers:
    print(number * 2)  # Output: 2, 4, 6, 8, 10

● Dictionaries: Unordered collections of key-value pairs. Each key in a dictionary is unique, and values can be accessed using the corresponding key.

# Creating a dictionary
student_grades = {'Alice': 90, 'Bob': 85, 'Charlie': 92}

# Accessing dictionary values
for student, grade in student_grades.items():
    print(f'{student} has a grade of {grade}.')

● Tuples: Tuples are similar to lists, but they are immutable, meaning their elements
cannot be changed once defined.

# Creating a tuple
fruits = ('apple', 'banana', 'cherry')

# Accessing elements
print(fruits[0]) # Output: apple

# Iterating over a tuple
for fruit in fruits:
    print(fruit)

# Output:
# apple
# banana
# cherry
● Sets: Sets are unordered collections of unique elements. They are useful for
membership testing and removing duplicates from a sequence.

# Creating a set
my_set = {1, 2, 3, 4, 5}

# Adding elements to a set
my_set.add(6)
print(my_set)  # Output: {1, 2, 3, 4, 5, 6}

# Removing elements from a set
my_set.remove(3)
print(my_set)  # Output: {1, 2, 4, 5, 6}

# Checking for membership
print(4 in my_set)  # Output: True

# Iterating over a set
for element in my_set:
    print(element)
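
As noted above, sets are also a quick way to remove duplicates from a sequence; a minimal sketch:

# Deduplicating a list by converting it to a set and back
values = [1, 2, 2, 3, 3, 3]
unique_values = list(set(values))
print(unique_values)  # Element order is not guaranteed, e.g. [1, 2, 3]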

● Queues: Queues follow the First-In-First-Out (FIFO) principle. We can use the deque
(double-ended queue) from the collections module to implement a queue.

from collections import deque

# Creating a queue
queue = deque()

# Adding elements to the queue
queue.append('first')
queue.append('second')
queue.append('third')

# Removing elements from the queue
print(queue.popleft())  # Output: first
print(queue.popleft())  # Output: second

# Checking the current state of the queue
print(queue)  # Output: deque(['third'])
4. Loops: Loops are used to iterate over sequences (such as lists) or perform a task
repeatedly. Python supports for and while loops. The for loop iterates over a
sequence, while the while loop continues until a specified condition is met.

# Using a for loop
for i in range(5):
    print(i)  # Output: 0, 1, 2, 3, 4

# Using a while loop
count = 0
while count < 5:
    print(count)  # Output: 0, 1, 2, 3, 4
    count += 1

5. Exception Handling: Exception handling is a mechanism to handle runtime errors gracefully. Python uses try, except, and finally blocks to catch and manage exceptions. This helps prevent the program from crashing due to unexpected errors and allows for proper error handling and resource management.

try:
    # Try to open a file
    with open('non_existent_file.txt', 'r') as file:
        content = file.read()
except FileNotFoundError:
    print('The file was not found.')

try:
    result = 10 / 0
except ZeroDivisionError:
    print('Cannot divide by zero.')
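
The finally block mentioned above runs whether or not an exception occurs, which makes it useful for cleanup; a minimal sketch:

try:
    result = 10 / 2
except ZeroDivisionError:
    print('Cannot divide by zero.')
finally:
    print('This always runs.')  # Executes whether or not an exception occurred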

EXPERIMENT - 2
AIM - To use Python libraries (Pandas, NumPy, Matplotlib, SciPy, and Scikit-Learn) to
load, clean, visualize, analyze, and make predictions on data, demonstrating a
straightforward data analysis workflow.

THEORY –

1. NumPy stands for "Numerical Python." It's a powerful library in Python for
numerical and scientific computing. At its core, NumPy provides support for arrays
(grids of values) and a collection of functions to operate on these arrays.

Creating Arrays - Arrays are the basic data structure in NumPy. They can be one-dimensional (like a list) or multi-dimensional (like a matrix).

import numpy as np

# Creating a 1-dimensional array
array_1d = np.array([1, 2, 3, 4, 5])
print("1D array:", array_1d)

# Creating a 2-dimensional array
array_2d = np.array([[1, 2, 3], [4, 5, 6]])
print("2D array:\n", array_2d)

OUTPUT –

Array Operations - NumPy provides functions to create arrays filled with zeros, ones,
or a range of numbers. These operations are useful for initializing arrays.

# Array of zeros
zeros_array = np.zeros((3, 3))
print("Zeros array:\n", zeros_array)

# Array of ones
ones_array = np.ones((2, 4))
print("Ones array:\n", ones_array)

# Array with a range of values
range_array = np.arange(0, 10, 2)
print("Range array:", range_array)

# Array with evenly spaced values
linspace_array = np.linspace(0, 1, 5)
print("Linspace array:", linspace_array)

OUTPUT –

Array Reshaping - Reshaping allows changing the shape of an array without changing its data. This is useful when you need a different dimensional structure for your data.

original_array = np.arange(12)
reshaped_array = original_array.reshape((3, 4))
print("Original array:", original_array)
print("Reshaped array:\n", reshaped_array)

OUTPUT –
Basic Arithmetic Operations - NumPy allows for element-wise arithmetic operations
on arrays, which means you can perform operations like addition, subtraction,
multiplication, and division on corresponding elements of arrays.

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Element-wise addition
print("Addition:", a + b)

# Element-wise subtraction
print("Subtraction:", a - b)

# Element-wise multiplication
print("Multiplication:", a * b)

# Element-wise division
print("Division:", a / b)

OUTPUT –
Statistical Operations - NumPy provides functions to calculate statistical measures
such as mean, sum, standard deviation, minimum, and maximum on arrays.

stats_array = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

print("Mean:", np.mean(stats_array))
print("Sum:", np.sum(stats_array))
print("Standard Deviation:", np.std(stats_array))
print("Minimum:", np.min(stats_array))
print("Maximum:", np.max(stats_array))

OUTPUT –

Indexing and Slicing - Indexing and slicing in NumPy allow you to access and modify specific elements or subsets of an array. This is similar to list indexing and slicing in Python but more powerful for multi-dimensional arrays.

array = np.array([10, 20, 30, 40, 50])

# Indexing
print("Element at index 2:", array[2])

# Slicing
print("Elements from index 1 to 3:", array[1:4])

OUTPUT –
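
The same syntax extends to multi-dimensional arrays, which is where NumPy indexing goes beyond plain Python lists; a minimal sketch (assuming np is imported as above):

matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Row 1, column 2
print("Element at (1, 2):", matrix[1, 2])  # Output: 6

# First two rows, columns 1 onward
print("Sub-array:\n", matrix[:2, 1:])  # Output: [[2 3] [5 6]]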

Broadcasting - Broadcasting allows NumPy to perform element-wise operations on arrays of different shapes. This feature eliminates the need to explicitly reshape arrays for compatible operations.

array1 = np.array([1, 2, 3])
array2 = np.array([[4], [5], [6]])

# Broadcasting addition
result = array1 + array2
print("Broadcasting result:\n", result)

OUTPUT –

Linear Algebra - NumPy supports various linear algebra operations, such as matrix
multiplication, transpose, inverse, and determinant. These operations are essential for
scientific computing and machine learning.

# Creating matrices
matrix1 = np.array([[1, 2], [3, 4]])
matrix2 = np.array([[5, 6], [7, 8]])

# Matrix multiplication
matrix_product = np.dot(matrix1, matrix2)
print("Matrix product:\n", matrix_product)

# Transpose of a matrix
transpose_matrix = np.transpose(matrix1)
print("Transpose of matrix1:\n", transpose_matrix)

OUTPUT –
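
The inverse and determinant mentioned above are available through np.linalg; a minimal sketch reusing matrix1 from the block above:

# Inverse of a matrix
inverse_matrix = np.linalg.inv(matrix1)
print("Inverse of matrix1:\n", inverse_matrix)

# Determinant of a matrix (about -2.0 for matrix1)
determinant = np.linalg.det(matrix1)
print("Determinant of matrix1:", determinant)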
2. Pandas provides two primary data structures: Series (1-dimensional) and
DataFrame (2-dimensional). These structures are optimized for handling and
analyzing data, making it easier to perform operations like filtering, grouping, and
statistical analysis.

1. Series: A one-dimensional array-like object that can hold any data type (e.g.,
integers, strings, floats). It’s similar to a column in a spreadsheet.

import pandas as pd

# Creating a Series
series = pd.Series([10, 20, 30, 40, 50])
print("Series:\n", series)

# Creating a DataFrame
data = {
    'Name': ['Alice', 'Bob', 'Charlie', 'David'],
    'Age': [25, 30, 35, 40],
    'City': ['New York', 'Los Angeles', 'Chicago', 'Houston']
}
df = pd.DataFrame(data)
print("DataFrame:\n", df)

2. DataFrame: A two-dimensional, size-mutable, and potentially heterogeneous tabular data structure with labeled axes (rows and columns). It's like an Excel spreadsheet or a SQL table.

# Selecting a column
print("Selecting 'Age' column:\n", df['Age'])

# Adding a new column
df['Salary'] = [50000, 60000, 70000, 80000]
print("DataFrame with new 'Salary' column:\n", df)

# Filtering rows based on a condition
filtered_df = df[df['Age'] > 30]
print("Filtered DataFrame (Age > 30):\n", filtered_df)

Selecting Columns and Rows

# Selecting a column
print("Selecting 'Age' column:\n", df['Age'])

# Selecting multiple columns
print("Selecting 'Name' and 'City' columns:\n", df[['Name', 'City']])

# Selecting a row by index
print("Selecting the first row:\n", df.iloc[0])

# Selecting rows by condition
print("Selecting rows where Age > 30:\n", df[df['Age'] > 30])

OUTPUT –

This part demonstrates how to select specific columns and rows. You can select
single or multiple columns and filter rows based on conditions using boolean indexing.

Adding, Removing, and Handling Missing Data

# Adding a new column
df['Salary'] = [50000, 60000, 70000, 80000]
print("DataFrame with new 'Salary' column:\n", df)

# Removing a column
df = df.drop(columns=['Salary'])
print("DataFrame after removing 'Salary' column:\n", df)

# Adding NaN values
df_with_nan = df.copy()
df_with_nan.loc[1, 'Age'] = None
print("DataFrame with NaN:\n", df_with_nan)

# Handling missing values
# Filling NaN with a specific value
df_filled = df_with_nan.fillna(0)
print("DataFrame after filling NaN with 0:\n", df_filled)

# Dropping rows with NaN values
df_dropped = df_with_nan.dropna()
print("DataFrame after dropping rows with NaN:\n", df_dropped)

Here, we cover adding and removing columns, and handling missing data. You can
introduce NaN (missing) values, fill them with a specific value, or drop rows containing
NaN values.

OUTPUT –
Sorting and Grouping Data

# Sorting by a single column
df_sorted = df.sort_values(by='Age')
print("DataFrame sorted by Age:\n", df_sorted)

# Sorting by multiple columns
df_sorted_multi = df.sort_values(by=['City', 'Age'], ascending=[True, False])
print("DataFrame sorted by City and Age:\n", df_sorted_multi)

# Grouping by a single column (averaging the numeric columns only)
grouped_df = df.groupby('City').mean(numeric_only=True)
print("Grouped DataFrame (mean by City):\n", grouped_df)

# Grouping by multiple columns
grouped_multi_df = df.groupby(['City', 'Age']).count()
print("Grouped DataFrame by City and Age:\n", grouped_multi_df)

OUTPUT –

This section shows how to sort and group data. You can sort data by single or multiple
columns and group data to calculate aggregate functions like mean or count.

Merging and Joining DataFrames

# Creating another DataFrame to merge
data2 = {
    'Name': ['Alice', 'Bob', 'Charlie', 'David'],
    'Gender': ['F', 'M', 'M', 'M']
}
df2 = pd.DataFrame(data2)

# Merging DataFrames
df_merged = pd.merge(df, df2, on='Name')
print("Merged DataFrame:\n", df_merged)

# Joining DataFrames
df3 = df.set_index('Name')
df4 = df2.set_index('Name')
df_joined = df3.join(df4)
print("Joined DataFrame:\n", df_joined)

OUTPUT –

Here, we cover merging and joining DataFrames. You can merge DataFrames based
on a common column using the merge method, or join DataFrames using the join
method when they share a common index.
Reading and Writing Data

# Reading data from a CSV file
df_from_csv = pd.read_csv('path_to_file.csv')
print("DataFrame from CSV:\n", df_from_csv)

# Writing data to a CSV file
df.to_csv('output.csv', index=False)
print("DataFrame written to 'output.csv'")

This part demonstrates how to read data from and write data to CSV files using
read_csv and to_csv methods. These methods are useful for importing and exporting
data.

3. Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python. It is particularly useful for generating plots, charts, and figures to help visualize data. Here are a few key components of Matplotlib:

1. Figure: The overall window or page that everything is drawn on. It can contain
multiple plots.

2. Axes: The area on which data is plotted. A single figure can have multiple axes (plots) arranged in a grid, as sketched below.

3. Plot: The actual visual representation of data, such as a line plot, scatter plot,
bar chart, etc.
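
A single figure containing multiple axes, as described in points 1 and 2, can be created with plt.subplots; a minimal sketch (the data here is illustrative):

import matplotlib.pyplot as plt

# One figure containing a 1x2 grid of axes
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot([1, 2, 3], [1, 4, 9])
ax1.set_title('Left plot')
ax2.bar(['A', 'B'], [3, 5])
ax2.set_title('Right plot')
plt.show()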

Creating a Simple Line Plot –

import matplotlib.pyplot as plt

# Creating data
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]

# Creating a line plot
plt.plot(x, y)
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Simple Line Plot')
plt.show()
This code creates a basic line plot with x and y data points. The plt.plot() function is
used to create the plot, and plt.show() displays it.

OUTPUT –

Creating a Scatter Plot

# Creating data
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]

# Creating a scatter plot
plt.scatter(x, y)
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Simple Scatter Plot')
plt.show()

A scatter plot is created using the plt.scatter() function. It is useful for visualizing the
relationship between two variables.

OUTPUT –
Creating a Bar Chart
# Creating data
categories = ['A', 'B', 'C', 'D']
values = [4, 7, 1, 8]

# Creating a bar chart
plt.bar(categories, values)
plt.xlabel('Categories')
plt.ylabel('Values')
plt.title('Simple Bar Chart')
plt.show()

A bar chart is created using the plt.bar() function. It is useful for comparing different
categories of data.

OUTPUT –
Creating a Histogram
# Creating data
data = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5]

# Creating a histogram
plt.hist(data, bins=5)
plt.xlabel('Data')
plt.ylabel('Frequency')
plt.title('Simple Histogram')
plt.show()

A histogram is created using the plt.hist() function. It is useful for visualizing the distribution of
a dataset.

OUTPUT –
4. SciPy stands for "Scientific Python" and is an open-source Python library used for scientific
and technical computing. It builds on NumPy and provides a large collection of mathematical
algorithms and convenience functions, making it easier to perform scientific and engineering
tasks. Here are a few key components of SciPy:
1. Linear Algebra: Provides functions for matrix operations, solving linear systems,
eigenvalue problems, and more.
2. Optimization: Contains functions for finding the minimum or maximum of functions
(optimization), including linear programming and curve fitting.
3. Integration: Offers methods for calculating integrals, including numerical integration and
ordinary differential equations (ODE) solvers.
4. Statistics: Includes functions for statistical distributions, hypothesis testing, and
descriptive statistics.
5. Signal Processing: Provides tools for filtering, signal analysis, and Fourier transforms.

Linear Algebra
import numpy as np
from scipy import linalg

# Creating a matrix
A = np.array([[1, 2], [3, 4]])

# Computing the determinant
det = linalg.det(A)
print("Determinant:", det)

# Solving a linear system of equations
b = np.array([5, 6])
x = linalg.solve(A, b)
print("Solution:", x)

This code demonstrates how to compute the determinant of a matrix and solve a linear system of equations using SciPy's linear algebra module.

OUTPUT –

Optimization
import numpy as np
from scipy.optimize import minimize

# Defining the objective function
def objective(x):
    return x**2 + 5*np.sin(x)

# Finding the minimum
result = minimize(objective, x0=0)
print("Minimum:", result.x)

This code demonstrates how to find the minimum of a function using SciPy's optimization
module.

OUTPUT –
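
The components list above also mentions curve fitting as part of the optimization module; a minimal sketch using scipy.optimize.curve_fit (the linear model and sample data here are illustrative):

import numpy as np
from scipy.optimize import curve_fit

# Model: a straight line with slope a and intercept b
def line(x, a, b):
    return a * x + b

x_data = np.array([0, 1, 2, 3, 4])
y_data = np.array([1.1, 2.9, 5.2, 7.1, 8.8])

# Fitting the model to the data
params, covariance = curve_fit(line, x_data, y_data)
print("Fitted slope and intercept:", params)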

Integration
from scipy.integrate import quad

# Defining the function to integrate
def integrand(x):
    return x**2

# Performing the integration
integral, error = quad(integrand, 0, 1)
print("Integral:", integral)
This code demonstrates how to perform numerical integration using SciPy's integration module.

OUTPUT –
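
The integration module also includes the ODE solvers mentioned above; a minimal sketch using solve_ivp on the illustrative equation dy/dt = -2y:

from scipy.integrate import solve_ivp

# Defining the ODE dy/dt = -2y
def dydt(t, y):
    return -2 * y

# Solving from t = 0 to t = 5 with initial condition y(0) = 1
solution = solve_ivp(dydt, [0, 5], [1])
print("y at the final time:", solution.y[0, -1])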
Statistics
import numpy as np
from scipy import stats

# Creating a dataset
data = np.array([1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5])

# Computing descriptive statistics
mean = np.mean(data)
std_dev = np.std(data)
median = np.median(data)
print("Mean:", mean)
print("Standard Deviation:", std_dev)
print("Median:", median)

# Performing a one-sample t-test against a mean of 3
t_stat, p_value = stats.ttest_1samp(data, 3)
print("T-statistic:", t_stat)
print("P-value:", p_value)

This code demonstrates how to compute descriptive statistics and perform a t-test using SciPy's
statistics module.

OUTPUT –

Signal Processing
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

# Creating a signal (named sig so it does not shadow the signal module)
t = np.linspace(0, 1, 500)
sig = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 120 * t)

# Applying a Butterworth filter
b, a = signal.butter(3, 0.05)
filtered_signal = signal.filtfilt(b, a, sig)

plt.plot(t, sig, label='Original Signal')
plt.plot(t, filtered_signal, label='Filtered Signal')
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.legend()
plt.show()

This code demonstrates how to create and filter a signal using SciPy's signal processing
module.

OUTPUT –

5. Scikit-learn is an open-source Python library for machine learning. It is built on NumPy, SciPy, and Matplotlib and provides simple and efficient tools for data analysis and modeling. Here are a few key components of scikit-learn:
1. Supervised Learning: Involves training a model on a labeled dataset, meaning the
input data is paired with the correct output. Examples include classification and
regression.
2. Unsupervised Learning: Involves training a model on data without labeled responses.
Examples include clustering and dimensionality reduction.
3. Model Selection and Evaluation: Tools for evaluating and comparing different models,
including cross-validation and various metrics.
4. Preprocessing: Functions for feature extraction, normalization, and data transformation
to prepare data for modeling.

Supervised Learning: Classification


from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report, accuracy_score

# Load the dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Standardize features
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Train a Support Vector Machine (SVM) classifier
clf = SVC(kernel='linear')
clf.fit(X_train, y_train)

# Make predictions
y_pred = clf.predict(X_test)

# Evaluate the model
print("Accuracy:", accuracy_score(y_test, y_pred))
print("Classification Report:\n", classification_report(y_test, y_pred))
This code demonstrates how to load the Iris dataset, preprocess the data, train a Support
Vector Machine (SVM) classifier, make predictions, and evaluate the model.

OUTPUT –
Unsupervised Learning: Clustering

from sklearn import datasets
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Load the dataset
iris = datasets.load_iris()
X = iris.data

# Train a KMeans clustering model
kmeans = KMeans(n_clusters=3, random_state=42)
kmeans.fit(X)

# Plot the clusters
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, cmap='viridis')
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.title('KMeans Clustering')
plt.show()

This code demonstrates how to load the Iris dataset, train a KMeans clustering model, and
visualize the clusters.

OUTPUT –
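
The components list above also mentions dimensionality reduction as an unsupervised technique; a minimal PCA sketch on the same Iris data (reusing X from the block above):

from sklearn.decomposition import PCA

# Reduce the four Iris features to two principal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
print("Reduced shape:", X_reduced.shape)  # (150, 2)
print("Explained variance ratio:", pca.explained_variance_ratio_)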
Model Selection and Evaluation: Cross-Validation
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# Load the dataset
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Evaluate a Random Forest classifier with 5-fold cross-validation
clf = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(clf, X, y, cv=5)

# Print the cross-validation scores
print("Cross-Validation Scores:", scores)
print("Mean Cross-Validation Score:", scores.mean())

This code demonstrates how to use cross-validation to evaluate the performance of a Random
Forest classifier on the Iris dataset.

OUTPUT –
Preprocessing: Feature Scaling
from sklearn.preprocessing import MinMaxScaler
import numpy as np

# Create a sample dataset
data = np.array([[1, 2], [2, 3], [3, 4], [4, 5]])

# Apply Min-Max scaling
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(data)

# Print the scaled data
print("Scaled Data:\n", scaled_data)

This code demonstrates how to apply Min-Max scaling to a sample dataset to normalize the
features.

OUTPUT –
