
Project Report: 2023-2024 Gesture Controlled Audio System

Chapter 1

INTRODUCTION
In the modern era, human-computer interaction (HCI) has evolved significantly, extending
beyond traditional input devices such as keyboards, mice, and touchscreens to more natural
and intuitive methods of control, such as voice commands and gesture recognition. Gesture
recognition technology stands at the forefront of this transformation, offering a seamless and
immersive way to interact with electronic systems. This project report presents the development
of a "Gesture Controlled Audio System," an innovative approach that leverages hand gestures
to manage audio playback and control without physical contact. The system aims to provide an
intuitive, hygienic, and accessible interface, enhancing the overall user experience and opening
new possibilities for human-computer interaction.
Gesture recognition technology has gained significant traction in recent years, driven
by advancements in computer vision, machine learning, and sensor technology. The ability to
control devices through simple hand movements is not only futuristic but also practical in
various applications, from gaming and virtual reality to automotive interfaces and home
automation. The motivation behind developing a gesture-controlled audio system stems from
the desire to create a more natural and engaging way to interact with audio devices. By utilizing
gestures, users can control playback, adjust volume, and navigate tracks without touching any
physical buttons or screens. This touchless interaction is particularly advantageous in
environments where hygiene is a concern, such as public spaces or during the COVID-19
pandemic, and for individuals with physical disabilities who may find traditional controls
challenging to use. Gesture-controlled audio systems revolutionize the way we interact with
audio devices by enabling hands-free control through intuitive gestures. Leveraging cutting-edge gesture recognition technology, these systems interpret users' hand movements or body
gestures in real-time, translating them into commands for playback, volume adjustment, track
navigation, and more. By eliminating the need for physical buttons or touchscreens, gesture-
controlled audio systems offer enhanced convenience, accessibility, and safety in various
scenarios, including driving, exercising, or multitasking. Moreover, they represent a
convergence of human-computer interaction and artificial intelligence, providing more natural
and intuitive interfaces for users. This technology holds promise across diverse industries, from
automotive and consumer electronics to healthcare and entertainment, as it continues to evolve
and integrate with other emerging technologies. In this report, we explore the principles,
implementation, applications, and future prospects of gesture-controlled audio systems,
unveiling their potential to transform the audio experience and redefine human-machine
interaction.

1.1 Objective of the proposed system

The gesture-controlled audio system has two main objectives:

• To design and integrate a robust circuitry system that combines flex sensors, an MP3
module, an audio amplifier, and a microphone with an Arduino microcontroller.

• To develop a comprehensive software program that governs sensor inputs, audio playback,
and volume control, ensuring seamless integration and functionality.

1.2 Problem statement


The Gesture-Controlled Audio System project aims to revolutionize the way users interact with
audio devices by providing a hands-free and intuitive control interface. Traditional methods of
audio control often involve physical buttons or remote controls, which can be cumbersome and
limiting. This project addresses the need for a more natural and immersive interaction
experience by developing a system that recognizes hand gestures to control audio playback.
Challenges include developing robust gesture recognition algorithms capable of accurately
interpreting a variety of gestures, ensuring real-time processing to minimize latency, designing
an intuitive user interface for seamless interaction, integrating hardware components
effectively, and providing comprehensive documentation for replication. By overcoming these
challenges, the Gesture-Controlled Audio System will offer users a novel and engaging way to
interact with audio devices in various environments, such as home entertainment systems,
public events, and interactive installations.

1.3 Motivation of the project


The motivation behind the Gesture-Controlled Audio System project stems from the desire to
explore innovative ways of interacting with audio devices. Traditional methods of audio
control, such as buttons or voice commands, can be limiting or cumbersome in certain
situations. By harnessing the power of gesture recognition technology, this project seeks to
offer users a more intuitive and immersive audio experience. Hand gestures are a natural and
universal form of communication, making them an ideal interface for controlling audio
playback. Additionally, incorporating wireless communication and microcontroller technology
adds versatility and mobility to the system, allowing users to interact with their audio devices
from a distance. Ultimately, the goal of this project is to push the boundaries of audio control
systems, providing users with a seamless and enjoyable way to interact with their favorite music
and audio content.


Chapter 2

LITERATURE SURVEY
M. Alagu Sundaram et al. [1] presented a review that provides an in-depth analysis of
various hand gesture recognition techniques, including vision-based methods using cameras,
sensor-based approaches using wearable devices, and hybrid methods combining multiple
sensors. It discusses their applications across diverse fields such as human-computer
interaction, virtual reality, and robotics.
J. Li et al. [2] proposed a model that presents a real-time hand gesture recognition
system based on convolutional neural networks (CNNs). The system achieves high accuracy in
recognizing dynamic hand gestures captured by depth sensors, demonstrating its potential for
applications in human-computer interaction.
K. Saroha et al. [3] proposed a model that explores the use of machine learning
techniques, including deep learning algorithms, for gesture recognition applications. It
discusses various datasets, algorithms, and evaluation metrics commonly used in gesture
recognition research, highlighting recent advancements and challenges in the field.
S. Velázquez et al. [4] proposed a model that focuses on gesture recognition systems
designed for ambient assisted living (AAL) environments to support elderly and disabled
individuals. It discusses the importance of non-intrusive interaction techniques and presents
state-of-the-art approaches for gesture recognition in AAL scenarios.
A. Gupta et al. [5] proposed a model that provides an overview of gesture-controlled
music playback systems, including both commercial products and research prototypes. It
discusses various technologies and methods used for gesture recognition and audio control,
highlighting their applications in entertainment and multimedia environments.
Georgi et al. [6] proposed a model in which the combination of IMU and EMG sensors
captures both motion data and muscle activity, providing a robust dataset for gesture
classification. The IMU sensors, comprising accelerometers and gyroscopes, track the
orientation and movement of the hand. Concurrently, the EMG sensors monitor muscle
contractions, offering insights into the underlying muscle activities driving these movements.
This dual-sensing approach addresses the limitations of using a single type of sensor, such as
IMU's susceptibility to drift and EMG's sensitivity to noise.
Ariyanto et al. [7] proposed a model that focuses on the use of EMG sensors to capture
the electrical activity produced by muscle contractions during finger movements. The collected
EMG data serve as input to an ANN, which is trained to recognize specific movement patterns.
This method addresses common challenges in EMG-based recognition systems, such as signal
variability and noise, by utilizing the ANN's ability to learn complex patterns and generalize
across different data sets.
The authors conducted extensive experiments to evaluate the performance of their
proposed system. They report high accuracy rates in recognizing various finger movement
patterns, demonstrating the effectiveness of using ANNs for EMG signal classification. This
approach shows promise for applications in prosthetics, where accurate interpretation of muscle
signals is critical for the control of artificial limbs.
Jorgensen et al. [8] proposed a model that employs neural network algorithms to process
and interpret the EMG and EPG signals. Their experiments demonstrate that it is possible to
achieve accurate speech recognition using these sub-auditory signals. The neural networks are
trained to identify specific speech patterns, allowing the system to recognize words and phrases
from the muscle and contact data alone.
This research has significant implications for the development of silent communication
systems and assistive technologies. For instance, it can be used in military or covert operations
where silent communication is crucial, or in assistive devices for individuals who cannot speak
audibly. The ability to recognize speech without sound opens new possibilities for human-
computer interaction and accessibility.
Zhang et al. [9] proposed a model that develops an algorithm to process and fuse
data from the accelerometer and EMG sensors. Their system includes preprocessing steps to
filter noise and normalize the sensor signals, followed by feature extraction to capture the
essential characteristics of each gesture. The extracted features are then fed into a machine
learning classifier to identify and categorize the gestures.

Experimental results presented in the paper demonstrate the effectiveness of the
proposed framework. The authors report high recognition rates for a variety of hand gestures,
highlighting the system's potential for real-world applications. This framework is particularly
relevant for human-computer interaction, where accurate and responsive gesture recognition is
critical for developing intuitive and natural user interfaces.
Dixit S. K. et al. [10] presented their experimental setup and
methodology, which involves integrating the flex sensors and electronic compasses with the
robot's control system. They describe the wireless communication protocols used to transmit
gesture commands from the sensors to the robot, enabling seamless interaction between the user
and the robotic system.
Through their implementation, Dixit and Shingi demonstrate the feasibility and
effectiveness of hand gesture-based control for material handling robots. They report successful
automation of various tasks, such as object manipulation and navigation, showcasing the
potential of their approach for streamlining industrial processes and improving efficiency.


Chapter 3

PROPOSED SYSTEM
The primary focus of this project is to design and implement a gesture-controlled audio system
capable of generating distinct sounds in response to user gestures. The system comprises ten
flex sensors, an Arduino microcontroller, an MP3-TF-16P MP3 SD Card Module, a PAM8403
audio amplifier, an electret microphone (MIC), and a 6W speaker. Each flex sensor is connected
to the Arduino, and when a user bends a sensor, a specific pre-recorded sound stored on the
MP3 module is played through the speaker. In addition to gesture-based control, the system
incorporates a microphone to detect loud sounds. When the microphone detects a sound above
a predefined threshold, it triggers a specific audio file to play through the speaker. The audio
amplifier enhances the audio output, ensuring a clear and audible sound experience for the user.
The MP3-TF-16P module serves as the central audio source for the system, storing a variety of
audio files corresponding to different gestures and loud sounds. The Arduino microcontroller
orchestrates the playback of these audio files based on the sensor inputs and microphone
readings, creating a dynamic and interactive audio environment.
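As a rough sketch of this control flow (not the project's final firmware: the pin assignments, thresholds, and the placeholder playTrack() helper below are illustrative assumptions), a minimal Arduino loop pairing one flex sensor with the microphone might look like this:

```cpp
const int FLEX_PIN = A0;         // flex-sensor voltage-divider output (assumed wiring)
const int MIC_PIN  = A1;         // microphone amplifier output (assumed wiring)
const int FLEX_THRESHOLD = 400;  // ADC level indicating a bent sensor; needs calibration
const int MIC_THRESHOLD  = 600;  // ADC level indicating a loud sound; needs calibration

// Placeholder: in the real system this would send the MP3 module's serial
// "play track" command (a wiring-level example appears in Section 4.1.3).
void playTrack(unsigned int track) {
  Serial.print("play track ");
  Serial.println(track);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Gesture path: a bent flex sensor selects its pre-recorded track.
  if (analogRead(FLEX_PIN) > FLEX_THRESHOLD) {
    playTrack(1);   // track assigned to this gesture
    delay(1000);    // crude debounce so one bend triggers one playback
  }
  // Sound path: a sound above the predefined threshold triggers its own track.
  if (analogRead(MIC_PIN) > MIC_THRESHOLD) {
    playTrack(2);   // track assigned to the loud-sound event
    delay(1000);
  }
}
```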

Objectives to be fulfilled:

• To design and integrate a robust circuitry system that combines flex sensors, an MP3
module, an audio amplifier, and a microphone with an Arduino microcontroller.

• To develop a comprehensive software program that governs sensor inputs, audio playback,
and volume control, ensuring seamless integration and functionality.

• To implement advanced gesture recognition algorithms that accurately detect, interpret, and respond to user gestures captured by the flex sensors, facilitating precise and responsive control.

• To test, evaluate, and optimize the system to ensure reliability, accuracy, and user-
friendliness, validating its performance across diverse scenarios and applications.


Figure 3.1: Block diagram of working of the system


Chapter 4

HARDWARE AND SOFTWARE REQUIREMENTS


4.1 Hardware requirements
4.1.1 Flex Sensors

A flex sensor is a type of sensor that acts as a variable resistor, changing its resistance based on
the amount of bend or flex applied to it. Typically made from a flexible substrate coated with a
conductive material, the sensor exhibits an increase in resistance as it bends. This change in
resistance can be measured and translated into a quantifiable signal, allowing the detection and
measurement of bending angles or motion.
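In a typical readout circuit, the flex sensor forms one leg of a voltage divider with a fixed resistor, and the divider's midpoint feeds an analog input. As a worked example (assuming a 5 V supply and a 10 kΩ fixed resistor to ground, values not specified here but consistent with the sensor figures quoted in Chapter 5):

V_out = V_cc × R_fixed / (R_fixed + R_flex)

With the sensor flat at 10 kΩ, V_out = 5 × 10/(10 + 10) = 2.5 V (about 512 on a 10-bit ADC); bent to 110 kΩ, V_out = 5 × 10/(10 + 110) ≈ 0.42 V (about 85), so the bend registers as a clear drop in the reading.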

Flex sensors are widely used in various applications due to their versatility and ease of
integration. In wearable technology, flex sensors are often embedded in gloves, sleeves, or other
garments to capture precise movements of the body. For instance, in smart gloves, they can
track the bending of fingers, enabling applications in virtual reality (VR) and augmented reality
(AR) where hand gestures control virtual objects or interfaces. This application is particularly
valuable in gaming, providing a more immersive and interactive experience by allowing users
to interact naturally with virtual environments.

In the field of robotics, flex sensors are crucial for enhancing the dexterity and responsiveness
of robotic limbs. By providing real-time feedback on the position and movement of joints, these
sensors enable robots to perform tasks with greater precision and adaptability. This capability
is essential in industries where robots handle delicate or complex operations, such as in medical
surgery or automated manufacturing.

Flex sensors also play a significant role in assistive technology for individuals with disabilities.
They can be used to develop gesture-controlled devices, such as prosthetics or communication
aids, that respond to the user's movements, thereby improving accessibility and quality of life.


Figure 4.1: Flex Sensor

4.1.2 Arduino Microcontroller

The Arduino microcontroller is a versatile and user-friendly open-source platform that
combines hardware and software to create a wide array of electronic projects. At its core,
Arduino consists of a microcontroller board—such as the popular Arduino Uno—that can be
programmed using the Arduino Integrated Development Environment (IDE) to perform various
tasks by interfacing with sensors, actuators, and other components.
One of the key strengths of Arduino is its accessibility, making it an ideal tool for both beginners
and experienced developers. The hardware is designed to be simple and easy to use, with digital
and analog input/output pins that can be connected to various modules. The microcontroller on
the board, typically an AVR or ARM-based chip, executes the instructions written in the
Arduino programming language, which is based on C/C++.
The Arduino IDE further enhances accessibility by providing a straightforward environment
for writing, debugging, and uploading code to the microcontroller. It includes a vast library of
pre-written code for a wide range of functions, enabling users to quickly implement complex
tasks without starting from scratch. Additionally, the vibrant Arduino community has
contributed countless tutorials, projects, and libraries, offering extensive support and resources.
Arduino's applications span numerous fields. In education, it serves as a valuable tool for
teaching electronics and programming. Hobbyists use Arduino to build projects such as home
automation systems, wearable devices, and interactive art installations. In professional contexts,
Arduino is utilized for rapid prototyping, allowing engineers and designers to test and iterate
on their concepts quickly and cost-effectively.
Furthermore, the platform's open-source nature encourages innovation and customization.
Users can modify existing designs or create their own hardware extensions, fostering a culture
of sharing and collaboration. Overall, Arduino microcontrollers have revolutionized the way
people engage with electronics, democratizing technology and empowering individuals to turn
their ideas into reality.

Figure 4.2: Arduino Uno R3

4.1.3 MP3-TF-16P MP3 SD Card Module


The MP3-TF-16P MP3 SD Card Module is a compact and versatile audio playback device that
supports microSD cards for storing MP3 files. When integrated with a hand gesture recognition
system, this module enables touchless control of audio playback, offering a modern and
intuitive user experience.
A gesture recognition front end (flex sensors in this project; cameras or infrared detectors in
other systems) interprets user movements and translates them into commands such as play,
pause, skip track, or adjust volume. These commands are then sent to the MP3-TF-16P module,
which executes the corresponding audio function.
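As a sketch of how such commands can be issued from an Arduino, the following uses the 9600-baud serial protocol commonly documented for DFPlayer-compatible modules such as the MP3-TF-16P; the pin choices are assumptions, and the command bytes should be verified against the specific module's datasheet.

```cpp
#include <SoftwareSerial.h>

// RX/TX pins are assumptions; match them to your wiring.
SoftwareSerial mp3Serial(10, 11);  // Arduino RX, TX -> module TX, RX

// Send one 10-byte command frame in the serial format commonly
// documented for DFPlayer-compatible MP3 modules.
void sendCommand(uint8_t cmd, uint16_t param) {
  uint16_t checksum = 0 - (0xFF + 0x06 + cmd + 0x00 + (param >> 8) + (param & 0xFF));
  uint8_t frame[10] = {
    0x7E, 0xFF, 0x06, cmd, 0x00,
    (uint8_t)(param >> 8), (uint8_t)(param & 0xFF),
    (uint8_t)(checksum >> 8), (uint8_t)(checksum & 0xFF), 0xEF
  };
  mp3Serial.write(frame, sizeof(frame));
}

void setup() {
  mp3Serial.begin(9600);   // the module's default baud rate
  delay(500);              // allow the module to initialize the microSD card
  sendCommand(0x06, 20);   // set volume (range 0-30)
  sendCommand(0x03, 1);    // play track 0001.mp3 from the microSD card
}

void loop() {}
```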


This setup is particularly beneficial in applications where hygiene is paramount or where users
have physical disabilities that make traditional controls challenging. The combination of
gesture recognition technology with the MP3-TF-16P module results in an innovative audio
system that enhances accessibility, convenience, and user interaction.

Figure 4.3: MP3-TF-16P MP3 SD Card Module

4.1.4 PAM8403 Audio Amplifier


The PAM8403 is a high-efficiency Class-D stereo audio power amplifier designed for various
audio applications. It is capable of delivering up to 3 watts of continuous output power per
channel into a 4 Ω load with low distortion and high efficiency, making it suitable for
battery-powered devices like portable speakers and handheld audio systems. This amplifier
operates over a wide input voltage range from 2.5 V to 5.5 V, providing flexibility in power
supply design.
One convenience of common PAM8403 breakout boards is an onboard volume potentiometer,
which simplifies the design by reducing the need for external volume-control components. The
device also includes built-in protection mechanisms, such as over-temperature protection,
under-voltage lockout, and short-circuit protection, ensuring reliable performance under
various operating conditions.
The PAM8403 utilizes a filter-less modulation scheme, reducing external component count
and board space. Its low idle current and high efficiency (up to 90%) contribute to extended
battery life in portable applications. Additionally, the amplifier's differential input architecture
enhances noise immunity, providing clear and high-quality audio output.
In summary, the PAM84403 is an ideal solution for compact and energy-efficient audio
systems, offering robust performance, minimal external components, and integrated protection
features.

Figure 4.4: PAM8403 Audio Amplifier

4.1.5 Jumper Wires


A jump wire, shown in Figure 4.5 (also known as a jumper, jumper wire, jumper cable, DuPont wire,
or DuPont cable) is an electrical wire or group of them in a cable with a connector or pin at
each end (or sometimes without them – simply "tinned"), which is normally used to interconnect
the components of a breadboard or other prototype or test circuit, internally or with other
equipment or components, without soldering. Individual jump wires are fitted by inserting their
"end connectors" into the slots provided in a breadboard, the header connector of a circuit board,
or a piece of test equipment.


Figure 4.5: Jumper wires

4.1.6 Electret Microphone (MIC)

An electret microphone (MIC) is a type of condenser microphone widely used in various audio
applications due to its small size, low cost, and good performance. The core of an electret
microphone is a diaphragm coated with an electret material that retains a permanent electric
charge. This diaphragm is placed close to a metal backplate, forming a capacitor.

When sound waves strike the diaphragm, it vibrates, causing variations in the capacitance
between the diaphragm and the backplate. These variations result in changes in the electric
field, generating a corresponding electrical signal that represents the sound.

Electret microphones are valued for their simplicity and reliability. They require minimal
external components, typically just a bias resistor and a power supply, making them easy to
integrate into electronic circuits. They are also highly sensitive and have a relatively flat
frequency response, making them suitable for capturing a wide range of audio frequencies with
clarity.
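A common way to turn this raw signal into the loud-sound trigger described in Chapter 3 is to sample the microphone over a short window and measure the peak-to-peak amplitude. The sketch below is a minimal illustration; the pin, window length, and threshold are assumed values to be calibrated.

```cpp
const int MIC_PIN = A1;              // microphone amplifier output (assumed wiring)
const unsigned long WINDOW_MS = 50;  // sampling window length
const int LOUD_THRESHOLD = 300;      // peak-to-peak ADC counts; calibrate for your room

void setup() {
  Serial.begin(9600);
}

void loop() {
  int minVal = 1023, maxVal = 0;
  unsigned long start = millis();
  // Track the minimum and maximum ADC readings over the window.
  while (millis() - start < WINDOW_MS) {
    int sample = analogRead(MIC_PIN);
    if (sample < minVal) minVal = sample;
    if (sample > maxVal) maxVal = sample;
  }
  int peakToPeak = maxVal - minVal;   // loudness estimate for this window
  if (peakToPeak > LOUD_THRESHOLD) {
    Serial.println("Loud sound detected");  // the real system would trigger a track here
  }
}
```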

Common applications for electret microphones include smartphones, hearing aids, laptops,
voice recorders, and other consumer electronics. They are also used in professional audio
equipment, such as lavalier microphones and headsets, due to their ability to deliver clear sound
in a compact form factor.

Overall, electret microphones are an essential component in modern audio capture, offering a
balance of performance, size, and cost.

Figure 4.6: Electret Microphone (MIC)

4.1.7 6W Speaker

• The speaker produces the audio output, allowing users to hear the played audio files
and responses generated by the system.
• Speakers come in various sizes and power ratings; a 6W speaker is chosen to provide
sufficient audio volume and clarity for the intended application.

4.1.8 Breadboard
A breadboard is a fundamental tool in electronics for prototyping and testing circuits without
soldering. It consists of a rectangular plastic board with a grid of interconnected holes where
electronic components and wires can be inserted. The internal connections of the breadboard
are typically organized in rows and columns, making it easy to create and modify circuits
quickly.
The breadboard is divided into two main sections: the terminal strips and the bus strips.
Terminal strips, located in the center, are used for placing and connecting electronic
components like resistors, capacitors, and integrated circuits. Each row in a terminal strip is
electrically connected, allowing components to share connections. The bus strips, usually
positioned along the sides, are used for power distribution. They consist of long columns for
the positive and negative power rails, providing a convenient way to distribute power to the
entire circuit.
Breadboards are invaluable for experimenting with and troubleshooting circuit designs,
allowing for easy adjustments and replacements of components. They are reusable, which
makes them cost-effective for iterative development. Additionally, they are widely used in
educational settings to teach electronics and circuit design principles, as they offer a hands-on
way to learn about electrical connections and circuitry without permanent commitment.

Figure 4.7: Breadboard

4.2 Software Requirements


4.2.1 Arduino IDE
Arduino is an open-source prototyping platform based on easy-to-use hardware and software.
It consists of a circuit board that can be programmed (referred to as a microcontroller) and
ready-made software called the Arduino IDE (Integrated Development Environment), which is
used to write and upload computer code to the physical board.


Arduino provides a standard form factor that breaks the functions of the micro-controller into
a more accessible package.

A program for Arduino may be written in any programming language for a compiler that
produces binary machine code for the target processor. Atmel provides a development
environment for their microcontrollers, AVR Studio and the newer Atmel Studio.

The Arduino project provides the Arduino integrated development environment (IDE), which
is a cross-platform application written in the programming language Java. It originated from the
IDE for the languages Processing and Wiring. It includes a code editor with features such as
text cutting and pasting, searching and replacing text, automatic indenting, brace matching, and
syntax highlighting, and provides simple one-click mechanisms to compile and upload
programs to an Arduino board. It also contains a message area, a text console, a toolbar with
buttons for common functions and a hierarchy of operation menus.

A program written with the IDE for Arduino is called a sketch. Sketches are saved on the
development computer as text files with the file extension .ino; Arduino Software (IDE)
versions before 1.0 saved sketches with the extension .pde.
The Arduino IDE supports the languages C and C++ using special rules of code structuring.
The Arduino IDE supplies a software library from the Wiring project, which provides many
common input and output procedures. User-written code only requires two basic functions, for
starting the sketch and the main program loop, that are compiled and linked with a program
stub main() into an executable cyclic executive program with the GNU toolchain, also included
with the IDE distribution.
A minimal Arduino C/C++ sketch, as seen by the Arduino IDE programmer, consists of only
two functions:
• setup(): This function is called once when a sketch starts after power-up or reset. It is
used to initialize variables, input and output pin modes, and other libraries needed in the
sketch.
• loop(): After setup() has been called, the loop() function is executed repeatedly in the main
program. It controls the board until the board is powered off or reset.
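The Blink example used later in Step 5 is the canonical illustration of this two-function structure (a minimal sketch; pin 13 drives the onboard LED on Uno-class boards):

```cpp
// Blink: the classic minimal Arduino sketch.
const int LED = 13;  // most Uno-class boards wire the onboard LED to pin 13

void setup() {
  pinMode(LED, OUTPUT);      // runs once: configure the LED pin as an output
}

void loop() {
  digitalWrite(LED, HIGH);   // LED on
  delay(1000);               // wait one second
  digitalWrite(LED, LOW);    // LED off
  delay(1000);               // loop() then repeats forever
}
```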


Arduino Installation: After learning about the main parts of the Arduino UNO board, we are
ready to learn how to set up the Arduino IDE. Once we learn this, we will be ready to upload
our program on the Arduino board.

In this section, we will learn in easy steps, how to set up the Arduino IDE on our computer and
prepare the board to receive the program via USB cable.

Step 1 − First you must have your Arduino board and a USB cable. If you use an Arduino
UNO, Arduino Duemilanove, Arduino Mega 2560, or Diecimila, you will need a standard USB
cable (A plug to B plug), the kind you would connect to a USB printer.

In case you use Arduino Nano, you will need an A to Mini-B cable instead.

Step 2 − Download Arduino IDE Software.

You can get different versions of the Arduino IDE from the download page on the official
Arduino website. You must select the version that is compatible with your operating system
(Windows, macOS, or Linux). After your file download is complete, unzip the file as shown in
Figure 4.8.

Figure 4.8: Process to unzip

Step 3 − Power up your board.


The Arduino Uno, Mega, Duemilanove, and Arduino Nano automatically draw power from
either the USB connection to the computer or an external power supply. If you are using an
Arduino Diecimila, you have to make sure that the board is configured to draw power from the
USB connection. The power source is selected with a jumper, a small piece of plastic that fits
onto two of the three pins between the USB and power jacks. Check that it is on the two pins
closest to the USB port.

Connect the Arduino board to your computer using the USB cable. The green power LED
(labeled PWR) should glow.

Step 4 − Launch Arduino IDE.

After your Arduino IDE software is downloaded, you need to unzip the folder. Inside the folder,
you can find the application icon with an infinity label (application.exe). Double-click the icon
to start the IDE.

Step 5 − Open your first project.

Once the software starts, you have two options −

• Create a new project.

• Open an existing project example.

To create a new project, select File → New.

To open an existing project example, select File → Example → Basics → Blink.

Here, we are selecting just one of the examples with the name Blink. It turns the LED on and
off with some time delay. You can select any other example from the list.

Step 6 − Select your Arduino board.

To avoid any error while uploading your program to the board, you must select the correct
Arduino board name, which matches with the board connected to your computer.

Go to Tools → Board and select your board


Here, we have selected the Arduino Uno board for this guide, but you must select the name
matching the board that you are using.

Step 7 − Select your serial port.

Select the serial device of the Arduino board. Go to Tools → Serial Port menu. This is likely
to be COM3 or higher (COM1 and COM2 are usually reserved for hardware serial ports). To
find out, you can disconnect your Arduino board and re-open the menu, the entry that disappears
should be of the Arduino board. Reconnect the board and select that serial port.

Step 8 − Upload the program to your board.

Before explaining how we can upload our program to the board, we must describe the
function of each symbol appearing in the Arduino IDE toolbar, shown in Figure 4.9.

Figure 4.9: IDE toolbar

A − Used to check if there is any compilation error.

B − Used to upload a program to the Arduino board.

C − Shortcut used to create a new sketch.

D − Used to directly open one of the example sketches.

E − Used to save your sketch.

F − Serial monitor used to receive serial data from the board and send the serial data to the
board.


Now, simply click the "Upload" button in the environment. Wait a few seconds; you will see
the RX and TX LEDs on the board, flashing. If the upload is successful, the message "Done
uploading" will appear in the status bar.

Note − If you have an Arduino Mini, NG, or other board, you need to press the reset button
physically on the board, immediately before clicking the upload button on the Arduino
Software.

4.2.2 Embedded C

Embedded C is one of the most popular and most commonly used Programming Languages in
the development of Embedded Systems.

Embedded C is perhaps the most popular language among Embedded Programmers for
programming Embedded Systems. There are many popular programming languages like
Assembly, BASIC, C++, etc. that are often used for developing Embedded Systems, but
Embedded C remains popular due to its efficiency, shorter development time, and portability.

An Embedded System can be best described as a system which has both the hardware and
software and is designed to do a specific task. A good example for an Embedded System, which
many households have, is a Washing Machine.

Programming Embedded Systems:

As mentioned earlier, Embedded Systems consist of both Hardware and Software. If we
consider a simple Embedded System, the main Hardware Module is the Processor. The
Processor is the heart of the Embedded System, and it can be anything like a Microprocessor,
Microcontroller, DSP, CPLD (Complex Programmable Logic Device), or FPGA (Field
Programmable Gate Array).

All these devices have one thing in common: they are programmable i.e. we can write a program
(which is the software part of the Embedded System) to define how the device actually works.
Embedded Software or Programs allow the Hardware to monitor external events (Inputs) and
control external devices (Outputs) accordingly. During this process, the program for an Embedded
System may have to directly manipulate the internal architecture of the Embedded Hardware
(usually the processor) such as Timers, Serial Communications Interface, Interrupt Handling,
and I/O Ports etc.

From the above statement, it is clear that the Software part of an Embedded System is equally
important to the Hardware part. There is no point in having advanced Hardware Components
with poorly written programs (Software). There are many programming languages that are used
for Embedded Systems like Assembly (low-level Programming Language), C, C++, JAVA
(high-level programming languages), Visual Basic, JAVA Script (Application level
Programming Languages), etc. In the process of making a better embedded system, the
programming of the system plays a vital role and hence, the selection of the Programming
Language is very important.

Factors for Selecting the Programming Language:

The following are few factors that are to be considered while selecting the Programming
Language for the development of Embedded Systems.

• Size: The memory that the program occupies is very important as Embedded Processors
like Microcontrollers have a very limited amount of ROM.
• Speed: The programs must be very fast i.e. they must run as fast as possible. The hardware
should not be slowed down due to a slow running software.
• Portability: The same program can be compiled for different processors.
• Ease of Implementation
• Ease of Maintenance
• Readability

Earlier Embedded Systems were developed mainly using Assembly Language. Even though
Assembly Language is closest to the actual machine code instructions, the lack of portability
and the high amount of resources spent on developing the code made Assembly Language
difficult to work with.

Difference between C and Embedded C

There is actually not much difference between C and Embedded C apart from a few extensions
and the operating environment. Both C and Embedded C follow ISO Standards and have
almost the same syntax, datatypes, functions, etc.

Embedded C is basically an extension of the Standard C Programming Language with
additional features like I/O addressing, multiple memory addressing, and fixed-point
arithmetic.

The C Programming Language is generally used for developing desktop applications, whereas
Embedded C is used in the development of Microcontroller based applications.

Basic Structure of an Embedded C Program

The next thing to understand in the Basics of Embedded C Program is the basic structure or
Template of Embedded C Program. This will help us in understanding how an Embedded C
Program is written.

The following shows the basic structure of an Embedded C program.

• Multiline comments are denoted using /* ... */
• Single-line comments are denoted using //
• Preprocessor directives: #include <...> or #define
• Global variables: accessible anywhere in the program
• Function declarations: declare the functions used in the program
• Main function: execution begins here
{
    Local variables: variables confined to the main function
    Function calls: calls to the other functions
    Infinite loop: e.g., while(1) or for(;;)
    Statements ...
}
• Function definitions: define the declared functions
{
    Local variables: confined to this function
    Statements ...
}
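As a concrete instance of this template, here is a minimal sketch for an AVR microcontroller (an assumption for illustration: avr-libc, an LED on port B pin 5 — the Arduino Uno's onboard LED — and F_CPU defined at build time, e.g. -DF_CPU=16000000UL):

```c
#include <avr/io.h>          /* preprocessor directives */
#include <util/delay.h>

/* no global variables or helper functions needed for this tiny example */

int main(void) {             /* execution begins here */
  DDRB |= (1 << DDB5);       /* configure port B pin 5 as an output */
  while (1) {                /* infinite loop */
    PORTB ^= (1 << PORTB5);  /* toggle the LED */
    _delay_ms(500);
  }
  return 0;                  /* never reached */
}
```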

4.2.3 Python IDE

Python is an easy to learn, powerful programming language. It has efficient high-level data
structures and a simple but effective approach to object-oriented programming. Python’s
elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal
language for scripting and rapid application development in many areas on most platforms. The
Python interpreter is easily extended with new functions and data types implemented in C or
C++ (or other languages callable from C). Python is also suitable as an extension language for
customizable applications.
Python 3.7 is a suitable version for machine learning tasks. Many popular machine learning
libraries such as TensorFlow, PyTorch, scikit-learn, and Keras support Python 3.7. You can
utilize these libraries to build, train, and deploy machine learning models effectively.
Here's how you can set it up for machine learning:


Install Python 3.7: Make sure you have Python 3.7 installed on your system. You can download
it from the official Python website.
Open Python IDLE: Once Python 3.7 is installed, you can open Python IDLE by searching for
it in your operating system's application launcher or by running idle3 or idle command in the
terminal/command prompt.
Install machine learning libraries: You'll need to install the necessary machine learning libraries
to work with in Python IDLE. You can install libraries like TensorFlow, PyTorch, scikit-learn,
etc., using pip.
Write and run your code: You can now start writing Python code for machine learning tasks
directly in Python IDLE. You can create a new Python file by selecting "File" > "New File"
from the menu, or simply start typing in the interactive shell. You can then run your code by
selecting "Run" > "Run Module" from the menu or by pressing F5.
Debugging: Python IDLE provides basic debugging capabilities, such as setting breakpoints,
stepping through code, inspecting variables, etc. You can utilize these features to debug your
machine learning code as needed.
While Python IDLE is a simple and easy-to-use IDE, it may lack some of the advanced features
and integrations that are available in other IDEs specifically designed for machine learning
development.


Chapter 5

IMPLEMENTATION
The proposed work involves four primary sub-sections, i.e., Sensor Interfacing, Data Collection
and Pre-processing, Feature Extraction and Selection, and Gesture Recognition.
The total work-flow is illustrated in Figure 5.1.

Figure 5.1: Work Flow Of Model

5.1 Sensor Interfacing

The flex sensor (SEN-08606) is the primary component used in this work; the hardware
prototype mounts flex sensors on the fingers. A flex sensor's terminal resistance changes when
it is bent, which helps in detecting motion in a specific part of the body. The flex sensor has no
polarized terminals, so there are no positive and negative pins; in general, pin P1 is connected
to the positive side of the power source and P2 to ground, as illustrated in Figure 5.2. As the
bend in the flex sensor increases, its resistance increases. Figure 5.2 shows the connections used
to interface with the Arduino. After properly connecting the sensor to the Arduino, the latter is
connected to a PC. With the help of the Arduino IDE, a specifically designed program is
uploaded to the Arduino, so that even a slight change in the bend of the flex sensor produces a
corresponding change in the readings obtained as output from the Arduino.


Figure 5.2: Flex Sensor Circuit Diagram

Features and specifications of the flex sensors used are as follows:

• Flat Resistance: 10 kΩ
• Power Rating: 0.50 W
• Description: SparkFun Flex Sensor
• Resistance Tolerance: ±30%
• Length: 4.5 inches
• Bend Resistance Range: 60 kΩ to 110 kΩ
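Given these figures, a minimal reading sketch might look like the following (a sketch under assumed wiring: the divider resistor value and input pin are not specified in the report):

```cpp
const int FLEX_PIN = A0;        // divider midpoint (assumed wiring)
const float VCC = 5.0;          // supply voltage
const float R_FIXED = 10000.0;  // assumed 10 kOhm fixed resistor from the pin to ground

void setup() {
  Serial.begin(9600);
}

void loop() {
  int adc = analogRead(FLEX_PIN);        // 0..1023 on the Uno's 10-bit ADC
  if (adc > 0) {
    float vOut = adc * VCC / 1023.0;     // convert the reading back to volts
    // Divider: vOut = VCC * R_FIXED / (R_FIXED + R_flex)
    //       => R_flex = R_FIXED * (VCC - vOut) / vOut
    float rFlex = R_FIXED * (VCC - vOut) / vOut;
    Serial.println(rFlex);               // ~10 kOhm flat, ~60-110 kOhm when bent
  }
  delay(250);                            // the 0.25 s sampling interval used in Section 5.2
}
```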

5.2 Data Collection and Pre-processing


After the above fundamental work, we proceed to the next level: taking data under the actual
operating conditions of the flex sensors, i.e., when they are attached to the fingers and the
readings actually measure the bending of each finger. The setup was prepared in the following
steps:
• In our case, white surgical gloves were worn on the hand. This was done to check the effect of
perspiration on the flex sensors.
• The Flex sensors were then tied on index finger and middle finger of right hand. Elastic
bands were used to keep the sensors in place.


• Long copper wires were used to connect the flex sensors to Arduino and resistor as per the
circuit diagram.
Once the setup was ready, the connections were checked again to ensure there were no loose
connections, and the work then proceeded to the next stage, i.e., data pre-processing.
Data Pre-processing: In this phase, the collected data were categorized into four basic class
labels, i.e., Class label 0, Class label 1, Class label 2, and Class label 3. The gestures signified
by the class labels are described below:

• Class label 0: This class label indicated the hand gesture in which both of the index finger
and middle finger have curled inside.
• Class label 1: This class label indicated the hand gesture in which both of the index finger
and middle finger are fully stretched and kept in a manner that represented ‘V’ symbol.
• Class label 2: For this class, the middle finger is curled inside while the index finger
remains as it is in the case before and the readings are noted.
• Class label 3: This is the last class label considered. The hand gesture in this class is derived
from the gesture in class label 1, the difference being that the index finger is curled inside
while all other finger positions remain the same.
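For intuition (the report's actual classifier is the learned model of Section 5.4), these four labels correspond to the bent/straight state of the two instrumented fingers, which a simple threshold rule could map as follows; the threshold and the reading direction are assumptions to be calibrated against the real divider circuit:

```cpp
// Maps the two flex readings to the four class labels described above,
// assuming the divider is arranged so readings rise as a finger curls
// (swap the comparisons if your circuit does the opposite).
int classifyGesture(int x /* index finger */, int y /* middle finger */, int bentThreshold) {
  bool indexCurled  = x > bentThreshold;
  bool middleCurled = y > bentThreshold;
  if (indexCurled && middleCurled)   return 0;  // both fingers curled inside
  if (!indexCurled && !middleCurled) return 1;  // both stretched: the 'V' symbol
  if (!indexCurled && middleCurled)  return 2;  // middle curled, index stretched
  return 3;                                     // index curled, middle stretched
}
```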
The data recorded from each sensor is an integer value. At an interval of 0.25 seconds, a new
value was obtained reflecting the change in the bending of the sensor. The two values obtained
are labelled x and y in our dataset, representing the readings from the flex sensors tied to the
index finger and middle finger, respectively. Before use, the dataset was carefully shuffled
using Python's random library. The shuffling ensured that data from the various labels were
sufficiently available to the machine learning model for training, testing, and validation
purposes. Some outlier readings caused by physical noise were also removed from the dataset.

5.3 Feature Extraction and Selection


For better training and prediction, some extra features were extracted from the dataset. The
extra features can be categorized as follows:
• Squares: This feature extraction yields two features, square(x) and square(y), where
square(x) = x² and square(y) = y².


• First Difference: This kind of feature extraction also takes into account the values of the
previous row and finds the deviation from it, i.e., Xfd and Yfd. The absolute value is
considered: Xfd = |xᵢ − xᵢ₋₁| and Yfd = |yᵢ − yᵢ₋₁|.
• Second Difference: This kind of feature extraction takes into account the values of the
previous row and the upcoming row, i.e., Xsd and Ysd. The absolute value is considered:
Xsd = |xᵢ₊₁ − 2·xᵢ + xᵢ₋₁| and Ysd = |yᵢ₊₁ − 2·yᵢ + yᵢ₋₁|.

A total of 6 features were extracted from the pre-existing data and were then used for Machine
Learning of the models. It is observed that the models were able to perform better when the
extra features were present. The data included the values of x and y when the gestures were
prominent and the data that is recorded while the transition from one gesture to another or when
the gesture is ambiguous is ignored.
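As a small illustration (plain C++, not the project's actual training pipeline), the six engineered features can be computed from the raw (x, y) samples as follows:

```cpp
#include <cmath>
#include <vector>

// One row of the engineered dataset: the 6 features described above.
struct Features {
  float sqX, sqY;    // squares
  float xfd, yfd;    // first differences |x_i - x_(i-1)|
  float xsd, ysd;    // second differences |x_(i+1) - 2*x_i + x_(i-1)|
};

// Compute features for interior samples (i = 1 .. n-2), since the first
// and second differences need a previous and a next row.
std::vector<Features> extractFeatures(const std::vector<int>& x, const std::vector<int>& y) {
  std::vector<Features> out;
  for (size_t i = 1; i + 1 < x.size(); ++i) {
    Features f;
    f.sqX = float(x[i]) * x[i];
    f.sqY = float(y[i]) * y[i];
    f.xfd = std::fabs(float(x[i] - x[i - 1]));
    f.yfd = std::fabs(float(y[i] - y[i - 1]));
    f.xsd = std::fabs(float(x[i + 1] - 2 * x[i] + x[i - 1]));
    f.ysd = std::fabs(float(y[i + 1] - 2 * y[i] + y[i - 1]));
    out.push_back(f);
  }
  return out;
}
```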

5.4 Gesture Recognition


Hand gestures can serve as input for interacting with different electronic devices, and this mode
of input can bring a significant change to everyday life. People who are unable to speak can use
the concept of converting hand gestures to speech, performed by a machine in real time, and
applications of this idea will soon appear in various fields. This greatly inspired the final and
main part of our work, i.e., recognition of hand gestures. All the processed data were trained,
validated, and tested on different machine learning models. For our classification task, we used
a modern machine learning technique: Adversarial Machine Learning.


Figure 5.3: Architecture of the Adversarial Learning Model Used

The main motivation behind using this model is to reach the maximum possible accuracy even
if the training or testing data have been altered by external factors. The external factors can be
anything or anyone trying to change the data with the intention of forcing a wrong output; such
actors may be termed attackers. The attacks most likely to affect our deep learning models
include the fast-gradient sign method (FGSM), the basic iterative method (BIM), and the
momentum iterative method (MIM). These attacks are pure forms of the many gradient-based
evasion techniques that attackers use to evade a classification model. Adversarial attacks occur
when noise is added to the data, which, when the already trained model is validated, may result
in the classification of false labels. The proposed model is used to resist such attacks in order to
develop an efficient and robust ANN model that can correctly classify labels despite being fed
noisy data. The architecture of the Adversarial Learning model is illustrated in Figure 5.3.
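For reference, the fast-gradient sign method mentioned above perturbs an input x with true label y by stepping along the sign of the loss gradient with step size ε (a standard definition, not specific to this report):

x_adv = x + ε · sign(∇x J(θ, x, y))

where J is the training loss and θ the model parameters; adversarial training then augments the training set with such perturbed samples so that the network learns to classify them correctly.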


Chapter 6

RESULT

The entire code for the experiment was developed using Keras, a Python framework commonly
used for developing deep learning models. Additionally, the other libraries used were
scikit-learn, pandas, and NumPy. The experiment required a computer with a dedicated
Graphics Processing Unit (GPU), so a machine with an AMD Ryzen 5 3550H processor and an
NVIDIA GeForce GTX 1050 graphics card was used.
The Adversarial Learning was found to have outperformed standard classifiers. The
comparisons were done by running all the machine learning algorithms on the dataset obtained
from our initial work. We have tried our best to keep all the necessary conditions as similar as
possible in order to have an ideal comparison of all of the models being compared.

Table 6.1 shows the performance matrix of the ANN before the application of Adversarial
Learning on our dataset:

Training Acc. | Training Loss | Validation Acc. | Validation Loss
    82.42%    |     0.3519    |      78.83%     |      0.3640

Table 6.1: Performance Matrix For Simple ANN

Considering the Model Accuracy graph in Figure 6.1, it is evident that the model performs well.
Initially, the accuracy values are low for both testing and training, but after 20 epochs a sharp
increase is observed. The values keep fluctuating for the next 8-10 epochs. After 30 epochs,
fairly high accuracy values are observed with little further change, and this behaviour continues
for the next 70 epochs.


Figure 6.1: Model Accuracy

Considering the Model Loss graph in Figure 6.2, we can observe that the training loss and
testing loss remain very similar in value throughout the run. This indicates that the chances of
over-fitting and under-fitting are extremely low.

Figure 6.2: Model Loss


From the graph given in Figure 6.3, it is observed that the testing process records a
commendable accuracy of 88.32%.

Figure 6.3: Observations recorded from data testing after application of Adversarial Learning

As illustrated in Figure 6.3, the proposed model performs better than traditional approaches.
This implies that the outcome of using the adversarial learning process was fruitful.

Figure 6.4: Adversarial Learning model


However, it becomes more important to compare the results of this approach with some other
standard classifiers in order to draw some definite comparisons.

Models | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%)
PW     |    88.32     |     81.77     |   84.37    |    82.78
DT     |    84.79     |     72.84     |   73.74    |    76.32
SVM    |    83.04     |     72.39     |   71.26    |    75.70
kNN    |    84.79     |     72.41     |   72.76    |    75.13
LR     |    64.32     |     42.98     |   40.20    |    48.81
LDA    |    67.25     |     41.26     |   42.70    |    47.25
NB     |    73.68     |     62.90     |   61.50    |    66.97

Table 6.2: Performance Comparison

From Table 6.2, it is quite evident that our proposed model (PW) recorded the best performance
metrics compared with the other standard classifiers. We also observe that the results for
Logistic Regression and Linear Discriminant Analysis are the worst of the lot, primarily
because they are inconsistent when it comes to classifying multi-class datasets. The other
classifiers' performance was mediocre.

6.1 Advantages of the proposed project

• Enhanced Accessibility: Hand gesture controls make audio systems more accessible to
individuals with mobility impairments or physical disabilities, allowing them to interact
with devices without needing to press buttons or turn knobs.


• Hygienic Operation: Since gesture controls eliminate the need for physical contact, they
are more hygienic, reducing the risk of spreading germs and making them ideal for use in
public or shared environments.

• Intuitive Use: Gestures are a natural form of communication, making the system easy to
use without a steep learning curve. Users can quickly learn and remember simple gestures
to control audio functions.

• Convenience: Users can control audio playback from a distance, without needing to
physically reach the device. This is especially useful in large rooms, while cooking, or
during exercise.

• Modern User Experience: Gesture control adds a futuristic and sophisticated element to
audio systems, enhancing user engagement and satisfaction through innovative interaction
methods.

• Hands-Free Control: Gesture controls enable hands-free operation, allowing users to perform tasks simultaneously, such as driving or cooking, without needing to stop and manually adjust audio settings.

• Customization and Personalization: Gesture-based systems can be programmed to recognize a variety of gestures, allowing users to customize commands to suit their preferences and making the system more adaptable to individual needs.

• Reduced Wear and Tear: Since there are no physical buttons or dials being used, the
system experiences less mechanical wear and tear, potentially extending the lifespan of the
device and reducing maintenance costs.


6.2 Disadvantages
• Calibration Sensitivity: Flex sensors require precise calibration to accurately detect and
interpret gestures. Any deviation can lead to incorrect commands, reducing the system’s
reliability and user satisfaction.

• Environmental Interference: Flex sensors can be affected by environmental factors such as temperature and humidity, which may alter their resistance and affect their performance, leading to inconsistent gesture recognition.

• Limited Gesture Range: Flex sensors primarily detect bending movements, which can
limit the range of recognizable gestures. Complex gestures involving multiple hand or
finger positions might be challenging to implement and detect accurately.

• Durability and Wear: Continuous bending and flexing can wear out the sensors over time,
reducing their lifespan and necessitating frequent replacements or maintenance, especially
in high-usage scenarios.
• Comfort and Ergonomics: Wearing gloves or attachments with integrated flex sensors for
extended periods can be uncomfortable or restrictive, potentially causing strain or
discomfort to the user.

• Power Consumption: Flex sensor-based systems, especially those integrated into wearables, require continuous power supply for operation, which can lead to frequent battery replacements or recharging needs, limiting their practicality.

• Cost: Implementing flex sensor technology can be expensive due to the need for high-
quality sensors, calibration equipment, and integration with existing audio systems, making
it less accessible for budget-conscious users or applications.


• Complex Setup: Initial setup and integration of flex sensor-based gesture control systems
can be complex, requiring technical expertise to ensure proper functionality. Users without
technical knowledge may find it difficult to install and troubleshoot the system.

6.3 Applications of the proposed project

• Wearable Technology: Flex sensors embedded in gloves can be used to control audio
playback on wearable devices, such as smartwatches or fitness trackers, allowing users to
manage music and other audio without needing to touch the device.

• Rehabilitation and Physical Therapy: In physical therapy settings, patients can use flex
sensor-equipped gloves to control therapeutic audio or exercise instructions, providing an
engaging and interactive way to follow therapy regimens.

• Gaming and Virtual Reality: Flex sensors in VR gloves can be used to control in-game
audio, enhancing the immersive experience by allowing gamers to adjust sound settings or
switch audio tracks with hand gestures.

• Assistive Devices for Disabilities: For individuals with mobility impairments, flex sensors
can be integrated into assistive devices to enable gesture-based control of audio systems,
facilitating easier and more intuitive interactions with technology.

• Home Automation: Flex sensors can be part of a smart home system, where users wearing
gesture-detecting gloves can control home audio systems to play music, adjust volume, or
switch tracks seamlessly while performing other tasks.

• Industrial and Work Environments: In noisy or hands-on work environments, workers can use gloves with flex sensors to control communication systems or audio devices, allowing them to stay focused on their tasks without needing to handle electronic controls.


• Educational Tools: In classrooms, teachers can use flex sensor-equipped gloves to control
audio-visual aids during lectures, making it easier to manage multimedia content while
engaging with students interactively.

• Automotive Controls: Drivers can wear gloves with flex sensors to control the car’s audio
system, enabling safe and convenient hands-free interaction to adjust volume, change
stations, or navigate playlists without taking their hands off the steering wheel.

6.4 Output of the Proposed project

Figure 6.5: Working Model

Figure 6.5 shows the hardware implementation of the system.


Chapter 7

CONCLUSION AND FUTURE SCOPE


Hand gesture recognition has a significant and growing impact on healthcare, which motivated
us to develop a classification model that can serve such societal needs. In conclusion, we look
forward to improving the model's results by increasing the sample size of the extracted dataset,
and to refining the existing algorithm by fine-tuning the model and its related constraints. An
app will be developed to perform the conversion of hand gestures to speech, which we sincerely
feel will benefit those with speech disabilities.
Future advancements in hand gesture-controlled audio systems will likely focus on improving
gesture recognition accuracy, expanding gesture vocabulary, and integrating with artificial
intelligence for more intelligent and personalized interactions. Additionally, miniaturization of
sensors and advancements in wearable technology will enable seamless integration into
everyday attire, making gesture control even more convenient and ubiquitous. Moreover, there
is potential for integration with other emerging technologies such as augmented reality and
spatial computing, offering immersive audio experiences that respond dynamically to users'
gestures and movements, further enhancing the future of human-computer interaction.


REFERENCES
[1] M. Alagu Sundaram, "A Review on Hand Gesture Recognition Techniques, Methodologies, and Its Application," International Journal of Computer Applications, Volume 117(23), Pages 6-11.
[2] J. Li, "Real-Time Hand Gesture Recognition System for Human-Computer Interaction," IEEE Transactions on Human-Machine Systems, Volume 49(1), Pages 123-136.
[3] K. Saroha, "Gesture Recognition Using Machine Learning: A Review," Pattern Recognition Letters, Volume 142, Pages 21-35.
[4] S. Velázquez, "Gesture Recognition in Ambient Assisted Living Environments: A Review," Sensors, Volume 19(14), Pages 3091-3112.
[5] A. Gupta, "Gesture-Controlled Music Playback Systems: A Survey," Multimedia Tools and Applications, Volume 79(13), Pages 9231-9255.
[6] M. Georgi, C. Amma, and T. Schultz, "Recognizing Hand and Finger Gestures with IMU based Motion and EMG based Muscle Activity Sensing," in Biosignals, pp. 99-108, 2015.
[7] M. Ariyanto, W. Caesarendra, K. A. Mustaqim, M. Irfan, J. A. Pakpahan, J. D. Setiawan, and A. R. Winoto, "Finger movement pattern recognition method using artificial neural network based on electromyography (EMG) sensor," in 2015 International Conference on Automation, Cognitive Science, Optics, Micro Electro-Mechanical System, and Information Technology (ICACOMIT), pp. 12-17, IEEE, 2015.
[8] C. Jorgensen, D. Lee, and S. Agabon, "Sub Auditory Speech Recognition Based on EMG/EPG Signals," in Proceedings of the International Joint Conference on Neural Networks, 2003.
[9] X. Zhang, X. Chen, Y. Li, V. Lantz, K. Wang, and J. Yang, "A framework for hand gesture recognition based on accelerometer and EMG sensors," IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, Volume 41(6), Pages 1064-1076, 2011.
[10] S. K. Dixit and N. S. Shingi, "Implementation of flex sensor and electronic compass for hand gesture based wireless automation of material handling robot," International Journal of Scientific and Research Publications, Volume 2(12), Pages 1-3, 2012.


APPENDIX
