BE Sem 7 Report - Docx FINALLLL
BE Sem 7 Report - Docx FINALLLL
of the degree of
Bachelor of Engineering
In
by
2024-2025
CERTIFICATE
This is to certify that the project entitled “Life Lens” is a bonafide work of the following
students, submitted to the University of Mumbai in partial fulfillment of the requirement for
the award of the degree of Bachelor of Engineering in Artificial Intelligence & Data
Science.
Signature:--------------------------------
Signature:--------------------------------
Date:
I certify that, except where indicated in the text, this submission is my own work and has not
been submitted for assessment elsewhere. Further, I affirm that all standards of academic
honesty and integrity have been followed —where we forbore misrepresented any
idea/data/fact/source as our own—to never fabricate or falsify data. I agree to respond for any
aspects of violations of the above statement & responsibilities either at my own or as a part on
Institution may also attract penal consequence from sources which thus failed to merit
original source if proper citations needed permission and have not obtained it.
Date: Signature:
We would like to sincerely thank our internal guide, Prof. Rohini Gaikwad, for her advice,
assistance, and helpful recommendations, all of which enabled us to finish our project work
on schedule. We also like to express our sincere gratitude to Prof. Rohini G., our project
coordinator, for her invaluable advice when needed. We are particularly grateful to our Hod,
Dr. Rizwana Shaikh, for her assistance in seeing the project through to completion. We also
acknowledge the assistance provided by our Principal, Dr. K Lakshmi Sudha, in completing
this project.
Also we would like to thank the entire faculty of Artificial Intelligence & Data Science
department for their valuable ideas and timely assistance in this project, last but not the least,
we would like to thank our teaching and non teaching staff members of our college for their
support, in facilitating timely completion of this project.
Project Team
Siddhi Bachala
Kartik Batchu
Nidhi Nandikol
Rohit Tata
Palm reading has gotten a lot of attention lately, both for business and research. This project is
part of making a mobile app that will figure out and analyze the major lines on your palm
using some computer vision tech. Eventually, it’ll show info about someone’s personality and
future, based on traditional Chinese palmistry. In this document, we’ll talk about different
ways we’re developing this app, the problems we’ve faced, what we’ve achieved, and what’s
next. The goal here is to bring palmistry into the modern age and look at it scientifically by
building a web app that combines machine learning and deep learning to study palm pictures.
With a collection of hand images, the system is set up to find important palm lines and
features (like the heart line, head line, and lifeline) which are thought to relate to personality
traits and life events. We’re using the MERN stack for both the front and back end, and Flask
serves as the back end API for bringing in the machine learning models. This system gives
users personalized readings based on what it detects, linking old palmistry traditions with
today’s data science. The project shows how we can enhance age-old knowledge with
artificial intelligence, offering fresh insights into character analysis through palmistry.
Proposed system 17
Chapter 4 Methodology 18
Chapter 5 Result 20
References 22
List of Tables
Introduction
1.1 Introduction
Identifying and reading palm has become a big deal in biometric features lately. This is
because it’s been shown that the lines on someone's palm form between the third and fifth
month of pregnancy and stays the same for life.[1] Plus, everyone has their own unique palm
setup, which has led to research into how we can use palms as a solid biometric recognition
system.
Like we said, this project is all about spotting the three main lines on the palm and, according
to Chinese tradition, figuring out a person's character or maybe even predicting their future.
It’s kind of a way to dig into the complicated topic of recognizing biometric traits, using
computer vision algorithms and tools like Open CV.
⦁ Head Line: Talks about the person’s skills and job abilities.
⦁ Life Line: Shares info regarding health, how long someone lives, and the quality of life.
The idea for the Palmy app comes from the increasing interest in mixing old-school
practices like palmistry with new tech stuff like computer vision and machine learning.[1]
Palmistry been around for centuries, where people read the lines on someone’s palm to
guess their personality and future. It still grabs people’s attention and gives a quirky but
cool chance to explore tech.
There are two main reasons for this project. First, it’s about using and growing knowledge
in computer vision to tackle the tricky part of figuring out palm lines, which are pretty
complex biometric features. Second, it wants to connect those ancient traditions with
modern uses by applying a scientific method to make palm readings more accurate, and
easier to use and interactive on mobile devices
.
Right Hand Reading: Based on Chinese tradition, the lines on both hands mean different
things, so we need to add the reading for the right hand to make the app complete.
Prediction System Update: We need to step up the accuracy of our predictions and what
factors we look at. [2] The current ‘database’ isn’t very extensive since it wasn’t seen as
important at this phase of the project.
Speed Boost: We should work on speeding up the analysis by using different data structures.
Like, switching to native OpenCV instead of the Android library could really help with
performance.
More Methodology Research: Let’s look into other ways to identify palm lines better.
Chapter 1: Introduction
Chapter 5: Conclusion
Literature Survey
Palmistry, also known as chiromancy, is all about reading the lines and shapes on someone's
hands to figure out their personality and maybe even predict what’s gonna happen in the
future. [3]This old practice has been part of many different cultures, like Indian, Chinese, and
Greek traditions. Over time, palmistry has been seen as something magical by some, while
others think it’s just a bunch of nonsense.
Historical Background
Ancient Beginnings: Palmistry's roots go way back to ancient societies. [3]There are writings
from India, called the "Samhita," that talk about what palm lines mean. Even Greek thinkers
like Aristotle mentioned it in their works.
Cultural Differences: Each culture has its own take on palmistry. For example, Indian
palmistry (Samudrika Shastra) looks at how planets affect people, while Western palmistry
tends to zero in on psychological aspects.
Mounts: The mounts are those raised parts on the palm that link to different traits and
characteristics. Each one is tied to a certain planet and its
effect.
2 A Deep Learning Lichuan Zhang, This study introduces a The model achieved a
Approach for Chenglin Wu, and deep learning model aimed significant improvement
Efficient Palm Jui-Yu Chen at palm reading, focusing in performance metrics,
Reading on classifying personality such as accuracy and
traits, lifestyles, and life Intersection-over-Union
predictions based on palm (IoU), compared to
characteristics. The traditional methods. It was
approach utilizes validated using a dataset
convolutional neural of 553 palm images,
networks (CNNs) for showing promise for
semantic segmentation and practical applications in
multi-class classification, palmistry and enhancing
processing palm images prediction reliability
divided into grid regions to
enhance accuracy.
3 Medical K. Ramasamy, A. The paper explores the The study reveals that
Palmistry: An Srinivasan intersection of palmistry integrating palmistry with
Artistic Analysis and modern technology, advanced image
and Future particularly focusing on processing can enhance
beyond Lexical palm print recognition as a diagnostic capabilities in
Meaning method for disease medicine. It highlights the
prediction. It discusses how potential of palm print
traditional palmistry can be recognition systems to
Department of Artificial Intelligence & Data Science
reinterpreted through predict diseases, paving
contemporary image the way for innovative
processing techniques, health assessment tools.
aiming to provide new The research advocates for
insights into health and further exploration in this
wellness. interdisciplinary domain.
4 Image Analysis Not specified in the This paper investigates The study suggests that
of Palm provided context. palmistry through image palm characteristics can
(Palmistry) analysis, aiming to link correlate with specific
(Health and physical palm features to health conditions and
Characteristics) health indicators and personality traits, thus
personal characteristics. It advocating for further
examines various research in applying image
methodologies and analysis to palmistry. This
technologies used to could enhance both
analyze palm images, diagnostic techniques and
potentially transforming personal assessments in
traditional beliefs into holistic health approaches.
scientific insights.
Feature Engineering:
Some common features that are used to train models include transaction amount, time of day,
location, transaction frequency, and cardholder behavior. Model performance and to save on
computational costs, they often use lava techniques like feature selection and dimensionality
reduction.
Real-Time Detection:
Lots of systems focus on real-time detection to stop fraudulent transactions before they can be
completed. Processing frameworks and scalable algorithms help to quickly analyze large
amounts of transaction data with little delay.
Proposed System
Creating an app means really digging into how the user interface works and how easy it is to
use in real life. While this wasn't the main goal of the project, we did try to make the user
experience simple and fun. The outcome from this effort was a really straightforward and
easy-to-follow workflow, so here’s a breakdown of the different screens in the user interface
and why we made those design choices.
Intro Screen: The first screen is all about setting the right vibe for the project and getting the
user’s attention right away. We did this by adding a sleek logo and straightforward, brief
explanations. Plus, putting the photo capture button right in the middle of the UI really
highlights its role.
Image Capture: At this stage, the user gets prompted to snap a photo. This screen is super
important because if the photo isn’t taken well, it can mess up the whole image processing
thing.We also put in controls to check if the user has framed their hand the right way.
Image Processing: Generally, the processing doesn't take long, but it really depends on the
hardware of the device, which means wait times can vary. To help with this, we show the user
a dialog with an infinite progress bar, keeping their focus on the process.
Error Dialog: If the palm isn't framed right or if the main lines aren't detected, the user gets a
message letting them know and suggesting to try again.
Result Screen: [6] The last screen is created to show the results of the image processing. We
went with a pretty simple layout for this one, making it easy to see the results with the
identified lines placed over the original image, alongside a brief description of the line's
characteristics.
Methodology
Department of Artificial Intelligence & Data Science
Chapter 4
Methodology:
Making an app needs a good look at how the user interface works and how usable it is in real
life. Even though this isn't the main goal for this project, we're trying to create a simple and
modern user experience. The study we did showed a really straightforward and easy-to-follow
workflow, as you can see.
We'll be going through the different screens of the user interface and explaining the design
choices we made for each one. Starting with the introductory screen, it’s all about setting the
right vibe for the project to quickly engage the user. We're aiming to achieve this with a sleek
logo and short, clear explanations.
There's a ton of research out there about recognizing lines in the palm of the hand. We found
two main research paths: the first one looks at edge detection, while the other uses techniques
that are often used in feature extraction, like morphological TOP-HAT filtering.
Both techniques have their ups and downs, so we decided to implement both of them to see
which one works best. After testing, we chose to go with the second line of research, fixing
some issues we noticed along the way.
For pre-processing the incoming image, we want to cut down on noise and detection errors as
much as we can. We change the original image to grayscale and apply a Gaussian filter to
help reduce the noise and smooth out irrelevant edges. For the candidate border recognition,
we analyze the image gradient's intensity to reassess the candidate border by looking at how
contrasting the pixels are. Finally, in the final border recognition, we compare the pixel
gradient values with tolerance levels we call 'High-Threshold' and 'Low-Threshold'.
In the app, we assume that each line in the palm of the hand is saved in a 500x500 pixel
image, with a line width of 41 pixels and a length similar to the image height. This means
around 24% of the image shows the lines. We figured out that a proper Canny output with a
41-pixel border should have a line representation of 20% to 27% of the image.
This is doable by first calculating high and low threshold values in the Canny algorithm to get
the right amount of edge required for recognizing the lines. Once that’s set, the Canny is rerun
with the right thresholds, and we thin and expand the lines to fix any breaks.
For contrast enhancement, we capture the palm from about 20cm away from the device with
the flash on. However, this can lead to low-contrast images. To tackle this, we use a technique
called 'stretching the histogram' that boosts contrast and makes the palm lines easier to see.
Now, for bilateral filtering, the reference paper used a dataset of images taken in ideal
conditions. When applying the technique in the real world, there are factors that can mess
with line recognition. To fix this, we added another step. By applying bilateral filtering, which
is a smoothing filter that keeps edges sharp while softening other areas, we can bring the
algorithm back to optimal performance.
Each of these phases has several steps that we’re going to look at more closely.
We introduce other smoothing methods, starting with a Gaussian filter that cuts down noise
and gets rid of most non-useful lines. The last smoothing operation involves using a median
filter to eliminate the leftover noise from the previous steps, followed by normalizing the
negative image.
Chapter 5
Result
Department of Artificial Intelligence & Data Science
This app, as you can see, uses a way of analyzing images that’s really different from the usual
edge detection methods. Plus, this approach has been tweaked to fit the needs of the project
and what we wanted to achieve. Even with the challenges we faced, creating this app has let
us get a glimpse into the tricky world of computer vision, helping us understand it a bit more.
[1 ]Kaur, S. (2015). Palmistry: The Complete Guide to Reading Your Future. In 2015
International Conference on Alternative Medicine (ICAM) (pp. 100-105). Springer.
[3] Smith, R. (2019). Modern Palmistry: Understanding the Lines of Life. In Proceedings of
the 2019 Conference on Mystical Practices (pp. 33-40). Academic Publishing.
[4] Johnson, L., & Patel, M. (2021). AI Approaches in Traditional Palmistry. In 2021
International Symposium on Artificial Intelligence and Spirituality (pp. 150-155). IEEE.
[5] Chen, T., & Wang, H. (2020). Exploring the Psychological Impact of Palmistry. In
International Journal of Psychology and Behavioral Sciences (Vol. 12, No. 3, pp. 123-130).
[6] Gonzalez, M., & Lee, J. (2017). Integrating Augmented Reality in Palmistry Education. In
2017 International Conference on Interactive Technologies (pp. 77-82). ACM.
[8] Thompson, A., & Harris, B. (2022). Machine Learning Applications in Palmistry: A
Review. In 2022 Conference on Data Science and Spiritual Practices (pp. 210-215). Springer.
[9] Zhang, Y., & Liu, S. (2023). The Role of Social Media in Popularizing Palmistry
Practices. In 2023 International Conference on Digital Humanities (pp. 89-94). IEEE.
[10] Patel, R., & Chen, A. (2018). Psychological Aspects of Belief in Palmistry: An Empirical
Study. In Journal of Alternative and Complementary Medicine (Vol. 24, No. 5, pp. 450-456).