BLOCKCHAIN ENABLED SOCIAL NETWORKING
APPLICATION WITH AUTOMATED CONTENT
FILTERING

PROJECT REPORT

Submitted by

AADHITHYANARAYANAN V A (ASI19CS001)
AJAY ANTU (ASI19CS018)
ASHNA SAJU (ASI19CS037)

to

APJ Abdul Kalam Technological University

MAY 2023
CERTIFICATE
We, the undersigned, hereby declare that the project report “Blockchain Enabled Social
Networking Application with Automated Content Filtering”, submitted in partial
fulfillment of the requirements for the award of the degree of Bachelor of Technology of APJ
Abdul Kalam Technological University, Kerala, is a bonafide work done by us under the
supervision of Prof. Ajay Basil Varghese. This submission represents our ideas in our own
words and, where ideas or words of others have been included, we have adequately and
accurately cited and referenced the original sources. We also declare that we have adhered to
the ethics of academic honesty and integrity and have not misrepresented or fabricated any
data, idea, fact, or source in our submission. We understand that any violation of the above
will be cause for disciplinary action by the institute and/or the University and can also evoke
penal action from the sources which have thus not been properly cited or from whom proper
permission has not been obtained. This report has not previously formed the basis for
the award of any degree, diploma, or similar title of any other University.
AADHITHYANARAYANAN V A
AJAY ANTU
ASHNA SAJU
ACKNOWLEDGMENT
At the very outset, we would like to give the first honors to God, who gave us the wisdom
and knowledge to complete this project.
Our extreme thanks to Dr. Sreepriya S., Principal, for providing the necessary facilities for
the completion of this project in our college.
We sincerely extend our thanks to Prof. R. Rajaram, Dean, Projects & Consultancy, for all
the help, motivation and guidance throughout the completion of this project.
We would also like to extend our gratitude to Prof. Manesh T., HOD, CSE, for all the help,
motivation and guidance throughout the project.
We wish to extend our sincere thanks to the project coordinators Prof. Divya K.S. and
Prof. Gripsy Paul and our Project Guide Prof. Ajay Basil Varghese, for their valuable
guidance and support throughout the project.
We also wish to thank all teaching and non-teaching faculty of Computer Science and
Engineering department for their cooperation.
We also thank our Parents, friends and all well-wishers who supported directly or
indirectly during the course of this project.
VISION AND MISSION OF THE DEPARTMENT
VISION
Nurturing globally competent Computer Science and Engineering graduates capable of
taking up challenges in the industry and in Research & Development activities.
MISSION
● Imparting quality education to meet the needs of industry, and to achieve excellence in
teaching and learning.
● Inculcating value-based, socially committed professionalism for development of society.
● Providing support to promote quality research.
ABSTRACT
Chapter 1
INTRODUCTION
1.1 Background
Social media platforms are growing in popularity. Platforms such as Twitter and
Facebook have changed the way people around the world communicate by providing a
comprehensive system for sharing ideas, starting commerce, and pitching ideas for new
careers. People can use social media platforms to share information, greatly enhancing
communication and contact: they can connect with new organizations, find former
classmates and friends, and meet people with similar interests across political, economic,
and geographic boundaries. As a result, social media makes it easy to share knowledge
among millions of Internet users around the world.
However, social media has certain limitations. Academics, government officials, and
users have raised many important issues, including widespread control by a few companies,
the spread of misinformation, debates over limited or unrestricted dialogue, breaches of
confidentiality, and political restrictions. The use of personal information on social
media raises privacy concerns and poses security risks. Many experts are already working
on solutions to this problem by integrating blockchain technology and social media. In
response to significant privacy issues on social media, fake news, and censorship, the
decentralisation of social services has gained popularity in recent years, and numerous firms
have worked together to create noteworthy breakthroughs for implementing blockchain on
social media.
The most popular decentralized method in use today is blockchain technology, which
has been explored in creating the next generation of decentralized social websites. Apart
from Bitcoin, the application of blockchain has extended to many areas recently. Using
blockchain on social media has many benefits, such as improving user privacy, bypassing
restrictions, and allowing users to conduct cryptocurrency transactions on social media
platforms. The concept of data protection, which usually refers to the protection of data of an
individual or group that should not be known to outsiders, is a very complex one. Critical
information is protected in a decentralized data store created by blockchain, making it
extremely difficult to hack the data. Blockchains used in social networks offer the
advantage of secure authentication while maintaining anonymity for people living in
repressive regimes or areas where censorship is a problem. By providing a transparent,
immutable, and certifiable registry of operations and creating a secure peer-to-peer
environment for storing and transferring data, blockchain ensures the provenance,
trustworthiness, and traceability of data.
The key problems to be addressed by this project include:
Inappropriate Content Filtering: Existing systems struggle to efficiently and accurately
filter out inappropriate images, comments, and text from user-generated content.
Manual moderation processes are time-consuming and prone to inconsistencies,
leading to delays in removing objectionable content and exposing users to harmful
material.
Data Privacy and Security: Centralized storage architectures put user data at risk of
unauthorized access, hacking attempts, and data breaches. Users are increasingly
concerned about their privacy and the security of their personal information shared on
social networking platforms.
Lack of Transparency in Content Moderation: Users often have limited visibility into
the content moderation processes employed by social networking platforms. The lack
of transparency raises concerns about biased decision-making, censorship, and the
inconsistent application of content guidelines.
1.4 Scope
The scope of the project encompasses the development of a blockchain-enabled
social networking application with automated content filtering, utilizing deep learning
models for image filtering and natural language processing models for text filtering. The
project aims to address the limitations of existing social networking platforms by
providing a safer and more transparent user experience. The following aspects fall within
the scope of the project:
User Registration and Profile Management: Users will be able to register accounts,
create profiles, and manage their personal information. This includes features such as
profile picture upload, bio, and other relevant details.
Social Networking Functionality: The application will provide standard social
networking features, including the ability to post content, like and comment on posts,
follow other users, and engage in private messaging.
Decentralized Storage: The project will integrate blockchain technology and IPFS for
decentralized storage of user-generated content, ensuring data resilience, transparency,
and censorship resistance. Content, including images, comments, and text, will be
stored securely on the blockchain and IPFS.
Automated Content Filtering: Deep learning models will be implemented to
automatically filter out inappropriate images, offensive comments, and text content.
The image filtering model will analyze uploaded images to identify objectionable
content, while the text filtering model will analyze text-based content for offensive
language, hate speech, or inappropriate comments.
User Interface and User Experience (UI/UX): The application will have an intuitive
and user-friendly interface, providing a seamless and engaging experience for users.
Attention will be given to designing an appealing and responsive UI/UX, considering
factors such as ease of navigation, content visibility, and overall aesthetic appeal.
Privacy and Security: The project will prioritize user privacy and implement
appropriate security measures to safeguard user data. This includes encryption
techniques, secure user authentication, and adherence to data protection best practices.
Chapter 2
LITERATURE SURVEY
identified gaps and enhance privacy protection in IoT environments. By addressing the
challenges of privacy threats and compliance with privacy standards, this research
contributes to the development of robust and privacy-conscious IoT systems, ensuring the
protection of users' personal data in an increasingly interconnected world.
The results demonstrate that the deep learning-based approach outperforms traditional
methods in detecting and classifying inappropriate content in YouTube videos. It showcases
the potential of leveraging deep learning techniques for content filtering and moderation on
video-sharing platforms. The proposed approach not only enhances the accuracy and
efficiency of identifying inappropriate content but also allows for automated and scalable
content moderation, thus promoting a safer and more secure environment for users.
Overall, this paper presents an advanced deep learning-based approach for
inappropriate content detection and classification in YouTube videos. The findings
contribute to the ongoing efforts in developing effective content moderation mechanisms to
mitigate the risks associated with user-generated content on video-sharing platforms.
The suggested model classifies each tweet into one of three groups (hate speech, offensive,
and neutral). For classification it uses POS tagging characteristics, sentiment polarity scores,
and word embeddings, which improve the representation of tweets. The word embedding
and the POS feature are not concatenated; instead, their dot product is computed to
extract just adjectives, adverbs, verbs, and nouns.
The IPFS is utilized as the underlying distributed file system, providing a resilient and
decentralized storage infrastructure for EMRs. The scheme enhances the security of EMRs by
breaking them into fragments and distributing them across multiple IPFS nodes, thereby
reducing the risk of unauthorized access or data loss. Moreover, the use of IPFS allows for
efficient and scalable retrieval of EMRs, promoting interoperability between healthcare
providers.
Overall, the paper highlights the potential of blockchain technology and IPFS in
addressing the security and privacy challenges associated with EMR storage and access. The
proposed scheme offers a promising solution for healthcare systems, enabling secure and
efficient management of EMRs while ensuring patient data confidentiality and integrity.
Chapter 3
PROPOSED SYSTEM
3.1 Objective
The project aims to develop a social media website based on blockchain. There will
be a content filtering mechanism that filters the content uploaded by users using a deep
learning mechanism and automatically removes content inappropriate for all age groups.
A comment section will be available under all posts, and the comments will be filtered
using an NLP model before getting posted. A chat option will be provided for interaction
between two users, and it will be end-to-end encrypted. The system provides maximum
security for user data, as none of the users' personal information or posted content is saved
directly. It supports users with an account and without an account: users without an account
can only view posts, while only users with an account get to access the remaining
functionalities. The user's personal data and details of posted content will be saved in a
blockchain network.
3.2 Methodology
3.2.1 System Architecture
The project consists of a frontend and a backend. The frontend is the part which
provides the users an interface to access the platform's functionalities. The platform consists
of features like adding a post, viewing a post, commenting on a post, real-time chatting, user
registration and authentication, etc., as shown in Fig 3.1. When the user visits the platform,
he/she will have to undergo a registration process to use all the functionalities. Unregistered
users will only get to view the posts and the platform in general. After registration, a user
can view posts, add posts, and comment on any of the posts. Once the user adds a
post or a comment, it is sent to the backend for processing. On reaching the backend, the data
in the post and comments will be sent to a filtering mechanism that consists of deep learning
and NLP models. The models are trained such that they can filter all the inappropriate and
violent content in posts and comments. If the post/comment passes the filtering
mechanism, it is then sent to IPFS (InterPlanetary File System) for distributed storage.
IPFS stores the data in a distributed manner. Blockchain is used with IPFS because it
supports file traceability metadata on a distributed file system like IPFS. IPFS assures us
that the data included in this network are unique (they are uniquely identified by an
identifier) and are protected against modifications, making this data immutable. In the event
that this data is changed, a new "hash" identifier is generated, which would not coincide
with the one stored in the blockchain for the recorded data. After that, the hash is stored
in the blockchain through smart contracts.
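The property described above follows directly from content addressing: the identifier is derived from the data itself, so any modification yields a different identifier that no longer matches the one recorded on-chain. The sketch below illustrates this with SHA-256 as a stand-in; real IPFS CIDs use multihash encoding, but the principle is identical.

```python
import hashlib

def content_id(data: bytes) -> str:
    # Illustrative stand-in for an IPFS CID: an identifier derived
    # purely from the content (real CIDs use multihash/CIDv1 encoding).
    return hashlib.sha256(data).hexdigest()

original = b"user post: hello world"
stored_on_chain = content_id(original)   # recorded via a smart contract

tampered = b"user post: hello w0rld"
assert content_id(original) == stored_on_chain   # unchanged data verifies
assert content_id(tampered) != stored_on_chain   # any edit breaks the match
```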
There is also an admin who can track only the activities happening on the
platform. The admin will not have control over the content that is posted on the platform.
Follow User
● A user is able to follow other accounts on the platform.
● On following a user, the user's public key will be mapped to the public key of
the followers. It can be retrieved by using the user's public key.
Chat Messaging
● Allows users to chat with other users.
● A user has to follow the other user to chat with them.
● All chats will be saved in the blockchain and can be retrieved by using a
unique chat code.
If a user has not created a profile, a new profile will be created; else, the existing
information will be updated.
Each time a post is created, the post will be mapped to the creator's public key in the
blockchain, and it can be retrieved by using the same key.
Text Filtering
● Taking content such as post text and comments.
● Using NLP to classify the textual data into appropriate and inappropriate
content.
● Preventing any inappropriate text data from getting stored in the blockchain.
Image Filtering
● Taking content such as uploaded images.
● Applying a deep learning algorithm on the images to find any violent
content (bloodshed, guns, weapons) in posted images.
● Preventing any inappropriate images from getting stored in IPFS.
IPFS Storage
● Blockchain has a limitation in terms of storage, so large files like images are
stored on IPFS.
● IPFS is a decentralized and distributed storage solution enabling efficient
storage and retrieval of data while maintaining data integrity and
decentralization. It optimizes blockchain storage.
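The on-chain mappings described above (followers keyed by public key, posts keyed by public key, chats keyed by a chat code) can be sketched as a minimal in-memory model. The class and method names below are illustrative, not the project's actual Solidity API.

```python
# Minimal in-memory sketch of the contract state described above.
# Names are hypothetical; the real project stores these mappings in Solidity.

class SocialContractSketch:
    def __init__(self):
        self.followers = {}   # user public key -> list of follower keys
        self.posts = {}       # user public key -> list of post CIDs
        self.chats = {}       # unique chat code -> list of messages

    def follow(self, follower_key: str, followed_key: str) -> None:
        self.followers.setdefault(followed_key, []).append(follower_key)

    def add_post(self, user_key: str, cid: str) -> None:
        self.posts.setdefault(user_key, []).append(cid)

    def send_message(self, chat_code: str, message: str) -> None:
        self.chats.setdefault(chat_code, []).append(message)

state = SocialContractSketch()
state.follow("0xFollower", "0xAlice")
state.add_post("0xAlice", "QmExampleCid")
state.send_message("chat-0xAlice-0xFollower", "hi")

# Everything is retrievable by the same key it was stored under.
assert state.followers["0xAlice"] == ["0xFollower"]
assert state.posts["0xAlice"] == ["QmExampleCid"]
```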
Environmental Requirements
Each user must have a MetaMask account for login.
Whenever a user uploads a post to the platform, the filtering mechanism
should check whether the contents are appropriate for the platform.
Identify specific requirements for text filtering and image filtering, such as
identifying and filtering toxic comments and detecting/filtering inappropriate or
violent images.
System Design:
Design the overall system architecture, including the integration of text filtering
and image filtering modules.
Define the interactions and interfaces between different modules, such as login
using MetaMask, chat functionality, blockchain data storage and retrieval, IPFS
data storage for images, text filtering, and image filtering.
Implementation:
Develop the front-end using technologies like React.js, HTML, CSS, and
JavaScript for user interfaces, including login, chat, post creation, comment
management, and viewing functionalities.
Code the smart contracts using Solidity for blockchain data storage and
retrieval.
Integrate MetaMask for secure authentication and interaction with the Ethereum
blockchain.
Implement text filtering using Python and libraries such as Pandas, Scikit-Learn,
and NLTK for natural language processing tasks.
Implement image filtering using deep learning frameworks like TensorFlow and
Keras.
Test the login module using MetaMask for seamless integration and secure
authentication.
Test the chat module for real-time messaging, user interactions, and data
synchronization.
Validate the accuracy and effectiveness of the text filtering module using
evaluation metrics like accuracy, precision, recall, and F1-score.
Deployment:
Host the image filtering model using Railway; the text filtering model is
deployed on a local system and exposed through ngrok.
Host the front-end components on platforms like Netlify for user accessibility.
Integrate IPFS for storing images, with the IPFS hashes stored in the blockchain.
Regularly update the text filtering and image filtering modules to adapt to
changing content patterns and user requirements.
Cost of Development: The project's initial cost includes expenses related to software
development, infrastructure setup, and acquiring necessary technologies. This may
involve investments in blockchain development frameworks, deep learning libraries,
and natural language processing tools. Additionally, costs associated with designing an
intuitive user interface, conducting testing, and ensuring compliance with data privacy
regulations should be considered.
Growth Opportunities: This includes evaluating the feasibility of expanding the user
base, introducing new features, and exploring partnerships. Growth opportunities
contribute to the long-term economic feasibility and sustainability of the project.
Chapter 4
SYSTEM DESIGN
Chapter 5
ARCHITECTURE
Clearly define the purpose and goals of your DApp. For example, you may want to
create a decentralized crowdfunding platform or a decentralized file storage
system. Determine the specific functionalities and features your DApp should offer.
Consider factors such as user registration, transaction processing, data storage, etc.
This phase includes selecting a blockchain platform that aligns with the DApp
requirements. This project uses Ethereum based blockchain. Ethereum has become a leading
platform for blockchain-based applications, providing a foundation for decentralized finance,
non-fungible tokens (NFTs), and a wide range of innovative DApps. Its open-source nature,
extensive developer community, and continuous evolution make Ethereum a prominent force
in the blockchain industry, driving the adoption of decentralized technologies and reshaping
various sectors of the global economy. Alternative platforms like EOS, Tron, or Binance
Smart Chain can be used based on factors such as scalability, transaction costs, and
community support.
The logic and functionalities of the DApp are written in this phase. This includes defining
functions that handle user interactions, perform calculations, enforce rules, and update the
contract state. As smart contracts are crucial to the working of a blockchain, they should be
properly written, free of any bugs and errors. The contract should be properly audited.
This phase includes testing the smart contracts to check if all the functionalities are
working properly. Tools like Remix, Hardhat, and Truffle can be used to create tests for the
smart contract to test it in various aspects.
After the smart contract has been developed and tested, the next phase involves
deploying it on a blockchain platform. Tools like Remix, Truffle, and Hardhat can be used to
compile and deploy the smart contracts. First, for testing purposes, the smart contract is
deployed on Ganache, which is a tool for creating a local blockchain. Ganache also gives test
accounts with which we can test the functionalities of the DApp. Then the smart contract is
deployed on Ethereum-based testnets like Sepolia. The testnets are similar to the real
blockchain network. Once properly tested, the smart contract can be deployed on the
Ethereum mainnet.
F. Frontend Development
The frontend for the application can be developed using frameworks like React.js,
Angular, or Vue.js. The project uses React.js. React.js is a popular JavaScript library used for
building user interfaces (UIs) for web applications. It was developed by Facebook and
released as an open-source project. React allows developers to create reusable UI components
and efficiently manage the application's state, resulting in scalable and high-performing web
applications. Libraries such as Web3.js and Ethers.js can be used to interact with smart
contracts from the frontend. This project uses Ethers.js. Ethers.js simplifies the process of
interacting with smart contracts. It provides a Contract API that allows developers to interact
with smart contracts by creating contract instances from their ABI (Application Binary
Interface). Developers can easily call contract functions, send transactions, and listen to
contract events. MetaMask is used in the frontend to interact with the blockchain. MetaMask
is a popular browser extension and cryptocurrency wallet that allows users to interact with
Ethereum-based decentralized applications (DApps) directly from their web browsers. When
interacting with DApps, MetaMask securely signs Ethereum transactions on the user's behalf.
It prompts the user to review and approve transaction details before signing, ensuring that
users have full control over their funds and preventing unauthorized transactions.
A. Install Dependencies
Install the necessary dependencies for working with IPFS and web3.storage, such as the
IPFS JavaScript library and the web3.storage client library.
B. Connect to web3.storage API
Connect to the web3.storage API using the web3.storage client library. This allows you
to interact with the web3.storage service for storing and retrieving data.
C. Initialise IPFS
The access token required to authenticate and access the IPFS network is retrieved. A new
instance of the web3.storage client is created and initialized with the access token.
D. Upload Image
Obtain the image file to be uploaded from the user post and upload it to IPFS. Use the
web3.storage client library to store the Blob object on IPFS. This will upload the image to
IPFS and return a CID (Content Identifier) that uniquely identifies the stored image.
E. Store CID in Blockchain
Once you have the CID for the uploaded image, you can store it in the blockchain. This
can be done by invoking a smart contract function or making a transaction that stores the CID
in the blockchain along with the post.
F. Retrieving Images
To retrieve an image, you can retrieve the CID from the blockchain and use the
web3.storage client library to retrieve the image data from IPFS based on the CID.
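The end-to-end flow of steps A through F can be mirrored in a short simulation. The real project uses the web3.storage JavaScript client; here a dictionary stands in for IPFS and another for the smart contract, with SHA-256 as an illustrative CID (real CIDs use multihash encoding).

```python
import hashlib

# Toy stand-ins for IPFS and the blockchain, mirroring steps A-F above.
ipfs_store = {}     # CID -> bytes   (stands in for web3.storage/IPFS)
chain_posts = {}    # post id -> CID (stands in for the smart contract)

def upload_image(data: bytes) -> str:
    # Step D: upload returns a content-derived identifier.
    cid = hashlib.sha256(data).hexdigest()
    ipfs_store[cid] = data
    return cid

def store_cid(post_id: str, cid: str) -> None:
    # Step E: in practice, a smart contract transaction records the CID.
    chain_posts[post_id] = cid

def retrieve_image(post_id: str) -> bytes:
    # Step F: look up the CID on-chain, then fetch the data from IPFS.
    return ipfs_store[chain_posts[post_id]]

cid = upload_image(b"\x89PNG...image bytes")
store_cid("post-1", cid)
assert retrieve_image("post-1") == b"\x89PNG...image bytes"
```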
Python is used for the implementation, and a number of libraries, including Pandas and
Scikit-Learn, are used. Google Colaboratory, a web-based platform that offers an
integrated development environment for Jupyter notebooks, is used to build the technique.
We used a toxic comment detection dataset that includes a wide variety of remarks that
have been classified as toxic or non-toxic. To reduce noise such as HTML elements,
punctuation, and special characters, the dataset was preprocessed. To reduce dimensionality,
stop words were also removed, and the remaining text was tokenized and lemmatized.
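The preprocessing steps above can be sketched in a few lines. This is a simplified version: the abbreviated stop-word list is illustrative, and lemmatization (done with NLTK in the project) is omitted here.

```python
import re

# Abbreviated, illustrative stop-word list; the project uses NLTK's full list.
STOP_WORDS = {"the", "a", "an", "is", "are", "to", "and", "of"}

def preprocess(comment: str) -> list[str]:
    text = re.sub(r"<[^>]+>", " ", comment.lower())  # strip HTML elements
    text = re.sub(r"[^a-z\s]", " ", text)            # drop punctuation/special chars
    return [t for t in text.split() if t not in STOP_WORDS]

assert preprocess("<b>The movie IS great!!!</b>") == ["movie", "great"]
```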
B. Feature Extraction
C. Classification Model
Using the Logistic Regression technique and the one-vs-rest strategy, we categorised
comments into toxic and non-toxic categories. Logistic regression, a popular binary
classification methodology, is extended by the one-vs-rest method to handle multiclass
classification tasks. The model was trained using the pre-processed text input and related
labels.
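The TF-IDF plus one-vs-rest logistic regression pipeline described above can be sketched with Scikit-Learn. The tiny corpus below is purely illustrative; the project trains on a full toxic-comment dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real training uses thousands of labelled comments.
texts = ["you are awful and stupid", "have a nice day", "I will hurt you",
         "what a lovely photo", "you idiot", "great work everyone"]
labels = ["toxic", "non-toxic", "toxic", "non-toxic", "toxic", "non-toxic"]

model = make_pipeline(
    TfidfVectorizer(),                        # text -> TF-IDF feature vectors
    OneVsRestClassifier(LogisticRegression()) # one binary classifier per class
)
model.fit(texts, labels)
prediction = model.predict(["you are stupid"])[0]
print(prediction)
```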
We used the Open Images V6 dataset of violent images, which includes a wide range of
pictures classified as violent or not. We also included photos from the ImageNet dataset and
the Iris dataset to improve the model's capability to reliably detect and classify different
visual components. The datasets were pre-processed by normalising the pixel values and
resizing the photos to a constant resolution.
B. Model Architecture
Three convolutional layers with 16, 32, and 16 filters apiece, each employing a 3x3
kernel and a stride of 1, make up the architecture. After each convolutional layer, the ReLU
activation function is used to add non-linearity. After each convolutional layer, max pooling
layers with a default 2x2 pool size are added to limit the number of spatial dimensions and
capture significant features. The flattened output feature maps are coupled to a dense layer of
256 units that uses the ReLU activation function. To lessen overfitting, a dropout layer with a
predetermined dropout rate is implemented. Finally, binary classification is performed using
a dense layer with a single unit and sigmoid activation.
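The layer arithmetic implied above can be checked directly. Assuming, hypothetically, a 256x256 input, 'valid' convolutions (no padding), and 2x2 pooling, the spatial sizes through the three conv/pool stages work out as follows:

```python
def conv_out(n: int, kernel: int = 3, stride: int = 1) -> int:
    # 'Valid' convolution output size: (n - kernel) // stride + 1
    return (n - kernel) // stride + 1

def pool_out(n: int, pool: int = 2) -> int:
    # 2x2 max pooling halves the spatial dimension (floor division).
    return n // pool

filters = [16, 32, 16]   # per the architecture described above
size = 256               # input side length is an assumption for illustration
for _ in filters:        # three conv (3x3, stride 1) + pool (2x2) stages
    size = pool_out(conv_out(size))
flat_units = size * size * filters[-1]   # units fed into the 256-unit dense layer
print(size, flat_units)  # 256 -> 127 -> 62 -> 30 spatially
```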
Using the pre-processed dataset, the image filtering model was trained. Mini-batch
training is a technique we used to update the model's weights by using a portion of the dataset
in each training iteration. The model's loss function was minimised and its predictive power
was increased by using an optimisation technique like stochastic gradient descent (SGD) or
Adam. The performance of the model was optimised through hyperparameter adjustment,
which included learning rate, batch size, and number of epochs.
We have used a common assessment criterion, such as accuracy, precision, recall, and
F1-score, to assess the effectiveness of the image filtering model. A hold-out validation set or
cross-validation methods were used to validate the model in order to evaluate its robustness
and generalizability.
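The assessment criteria named above follow the standard definitions. As a quick illustration (the confusion-matrix counts below are hypothetical, not results from this project):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    # Standard definitions of the metrics named above.
    precision = tp / (tp + fp)          # of flagged items, how many were right
    recall = tp / (tp + fn)             # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts for the "violent" class on a validation set.
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))
```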
Size: The dataset consists of a large number of comments, typically ranging from
thousands to millions, depending on the specific dataset chosen. The size of the dataset
allows for robust training and evaluation of the NLP model.
Comment Text: Each comment in the dataset is represented as a text string. The
comments are usually sourced from online platforms, such as social media platforms or
discussion forums, where toxic comments are prevalent.
Labels: The comments in the dataset are labelled to indicate their toxicity. Commonly
used labels include binary labels (toxic/non-toxic) or multi-class labels (e.g., toxic,
severe toxic, obscene, threat, insult, identity hate) to capture different types of toxicity.
The labels are assigned by human annotators who review and classify the comments
based on their content.
The toxic comment detection dataset serves as the foundation for training and
evaluating NLP models that aim to identify and classify toxic comments accurately. It
provides a representative sample of real-world comments with labelled toxicity,
enabling the development of effective models for automated toxic comment detection
and moderation.
Sources:
OpenImages V6: This dataset contains a large collection of images with annotations for
various object classes. Images that depict violent scenes or contain objects related to
violence are selected from the OpenImages V6 dataset to contribute to the violence
image detection dataset.
Iris dataset: The Iris dataset is another source of images used for violence image
detection. It provides a curated collection of images with annotations specifically
focused on violence-related content.
ImageNet: ImageNet is a widely used dataset for object recognition. Images from the
ImageNet dataset that are relevant to violent content or contain objects related to
violence are included in the violence image detection dataset.
Data Diversity: The dataset aims to capture a diverse range of violent content, including
images depicting physical violence, weapons, aggressive behavior, or scenes associated
with violent acts. The inclusion of various types of violence allows the model to learn
patterns and features specific to different violent behaviors.
Dataset Size: The violence image detection dataset can vary in size, depending on the
number of images selected from the sources. It is typically designed to provide a
sufficient number of training and testing samples to train and evaluate the violence
image detection model effectively.
Preprocessing: The images in the dataset may undergo preprocessing steps such as
resizing, cropping, or normalization to ensure they are in a standardized format for
training and evaluation.
The violence image detection dataset, created using the OpenImages V6 dataset, Iris
dataset, and ImageNet dataset, provides a collection of annotated images that enable the
development and evaluation of models for automated violence detection in images. By
leveraging images from multiple sources, the dataset aims to cover a wide range of
violent content, enhancing the model's ability to detect violence accurately.
The "Login Using MetaMask" module provides a secure and convenient login
mechanism for users of the blockchain-enabled social networking application. It leverages the
MetaMask browser extension, which acts as a digital wallet and identity provider, to enable
users to authenticate themselves on the application.
With MetaMask installed in their browser, users can securely create and manage
Ethereum accounts, which serve as their digital identities within the blockchain network. The
module integrates MetaMask into the frontend of the application, allowing users to initiate the
login process by connecting their MetaMask account.
When a user clicks on the login button, the application prompts them to select their
MetaMask account. MetaMask securely handles the account authentication and private key
management, ensuring the safety of the user's credentials. Once the user approves the login
request, the application establishes a secure connection with MetaMask and retrieves the
necessary account information.
The retrieved account information, such as the Ethereum address associated with the
MetaMask account, is then used by the application to identify the user and grant them access
to their blockchain-enabled social networking account. This process ensures that only
authorized users with valid MetaMask accounts can log in to the application, enhancing
security and preventing unauthorized access.
The module's workflow begins with the preprocessing of text data, where it tokenizes
the input text into individual words and removes any irrelevant or stop words. Next, the
TF-IDF vectorizer transforms the text data into numerical feature vectors, capturing the
significance of each word in relation to the entire dataset. This step enables the module to
analyze the textual content more effectively.
The logistic regression model, trained on labeled data, is then employed for
classification. The one-vs-rest approach allows the model to handle multiple classes
simultaneously, determining the probability of each class for a given input text. Based on
these probabilities, the module identifies the most appropriate class label for the text, enabling
accurate content filtering.
To facilitate integration with the social media platform, the module provides a RESTful
API implemented using Flask. The API receives text inputs from users and returns the
predicted class label and associated probabilities, allowing the platform to take appropriate
actions based on the content classification results.
For deployment and accessibility, the module utilizes ngrok, a secure tunneling service
that provides public URLs for local development environments. This enables seamless
integration of the text filtering module into the social media platform, ensuring real-time
content moderation and a safer user experience.
The CNN model is constructed with multiple layers to effectively extract features from
the input images. The architecture consists of convolutional layers, max-pooling layers, and
dense layers. The model begins with a 3x3 convolutional layer with 16 filters, followed by a
max-pooling layer to downsample the feature maps. This process is repeated with a 3x3
convolutional layer with 32 filters and another max-pooling layer. Finally, a 3x3
convolutional layer with 16 filters is added, followed by a max-pooling layer.
After the convolutional layers, the feature maps are flattened to a 1D vector and passed
through a dense layer with 256 neurons, using the ReLU activation function to capture higher-
level representations. A dropout layer is incorporated to prevent overfitting by randomly
setting a fraction of the input units to 0 during training. The final dense layer with a single
neuron and the sigmoid activation function is used for binary classification, providing a
probability score for the presence or absence of the target class.
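The shrinking of the feature maps through this stack can be checked with a little arithmetic. The sketch below assumes a 256x256 input (the image API later resizes uploads to 256x256), unpadded 3x3 convolutions with stride 1, and 2x2 max-pooling with stride 2; the actual layer settings may differ.

```python
def conv2d_shape(h, w, kernel=3):
    """'Valid' (unpadded) convolution with stride 1 shrinks each side by kernel-1."""
    return h - (kernel - 1), w - (kernel - 1)

def maxpool_shape(h, w, pool=2):
    """2x2 max-pooling with stride 2 halves each side (floor division)."""
    return h // pool, w // pool

# Assumed 256x256 input; three conv/pool stages with 16, 32, 16 filters.
h, w = 256, 256
filters_per_stage = [16, 32, 16]
for f in filters_per_stage:
    h, w = conv2d_shape(h, w)   # 3x3 convolution
    h, w = maxpool_shape(h, w)  # 2x2 max-pooling
flattened = h * w * filters_per_stage[-1]
```

Under these assumptions the three stages reduce a 256x256 input to 30x30 feature maps with 16 channels, i.e. a 14,400-element vector entering the 256-neuron dense layer.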
The module is integrated with a Flask framework to create an API that handles image
uploads and returns the classification results. The API receives images from users and
processes them through the trained CNN model. The model predicts the probability of the
target class, which can be interpreted as the likelihood of the image belonging to a specific
category. The classification results are then returned through the API, allowing the social
media platform to take appropriate actions based on the image classification.
For deployment and accessibility, the module utilizes Railway, a platform for hosting
web applications. This ensures that the image classification module is easily accessible and
can seamlessly integrate with the social media platform, providing real-time image
classification capabilities.
Users can create and publish posts on the social networking application.
The post content, including text, images is stored securely on the blockchain.
Users can view posts created by themselves and other users on the platform.
Users can view and interact with comments associated with posts.
Following Users:
Users have the ability to follow other users within the social networking
application.
Chat Messaging:
Users can engage in chat messaging with other users on the platform.
Users have the ability to manage and update their user profile information.
Profile details such as username, bio, profile picture, and other relevant
information are stored on the blockchain.
Users can modify their profile information and view the profiles of other users.
Users can explore and view posts from other users on the social networking
application.
Users can like, comment, or engage with posts from other users.
Users can upload images as part of their posts within the social networking
application. The module securely stores these images in IPFS, which breaks them down into
smaller chunks and distributes them across the IPFS network. After uploading an image, the
module generates a unique hash value (content identifier) for each image file.
Blockchain Integration:
The generated hash for each image is stored on the blockchain, associated with the
corresponding post or user profile. Storing the hash on the blockchain provides a reference to
the image stored in IPFS and ensures the immutability and traceability of the image's location.
Image Retrieval:
When users view posts or profiles that include images, the module retrieves the
corresponding hash from the blockchain. Using the hash, the module interacts with the IPFS
network to fetch the image from the distributed storage. Users can seamlessly view the images
associated with posts or user profiles, enhancing their visual experience within the
application.
By leveraging IPFS for image storage, the module provides a robust and decentralized
solution, ensuring the availability and integrity of images posted within the blockchain-
enabled social networking application. The combination of blockchain and IPFS technologies
allows for a seamless integration of image storage and retrieval, offering users a reliable and
efficient way to share and view images within the application.
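The content-addressing property that makes this scheme work can be sketched with a plain SHA-256 digest. Real IPFS CIDs wrap a digest in multihash/CID encoding rather than using a bare hex digest, so this is only an illustration of the principle; the byte strings below are fake stand-ins for image data.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Toy content identifier: a hex SHA-256 digest of the file bytes.
    Real IPFS CIDs wrap such a digest in multihash/CID encoding, but they
    share the key property shown here: identical content yields an
    identical identifier, so duplicate uploads deduplicate naturally."""
    return hashlib.sha256(data).hexdigest()

image_a = b"\x89PNG...fake image bytes..."
image_b = b"\x89PNG...fake image bytes..."   # byte-for-byte duplicate of image_a
image_c = b"\x89PNG...different image..."

cid_a, cid_b, cid_c = content_id(image_a), content_id(image_b), content_id(image_c)
```

Because the identifier is derived from the content itself, storing the hash on-chain is enough to later verify that the bytes fetched from IPFS are the bytes that were originally posted.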
Remix IDE: An integrated development environment for writing and compiling smart
contracts.
Ganache: A tool for creating a local blockchain for testing smart contracts.
Sepolia (or other Ethereum-based testnet): A test network for deploying and testing
smart contracts in an environment similar to the mainnet.
Polygon: A scaling solution and Ethereum sidechain, suitable for deploying smart
contracts and DApps. It offers higher transaction speed and throughput at lower cost.
IPFS (InterPlanetary File System): A decentralized and distributed file storage system
used for storing images.
MetaMask: A browser extension that allows users to interact with the Ethereum
blockchain from the frontend.
Ethers.js: A library for interacting with Ethereum smart contracts, simplifying the
process of sending transactions and retrieving data.
Python: A versatile language for data manipulation, analysis, and machine learning.
Pandas: A data manipulation library for handling and analyzing structured data.
Scikit-Learn: A machine learning library that provides tools for classification and
evaluation.
NLTK (Natural Language Toolkit): A library for natural language processing tasks.
Flask: A web framework for building web applications in Python, used here to create the APIs.
TensorFlow: An open-source deep learning framework for building and training neural
networks.
Keras: A high-level neural networks API that can work with TensorFlow as its backend.
Open Images V6 dataset: A dataset containing labeled images, including violent and
non-violent categories.
ImageNet dataset: A large-scale image dataset widely used for training deep learning
models.
Iris dataset: A classic benchmark dataset of iris flower measurements, potentially used
to evaluate the model's classification capabilities.
5.8.7 Deployment:
Netlify: A popular cloud platform for deploying static websites and frontend
applications.
Railway: A deployment platform for machine learning models, which allows hosting
and serving trained models.
Local system with ngrok: A tool that creates a secure tunnel to expose a local
server to the internet, allowing the text filtering model to be deployed and accessed
locally.
Alchemy: A blockchain developer platform used for interacting with the Ethereum
blockchain. It provides powerful APIs and developer tools that simplify the process of
building blockchain applications.
Thunder Client: An API testing tool used to validate and assess the functionality of the
API endpoints.
Remix, Hardhat: Tools for compiling, testing, and deploying smart contracts.
The project utilizes the Ethereum blockchain for its decentralized and secure data
storage and retrieval capabilities. The connection between the project and the Ethereum
blockchain is established through the use of smart contracts.
Smart contracts are self-executing contracts with the terms of the agreement directly
written into code. In this project, smart contracts are deployed on the Ethereum blockchain to
define the logic and functionality of the application. The smart contracts facilitate interactions
and transactions between different entities within the system.
To connect with the deployed smart contracts on the Ethereum blockchain, the project
utilizes the Ethers.js library. Ethers.js is a JavaScript library that simplifies the process of
interacting with Ethereum smart contracts. It provides a set of APIs and utilities for sending
transactions, calling contract functions, and retrieving data from the blockchain. Using Ethers.js,
the project establishes a connection with the Ethereum network by specifying the network
provider and the address of the deployed smart contract. Once the connection is established, the
project can interact with the smart contract by calling its functions and sending transactions.
Ethers.js provides convenient methods for encoding function calls, signing transactions, and
handling the communication with the Ethereum network. It abstracts away the complexities of
low-level Ethereum interactions and provides a higher-level interface for developers to interact
with the smart contract.
By leveraging Ethers.js and connecting with the address of the deployed smart contract,
the project can securely and efficiently interact with the Ethereum blockchain. It can read data
from the smart contract, update its state, and perform transactions based on the defined contract
functions. This integration with the blockchain enables the project to leverage the decentralized
and immutable nature of Ethereum for reliable data storage and retrieval.
The code segment provided below demonstrates the interaction with a deployed smart
contract using ethers.js.
This line declares a constant variable contractAddress and assigns it the value of the Ethereum
address where the smart contract is deployed. This address uniquely identifies the deployed
contract on the blockchain.
This line declares a constant variable contractAbi and assigns it the value of the contract's ABI
(Application Binary Interface). The ABI defines the interface and functions of the smart
contract, allowing the interaction with its methods.
This line creates a new instance of the Web3Provider class from ethers.js. It uses the ethereum
object provided by the browser's Ethereum provider to connect to the Ethereum network. The
provider instance will be used to send transactions and interact with the deployed contract.
This line retrieves the Signer object from the provider. The Signer is responsible for
signing Ethereum transactions and authenticating the user. In this case, it gets the Signer
associated with the current Ethereum account.
This line creates a new instance of the Contract class from ethers.js. It represents the deployed
smart contract and provides an interface to interact with its functions and properties.
The contract object is instantiated by passing the contract address, ABI, and Signer
object.
The code to integrate IPFS into the frontend uses the web3.storage library, which allows
files to be uploaded to IPFS and the Content Identifier (CID) to be retrieved.
An instance of Web3Storage is created using the API token. The script collects all the
files from the provided paths using the getFilesFromPath function and stores them in the files
array. The number of files to be uploaded is logged.
The put method of Web3Storage is called with the files array to upload the files to
IPFS. It returns a CID (Content Identifier) representing the uploaded content. The CID is
logged to the console. The main function is then executed.
To run the script, provide the API token using the --token flag, followed by the paths of
the files or directories you want to upload.
Replace <YOUR_TOKEN> with your actual API token, and provide the paths of the files or
directories you want to upload. The script will output the CID associated with the uploaded
content.
Chapter 6
TESTING
6.1 Test Case of Module 1 (Login Using MetaMask)
To conduct manual testing for the login module using MetaMask, several steps need to
be followed to ensure the proper functioning and integration of the application with the
MetaMask extension.
First, it is important to verify that the application's user interface includes a login option
specifically for MetaMask. This ensures that users are aware of the MetaMask integration and
can easily access the login functionality.
When testing the MetaMask connection, clicking on the MetaMask login option should
trigger the MetaMask extension to open and prompt the user for authorization. It is crucial to
ensure that the authentication process occurs seamlessly and that the user is able to authenticate
with MetaMask using their selected account.
During the authentication process, the application should request the necessary
permissions from MetaMask, such as access to the account address and network. After
authorization, the application should retrieve the connected MetaMask account address
accurately. This address should match the authorized MetaMask account, confirming the
successful connection and authentication.
The NLP (Natural Language Processing) model developed for profanity detection and
filtering underwent comprehensive testing to assess its performance and accuracy. The model
was designed to check, filter, and classify text based on profanity content using logistic
regression and TFIDF vectorization techniques.
During the testing phase, the model's performance was evaluated using a separate
testing dataset that accounted for 20% of the total available data. The results of the testing
process provided valuable metrics to gauge the model's effectiveness in detecting and filtering
profanity in text.
The results are shown in Table xx. These results demonstrate the effectiveness of the NLP model
in accurately identifying and filtering profanity in text content. The high accuracy of 0.97
signifies that the model has learned to differentiate between profane and non-profane text
reliably. The precision score of 0.91 indicates a low false-positive rate, ensuring
that the model correctly identifies profanity without incorrectly flagging non-offensive text.
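These metrics follow from the standard confusion-matrix formulas, which can be sketched as follows. The confusion counts below are hypothetical numbers chosen only so the formulas reproduce figures close to the reported 0.97 accuracy and 0.91 precision; they are not the project's actual test results.

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts:
    tp/fp = true/false positives, tn/fn = true/false negatives."""
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)          # how many flagged texts were truly profane
    recall    = tp / (tp + fn)          # how many profane texts were caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a 1000-sample test split (illustrative only).
acc, prec, rec, f1 = classification_metrics(tp=91, fp=9, tn=880, fn=20)
```

A low false-positive count (fp) is what keeps precision high, i.e. what prevents the filter from incorrectly flagging non-offensive text.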
To evaluate the performance of the model's API, Thunder Client was used; the result is shown
in Figure xx. The API was tested using JSON-formatted data, and a response time of 680
milliseconds was recorded. This rapid response time ensures that users of the social media platform
can enjoy real-time profanity detection and filtering, enhancing their experience and
maintaining a respectful online environment. In conclusion, the extensive testing conducted on
the NLP model demonstrated its high accuracy and reliability in detecting and filtering
profanity in text. The model's performance metrics, combined with the efficient API response
time, guarantee an effective and seamless user experience. By employing logistic regression
and TFIDF vectorization techniques, the model achieves a superior level of accuracy, precision,
recall, and F1 score, making it a valuable component of the social media platform's text
filtering module.
During testing, the model achieved an impressive accuracy of 98%. This indicates that the
model performs exceptionally well in correctly classifying images based on the trained
categories. The testing dataset was carefully prepared and comprised 20% of the total available
data, ensuring a diverse and representative set of images for evaluation.
Model                               Accuracy
Proposed CNN model                  0.98
Pre-trained model (e.g., VGG16)     0.89
In addition to accuracy, the performance of the model was assessed to measure its
efficiency in processing requests. To evaluate the response time of the model's API, a tool
called Thunderclient was employed. The API was tested using a data size of 43 bytes. The
results demonstrated that the API provided an output in an impressive time of 986 milliseconds.
This quick response time ensures that users can experience smooth and efficient image
classification within the social media platform. The result of API testing is shown below.
This test ensures that users can follow other users on the SocialMedia contract.
It deploys the contract and obtains the addresses of two users (owner1 and owner2).
It updates the profile of owner1 using the updateProfile function.
Then, it follows owner2 using the follow function and retrieves the list of followers for
owner1.
Finally, it verifies that the follower's public key matches owner2's address.
This test checks the validation to prevent a user from following themselves.
It deploys the contract and obtains the addresses of two users (owner1 and owner2).
It verifies that trying to follow owner1 with their own address reverts with the error
message "Cannot follow yourself."
This test ensures that a user must update their profile before following others.
It deploys the contract and obtains the addresses of two users (owner1 and owner2).
It verifies that trying to follow owner2 without updating owner1's profile reverts with the
error message "Update profile first."
It deploys the contract and obtains the addresses of two users (owner1 and owner2).
It updates owner1's profile using the updateProfile function.
Then, it follows owner2 using the follow function.
It sends a message from owner1 to owner2 with a specific message content.
After sending the message, it retrieves the message from owner2's inbox and verifies that
the message content matches the provided value.
The test case verifies that the post content of the user's post matches the post content
fetched earlier.
The creation and accessibility of URLs for accessing the stored images should also be
tested. By obtaining the CID of the uploaded file, the URL should be generated following the
specified format. Clicking on the URL or accessing it through a web browser should display
the associated image correctly. The URL should remain functional even after multiple accesses
or refreshes.
To handle duplicate image storage, the testing process should confirm that storing the
same image multiple times results in the same hash being returned. By uploading a previously
stored image and comparing its hash with the previously recorded hash, it should be verified
that both hashes are identical, indicating consistency in the storage process. By conducting
comprehensive manual testing based on these test cases, the implementation of IPFS storage
using Web3Storage can be validated: successful storage of files on IPFS, creation of hashes,
proper creation and accessibility of URLs for accessing images, and consistency in duplicate
image storage.
Chapter 7
SAMPLE CODE
const connectWallet = async () => {
  if (currentAccount) {
    return 1;
  }
  const accounts = await ethereum.request({ method: "eth_requestAccounts" });
  setCurrentAccount(accounts[0]);
  const contractAddress = process.env.REACT_APP_BLOCKCHAIN;
  const contractAbi = abi.abi;
  const provider = new ethers.providers.Web3Provider(window.ethereum);
  const signer = provider.getSigner();
  const contract = new ethers.Contract(contractAddress, contractAbi, signer);
  setState({ provider, signer, contract });
  return 1;
};
const filterText = async (input, flag = 0) => {
  let data;
  if (flag === 1) {
    data = {
      comment: input
    };
  } else {
    data = {
      comment: input.post_text
    };
  }
  const result = axios.post(`${process.env.REACT_APP_ML}/media/text`, data);
  return result;
};
const filterImage = async (input)=>{
const formData = new FormData()
formData.append('file', input[0]);
const response = await fetch(`${process.env.REACT_APP_ML}/media/image`,
{ method: 'POST',
mode: 'cors',
body: formData
});
const result = await response.json()
return result
}
useEffect(() => {
checkIfWalletIsConnect();
// eslint-disable-next-line
},[]);
return (
<SocialMediaContext.Provider
value={{
connectWallet,
currentAccount,
filterText,
state,
post,
setPost,
isVisible,
setIsvisible,
openView,
openViewModal,
closeViewModal,
openPeople,
openPeopleModal,
closePeopleModal,
filterImage,
filterFeedback,
setFilterFeedback,
progress,
setProgress
}}
>
{children}
</SocialMediaContext.Provider>
);}
function Chatting() {
const { state,currentAccount } = useContext(SocialMediaContext);
const { contract } = state
const { userId } = useParams();
const [chatUser,setChatUser] = useState(null)
const [chatText,setChatText] = useState("")
const [chats,setChats] = useState(null)
const handleChange = (e)=>{
setChatText(e.target.value)
}
const handleSubmit = async()=>{
await contract.sendMessage(userId,chatText);
setChats([...chats,{
sender:1,
message:chatText
}])
}
const myStyle = {
backgroundColor:"#666585",
color:"white",
borderBottomRightRadius:"8px",
borderTopLeftRadius: "8px",
borderBottomLeftRadius: "8px",
}
const myStyle1={
alignItems:"flex-end"
}
const senderStyle = {
backgroundColor:"#505050",
color:"white",
borderTopRightRadius:"8px",
borderTopLeftRadius: "8px",
borderBottomRightRadius: "8px",
}
const senderStyle1={
alignItems:"flex-start"
}
useEffect(()=>{
const getProf = async()=>{
const fri = await contract.getProfile(userId)
const chats = await contract.readMessage(userId)
setChats(chats)
setChatUser(fri)
}
contract && getProf()
},[contract])
return (
<>
<SideBar />
<div className="user_chat_main_container">
<div className="user_chat_container">
<div className="user_chat_box">
<div className="user_top_part">
<img src={pic} alt="" />
<div className="names">
<p className='name'>{chatUser && chatUser.name}</p>
<p className='username'>@{chatUser && chatUser.username}</p>
</div>
</div>
<div className="chat_part">
{
chats && chats.length>0 && chats.map((ch,index)=>{
return(
<div className="chat_item" style={chatUser.userId != ch.sender ? myStyle1 : senderStyle1}
  key={index}>
  <div className="chat_item_container" style={chatUser.userId != ch.sender ? myStyle : senderStyle}>
    <p className="sender">{chatUser.userId != ch.sender ? "you" : chatUser.name}</p>
    <p className="chat_text">{ch.message}</p>
  </div>
</div>
)}) }
</div>
<div className="send_part">
<input name="chat" type="text" placeholder='Send a message.....' value={chatText}
  onChange={handleChange} />
<div className="send_chat" onClick={handleSubmit}>
<i className='bx bxs-send'></i>
</div></div></div></div></div>
</>)}
export default Chatting
function Postitem(props) {
const [isVisible,setIsVisible] = useState(false)
const handleShowComment = (e)=>{
const ele=e.target.parentElement.parentElement.parentElement.querySelector
(".comments_main_container");
const visibility = ele.getAttribute('data-visible');
if (visibility === "true") {
ele.setAttribute("data-visible", false);
setIsVisible(false)
}
else{
ele.setAttribute("data-visible", true);
setIsVisible(true)
}}
return (
<div className="post_item">
<div className="user_details">
<img src={pic1} alt="" />
<div className="name_details">
<p className="username">@{props.user.username}</p>
<p className="name">{props.user.name}</p>
</div>
</div>
<div className="post_contents">
<span>{props.postText}</span>
<img src={props.pic?`https://${props.pic}`:""} alt="" />
</div>
<div className="post_controls">
<div className="like">
<i className="bx bx-like" />
<span>Like</span>
</div>
<div className="comment" onClick={handleShowComment}>
<i className="bx bx-message-dots" />
<span>Comment</span>
</div>
<div className="share">
<i className='bx bx-share'></i>
<span>Share</span>
</div>
</div>
<div className="comments_main_container" data-visible="false">
{isVisible && <CommentMainItem postId={props.postId} />}
</div>
</div>
)}
export default Postitem
function Profile() {
const { state, currentAccount } = useContext(SocialMediaContext);
const { contract } = state;
const [disableInput, setDisableInput] = useState(true);
const [inputVal, setInputVal] = useState("");
const handleEnable=()=>{
setDisableInput(false)
}
const handleChange=(e)=>{
setInputVal(e.target.value)
}
const handleUpdate=async ()=>{
contract && await contract.updateProfile(inputVal,currentAccount);
}
useEffect(()=>{
const getProf = async()=>{
const prof = await contract.getProfile(currentAccount)
setInputVal(prof.accountCid)
}
getProf()
// eslint-disable-next-line
},[contract])
return (
<>
<SideBar />
<div className="profile_container">
<div className="profile_content">
<div className="edit" onClick={handleEnable}>
<i className='bx bxs-edit-alt'></i>
</div>
<input value={inputVal} type="text" name="username" disabled={disableInput} className='profile_name'
  onChange={handleChange} />
<button onClick={handleUpdate}>Update</button>
</div>
</div>
</>)}
export default Profile
function UserPost() {
const { state,currentAccount } = useContext(SocialMediaContext);
const { contract } = state
const [userposts,setUserposts] = useState([]);
useEffect(()=>{
const getUserPost = async()=>{
const uposts = await contract.viewUserPost(currentAccount)
setUserposts(uposts)
}
contract && getUserPost()
},[contract])
return (
<>
<Navbar />
<SideBar />
<div className="userpost_container">
<h1>Your Posts</h1>
<div className="userpost">
{
userposts.length>0?userposts.map((post,index)=>{
return (
<div className="userpost_item" key={index}>
<div className="userpost_img">
<img src={`https://${post.imgUrl}`} alt="" />
</div>
<p>{post.postText}</p>
</div>)
}):<p className='nopost'>You have not posted yet</p>}
</div>
</div>
</>)}
export default UserPost
return (
<>
<div className="modal" id="modal">
<div className="modal_top">
<h1 className='modal_title'>Add Post</h1>
<div className="close-icon" onClick={handleModalClose}>
<i style={{cursor:"pointer", color:"red"}} className='bx bx-x-circle bx-sm'
  onClick={handleModalClose}></i>
</div>
</div>
<div className="modal_body">
<div className="modal-container">
<textarea className='post_text' name="post_text" id="" rows="5" onChange={handleChange}
  placeholder="Enter your post"></textarea>
<input type="file" onChange={getFiles} />
</div>
<button className='modal-qstn-submit' onClick={handleSubmit}>Submit</button>
</div>
<Message />
{enableProgress && <ProgressBar />}
</div>
<div id="overlay" onClick={handleModalClose}></div>
</>
)
</div></div>
<div className="btn">
Following
</div></div>
)})}
</div></div>
</>)}
7.1.8 Comment section
function CommentMainItem(props) {
const { state,filterText,setFilterFeedback } = useContext(SocialMediaContext);
const { contract } = state
const [commentData,setCommentData] = useState(null)
const [comment,setComment] = useState("");
const handleCommentChange=(e)=>{
setComment(e.target.value)
}
const handlePostComment = async ()=>{
const res = await filterText(comment,1)
if(res.data.msg==1)
{
contract && await contract.addComment(props.postId-1,comment);
window.location.reload()
}
else{
setFilterFeedback({
isEnable:true,
msg:"Violence content found in your comment..."
})}}
useEffect(()=>{
document.querySelector(`.input_comment${props.postId} input`).focus()
const getComments = async()=>{
const comments = await contract.viewPostComment(props.postId-1);
setCommentData(comments)
}
getComments()
// eslint-disable-next-line
},[contract])
return (
<>
<div className="add_comment">
<div className="user_img">
<img src={pic1} alt="" />
</div>
<div className={`input_comment input_comment${props.postId}`}>
<input type="text" placeholder="Add a comment..." value={comment}
onChange={handleCommentChange} />
<div className="send_comment" onClick={handlePostComment}>
<i className='bx bxs-send'></i>
</div></div></div>
<Message />
<div className="comments_container">
<div className="comments_part">
{commentData && commentData.map((ele,index) =>
{ return <CommentItem key={index} cmt={ele} />
})}
</div></div>
</>
)}
def make_test_predictions(df):
    df.comment_text = df.comment_text.apply(clean_text)
    X_test = df.comment_text
    X_test_transformed = td.transform(X_test)
    y_test_pred = mp.predict_proba(X_test_transformed)
    result = sum(y_test_pred[0])
    if result >= 1:
        return 1
    else:
        return 0
app = Flask(__name__)
CORS(app)
@app.route("/media/text", methods=['POST'])
def sanitize():
    val = request.get_json()
    val = val['comment']
    comment_text = val
    comment = {'comment_text': [comment_text]}
    comment = pd.DataFrame(comment)
    result = make_test_predictions(comment)
    if result == 0:
        return jsonify({"msg": 1})
    else:
        return jsonify({"msg": 0})
if __name__ == '__main__':
    app.run()
model = tf.keras.models.load_model('imageclassifier.h5')
ALLOWED_EXTENSION = set(['jpeg','jpg','png'])
def allowed_file(filename):
filext = filename.split(".")
if filext[1] not in ALLOWED_EXTENSION:
return 0
return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSION
app = Flask(__name__)
CORS(app)
@app.route('/media/image', methods=['POST'])
def upload_media():
if 'file' not in request.files:
return jsonify({'error':'media not provided'}),400
file = request.files['file']
if file.filename == '':
return jsonify({'error':'no file selected'}),400
if allowed_file(file.filename)==0:
return jsonify({'msg':2})
if file and allowed_file(file.filename):
filename = secure_filename(file.filename)
img_content = file.read()
img = imageio.imread(img_content)
resize = tf.image.resize(img, (256,256))
pred = model.predict(np.expand_dims(resize/255, 0))
if pred > 0.6:
x=1
else:
x=0
if x==0:
return jsonify({'msg':1})
else:
return jsonify({'msg':0})
if __name__ == '__main__':
app.run(debug=True, port=5000)
string name;
string username;
}
struct message
{ address sender;
uint256 timestamp;
string message;
}
struct userPicture {
string pictureCid;
}
address [] users;
Comment[] comments;
userPost[] public post;
mapping(address => friend[]) public _followers;
mapping(address => userAccount) public _account;
mapping(address => userPicture) public _picture;
mapping(bytes32 => message[]) allMessages;
function getChatcode(address pubkey1,address pubkey2) internal pure returns(bytes32){
if(pubkey1<pubkey2){
return keccak256(abi.encodePacked(pubkey1,pubkey2));
}
else return keccak256(abi.encodePacked(pubkey2,pubkey1));
}
function sendMessage(address friendkey, string calldata _msg) external
{ bytes32 chatcode = getChatcode(msg.sender,friendkey);
message memory newMsg = message(msg.sender,block.timestamp,_msg);
allMessages[chatcode].push(newMsg);
}
function readMessage(address friendkey) external view returns (message[] memory)
{ bytes32 chatcode = getChatcode(msg.sender,friendkey);
return allMessages[chatcode];
}
function updateProfile(string memory user_name,string memory uname,string memory
flag:1,
follow_count:0
});
}}
function getProfile(address wallet) public view returns(userAccount memory)
{ return _account[wallet];
}
function getAllProfiles() public view returns(userAccount[] memory)
{ userAccount [] memory accs=new userAccount[](user_count);
for(uint i=0; i<user_count; i++){
accs[i] = _account[users[i]];
}
return accs;
}
function follow(address wallet,address follower) public
{ require(wallet!=follower,"Cannot follow yourself");
require(_account[msg.sender].flag==1,"Update profile first");
for(uint i=0; i<_account[msg.sender].follow_count; i++){
if(_followers[wallet][i].pubkey==follower){
revert("Already followed");
}}
string memory fname = _account[follower].name;
string memory uname = _account[follower].username;
friend memory newFriend = friend(follower,fname,uname);
_followers[wallet].push(newFriend);
}
function getFollowers(address wallet) public view returns(friend[] memory)
{ return _followers[wallet];
}
function updatePicture(string memory newPCid,address wallet) public
{ require(wallet==msg.sender,"Access denied");
_picture[wallet] = userPicture({
pictureCid:newPCid
}); }
function viewUserPost(address wallet) public view returns (userPost[] memory) {
uint256 matchingPostCount = 0;
for (uint256 i = 0; i < post_count; i++)
{ if (post[i].userId == wallet) {
matchingPostCount++;
}}
userPost [] memory psts=new userPost[](matchingPostCount);
uint256 currentIndex = 0;
for(uint i=0; i<post_count; i++){
if(post[i].userId==wallet)
{ psts[currentIndex]=post[i];
currentIndex++;
} }
return psts;
}}
expect(prof2.flag).to.equal(1);
})
it("Allow users to view their posts",async function()
{ const [owner] = await ethers.getSigners()
const socialmedia = await ethers.getContractFactory("SocialMedia")
const hardhatSocialmedia = await socialmedia.deploy()
const postContent = "Hello how are you"
await hardhatSocialmedia.addPost(owner.address,postContent,"img hash");
const post = await hardhatSocialmedia.viewPost();
console.log("\npost saved to blockchain : ",post[0],"\n");
const userpost = await hardhatSocialmedia.viewUserPost(owner.address);
// Verify the post details
expect(userpost[0].postText).to.equal(post[0].postText);
}) })
Chapter 8
RESULT AND ANALYSIS
8.2 Analysis
Blockchain technology has the potential to transform the way social media platforms
operate by providing new levels of transparency, security, and decentralization. By leveraging
blockchain, this platform provides a decentralized and secure environment that allows users to
control their data and interact with each other in a secure and transparent way. Using IPFS
alongside the blockchain works around the chain's limited storage, reducing cost and
improving scalability. Polygon was used to scale the blockchain application. Polygon is a
multichain solution that offers better scalability; it operates as a sidechain to Ethereum, that is,
a separate blockchain that remains connected to the main Ethereum network. One of its key
advantages is that it can significantly reduce transaction fees and increase transaction speed.
This is achieved by offloading much of the processing from the Ethereum mainnet to the
Polygon network, which runs on its own infrastructure; this lightens the load on Ethereum and
allows transactions to be processed faster and more efficiently. The Ethereum mainnet has
high transaction costs and slow processing: it handles only about 13-15 Transactions Per
Second (TPS), whereas Polygon can process up to 65,000 TPS. The average transaction fee on
Ethereum is about $6.33, while on Polygon it falls to less than $0.01.
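A quick back-of-the-envelope calculation, using only the figures quoted above (not live network data), puts the gap in perspective:

```javascript
// Compare the throughput and fee figures cited in this chapter.
// These are the report's own estimates, not measured values.
const ethereumTps = 15;        // upper end of Ethereum mainnet's 13-15 TPS
const polygonTps = 65000;      // Polygon's claimed peak throughput
const ethereumFeeUsd = 6.33;   // average Ethereum transaction fee cited above

const throughputGain = polygonTps / ethereumTps;
const feeFor1000Posts = 1000 * ethereumFeeUsd;

console.log(`Throughput gain: ~${Math.round(throughputGain)}x`);
console.log(`Cost of 1000 posts on Ethereum: $${feeFor1000Posts.toFixed(2)}`);
```

On these numbers, a thousand on-chain posts would cost thousands of dollars in Ethereum mainnet fees alone, which is why the sidechain was chosen for this platform.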
Chapter 9
CONCLUSION
The project focuses on the development of a social media platform that harnesses the
power of blockchain technology.
The blockchain we chose for our platform is Ethereum. The smart contracts for the
Ethereum blockchain were developed in Solidity, a programming language for smart contracts.
A smart contract allowed us to implement the functionality to store and retrieve user data
while ensuring transparency, immutability, and security of that data. The smart contract was
first tested on a local blockchain using Ganache; for compiling and testing the contracts, a tool
called Hardhat was used. Once fully tested, the smart contract was deployed on the Ethereum
blockchain. The integration of blockchain technology provided secure storage of and access to
user data while facilitating transparent and auditable transactions.
Finally, the blockchain and ML functionalities were integrated into the frontend to unlock
the full potential of the platform, which filters harmful and violent content posted on the
platform.
Chapter 10
FUTURE ENHANCEMENT
The blockchain-based social media platform provides users with a secure, reliable,
transparent, and traceable environment. Unlike traditional social media platforms, this platform
is designed to be user-centric: users have ownership and control over their data. A promising
direction for future work is adopting Layer-2 rollup solutions to further improve scalability.
One such rollup solution is the zk-rollup, also known as the zero-knowledge rollup. These
rollups utilize zero-knowledge proofs to bundle multiple transactions off-chain and present a
concise proof to the main blockchain. These cryptographic proofs guarantee the validity of all
transactions within the rollup, ensuring security and trust without disclosing specific
transaction details.
Another rollup approach is the optimistic rollup, which employs optimistic state execution:
transactions are processed off-chain with minimal validation and are assumed valid unless
proven otherwise through fraud proofs. This approach enables faster transaction processing
and reduces the computational load on the main blockchain.
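The fraud-proof mechanism described above can be sketched as a toy model in JavaScript. The names `postBatch` and `challenge` are hypothetical, and state is a plain balance map rather than a real rollup state root:

```javascript
// Apply one transfer to a balance map (the "state transition").
function applyTx(balances, tx) {
  const next = { ...balances };
  next[tx.from] -= tx.value;
  next[tx.to] = (next[tx.to] || 0) + tx.value;
  return next;
}

// Optimistic step: the sequencer posts claimed post-states unchecked.
function postBatch(initial, txs, claimedStates) {
  return { initial, txs, claimedStates };
}

// Fraud proof: re-execute a single step and compare with the claim.
function challenge(batch, step) {
  const before = step === 0 ? batch.initial : batch.claimedStates[step - 1];
  const recomputed = applyTx(before, batch.txs[step]);
  const claim = batch.claimedStates[step];
  return JSON.stringify(recomputed) !== JSON.stringify(claim); // true = fraud
}

const initial = { A: 10, B: 0 };
const txs = [{ from: 'A', to: 'B', value: 4 }];
const honest = postBatch(initial, txs, [{ A: 6, B: 4 }]);
const fraudulent = postBatch(initial, txs, [{ A: 9, B: 4 }]); // wrong debit
console.log(challenge(honest, 0));     // false: claim checks out
console.log(challenge(fraudulent, 0)); // true: fraud proven
```

The point of the sketch is that a challenger only re-executes the one disputed step, not the whole batch, which keeps verification on the main chain cheap.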
Looking ahead, the platform has the potential to introduce a token economy that
incentivizes users for sharing content and making contributions. Such a token-based incentive
system can foster user engagement and participation within the platform. Revenue-generation
strategies such as advertising, premium subscriptions, sponsored content, or in-app purchases
could also be incorporated into the platform.
Chapter 11
REFERENCES
[1] L. Jiang and X. Zhang, "BCOSN: A Blockchain-Based Decentralized Online Social
Network," in IEEE Transactions on Computational Social Systems, vol. 6, no. 6, pp.
1454-1466, Dec. 2019, doi: 10.1109/TCSS.2019.2941650.
[2] F. Yang, Y. Wang, C. Fu, C. Hu, and A. Alrawais, "An Efficient Blockchain-Based
Bidirectional Friends Matching Scheme in Social Networks," in IEEE Access, vol. 8, pp.
150902-150913, 2020, doi: 10.1109/ACCESS.2020.3016986.
[3] J. Kim and A. Yun, "Secure Fully Homomorphic Authenticated Encryption," in IEEE
Access, vol. 9, pp. 107279-107297, 2021, doi: 10.1109/ACCESS.2021.310084.
[4] S.-C. Cha, T.-Y. Hsu, Y. Xiang, and K.-H. Yeh, "Privacy Enhancing Technologies in the
Internet of Things: Perspectives and Challenges," in IEEE Internet of Things Journal, vol. 6,
no. 2, pp. 2159-2187, Apr. 2019.
[5] B. Jeon, S. M. Ferdous, M. R. Rahman, and A. Walid, "Privacy-Preserving Decentralized
Aggregation for Federated Learning," arXiv preprint arXiv:2012.071836, Dec. 2020.