DevSecOps Project
Team: Supervisor:
The Presentation Plan
1. CI Pipeline
2. Local deployment with Kubernetes: deploy applications locally, using Kubernetes for container orchestration.
3. Extending the CI pipeline with GitOps: implement GitOps for automated and declarative infrastructure updates.
4. Optimizing and securing the VCS and the CI pipeline: enhance VCS control and CI pipeline performance with secure versioning.
5. Observability and Monitoring: integrate tools to monitor and analyze system performance and reliability.
Section 1: CI Pipeline
● Faster
● Safer
● Simplification & Standardization
● Visualization of the process
Gitlab
Next, we will quickly run through the content of our Dockerfile and make sure everything is
perfectly lined up.
Remember our friend over here? Well, that's the .gitlab-ci.yml file.
Writing the password using -p directly in the command ( as displayed in the first line) exposes it in plain text,
which is a security risk. This method makes the password visible in the command history and process list,
potentially compromising the credentials. A more secure approach is to pass the password via environment
variables and use --password-stdin, which avoids exposing sensitive information in plain text while still allowing
the login process to authenticate securely.
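A minimal before/after sketch (the variable names are illustrative; in GitLab CI they would be defined as masked CI/CD variables):

    # insecure: the password is visible in the shell history and process list
    docker login -u "$DOCKER_USER" -p "$DOCKER_PASSWORD"

    # safer: the password is piped through stdin and never appears in plain text
    echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USER" --password-stdin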
The final stage in our pipeline
The docker-deploy job runs in the deploy stage. It executes a Docker command to start a container in
detached mode (-d) using the omarelazhar/microservices-produit image. The container's port 9001 is
mapped to port 9001 on the host (-p 9001:9001), and the container is named microservices-produit.
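Based on that description, the job in .gitlab-ci.yml looks roughly like this (the exact script in the repository may differ slightly):

    docker-deploy:
      stage: deploy
      script:
        # run the image in detached mode, expose port 9001 and name the container
        - docker run -d -p 9001:9001 --name microservices-produit omarelazhar/microservices-produit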
Next, we will test the pipeline to ensure that the deployment was successful and the application is
accessible as expected. This will help verify that the entire pipeline—from build to deployment—works
correctly.
Checking the results
Is that it?
What’s next?
“Container Orchestration”
Kubernetes
Obviously, no containers are running yet.
Prerequisites
Since we are working as a team and each of us has specific tasks to complete,
everyone has installed Minikube on their own machines and set up their local
Kubernetes clusters. This ensures that we can work independently while
staying aligned. The steps we followed for the setup are explained in detail
later.
Creating a local cluster
In Kubernetes, a namespace is a logical partition within a cluster that allows for the
isolation and organization of resources. It is particularly useful when multiple teams
or projects share the same cluster, as it prevents resource conflicts by grouping
related resources (such as pods, services, and deployments) under a unique identifier.
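For example, a namespace can be created with a single command (the name below is illustrative):

    kubectl create namespace microservices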
These files are necessary because they allow us to declaratively manage and
maintain our application’s deployment and networking within the Kubernetes
cluster.
deployment.yaml
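A minimal sketch of what these manifests could contain, reusing the image and port from the CI section (labels and the replica count are illustrative, not the project's exact files):

    # deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: microservices-produit
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: microservices-produit
      template:
        metadata:
          labels:
            app: microservices-produit
        spec:
          containers:
            - name: microservices-produit
              image: omarelazhar/microservices-produit
              ports:
                - containerPort: 9001

    # service.yaml (exposes the pods inside the cluster)
    apiVersion: v1
    kind: Service
    metadata:
      name: microservices-produit
    spec:
      selector:
        app: microservices-produit
      ports:
        - port: 9001
          targetPort: 9001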
Now that the files have been created, all that’s left is to apply them in our local
cluster using the kubectl apply -f filename.yaml command:
Verifying the changes
Now we verify the creation of the Deployment, the ReplicaSet, and the Service using the kubectl command:
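For example:

    kubectl get deployments
    kubectl get replicasets
    kubectl get services
    # or everything in the namespace at once
    kubectl get all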
Section 3: Extending the CI pipeline with GitOps
GitOps is not an all-in-one product, plugin, or platform. There is no universal answer here; the best way for a team
to practice GitOps depends on its specific needs and goals. However, GitOps requires three basic components:
Infrastructure as Code (IaC):
GitOps uses a Git repository as a single source of truth for defining infrastructure.
Merge requests:
GitOps uses merge requests (or pull requests) as the change mechanism for all infrastructure updates.
CI/CD:
GitOps automates infrastructure updates using a Git workflow with continuous integration and delivery (CI/CD).
When new code is merged, the CI/CD pipeline applies that change to the environment.
Section 3: Extending the CI pipeline with GitOps
What does a GitOps workflow look like?
1- A team member opens a merge request proposing a change (to the application or the infrastructure).
2- The team reviews and merges the change into the main branch.
3- Once merged, the CI/CD pipeline automatically applies the change to the environment.
There are many tools available, each with its strengths and specific use cases, but after evaluating our project’s
needs and objectives, ArgoCD is the best choice.
ArgoCD:
Automated Deployments:
When an update is pushed to Git, ArgoCD can automatically
apply these changes to the Kubernetes cluster in real time.
Declarative Management:
All infrastructure and applications are managed declaratively,
ensuring consistency across environments.
Section 3: Extending the CI pipeline with GitOps
Creating a dedicated namespace for ArgoCD ensures clean, secure, and organized management of its resources
within the Kubernetes cluster. It is a crucial step to follow best practices for Kubernetes deployment.
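The namespace itself is created with a single command:

    kubectl create namespace argocd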
Section 3: Extending the CI pipeline with GitOps
ArgoCD Installation:
Now, we will apply the Argo CD installation manifest to deploy Argo CD in the 'argocd' namespace. This manifest
installs all the necessary components of Argo CD, such as controllers and services.
The ArgoCD installation manifest is a configuration file that contains all the necessary instructions to deploy
ArgoCD and its components in a Kubernetes cluster. It ensures that everything is properly configured and aligned
with the requirements, simplifying the installation and management of ArgoCD.
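The manifest is typically applied straight from the official Argo CD repository (version pinning is omitted here for brevity):

    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml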
Section 3: Extending the CI pipeline with GitOps
ArgoCD Installation:
Accessing the Argo CD Web Interface:
So in this stage, we will forward the port of the Argo CD service to our local machine to access its web interface.
Decoding the password: the password is encoded in Base64. To display it in plain text, we need to decode it using the
following command:
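The two commands look like this (8080 is an arbitrary local port):

    # forward the Argo CD UI/API service to https://localhost:8080
    kubectl port-forward svc/argocd-server -n argocd 8080:443

    # read the initial admin password and decode it from Base64
    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d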
Section 3: Extending the CI pipeline with GitOps
Access and Configuration:
Now, we can start configuring Argo CD to sync our Kubernetes application with the Git repository and
manage the deployment of our application via GitOps.
To make ArgoCD work, we need to create a file (application.yaml) on which it will base its
operation. There are two methods to create the application manifest application.yaml. The first is to go
directly to the Git repository and create the application.yaml file there. The second is to use Git via the
CLI: clone the repository, create the file, and push it to the main branch.
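A sketch of what application.yaml could contain (the repository URL, path, and application name are placeholders; the destination namespace must match where the Deployment and Service live):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: microservices-produit
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://gitlab.com/<group>/<project>.git
        targetRevision: main
        path: k8s                      # folder holding deployment.yaml and service.yaml
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:                     # enables the automatic synchronization mentioned below
          prune: true
          selfHeal: true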
Section 3: Extending the CI pipeline with GitOps
Access and Configuration:
After creating the application.yaml file, which is used to configure our application in Argo CD, we can apply the
application.yaml file to our Kubernetes cluster through Argo CD.
We can verify if our ArgoCD application is functioning correctly using the command-line interface (CLI).
We wait for ArgoCD to automatically synchronize the changes. If automatic synchronization is configured, the change will be
applied automatically.
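For example (the application name follows the sketch above):

    kubectl apply -f application.yaml -n argocd
    # check synchronization and health status from the CLI
    argocd app get microservices-produit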
Expected Result
Section 4: Optimizing and securing the VCS
and the CI pipeline
This phase ensures an efficient and secure
development lifecycle by optimizing CI/CD pipelines with custom runners, caching, reusable jobs, and
controlled job execution. It strengthens security through branch protection, commit signing, and
automated testing (SAST, SCA, DAST), while implementing granular permissions and vulnerability
thresholds to prevent critical risks. Automated notifications and cleanup further streamline operations
and resource management.
Setup a custom Gitlab Runner
Using a custom GitLab runner instead of the shared ones is like swapping a ride on a public bus for a
private jet. On the public bus (a.k.a., the shared runner), you're just another passenger among many,
waiting your turn and hoping there's no traffic. But with your private jet (your custom runner), you
soar above the delays and inefficiencies, cruising in an environment that's tailored just for your
project's needs, with no unnecessary stops slowing you down.
Setup a custom Gitlab Runner
For this task, we chose the "Regular" option with 4 GB / 2 CPUs, featuring an SSD disk
at $24/month. This choice provides us with robust performance and enhanced storage
capabilities, ideal for handling our project's more demanding processing and speed
requirements efficiently.
Setup a custom Gitlab Runner
In this section, we run the GitLab Runner with the command gitlab-runner run. The runner then waits for
GitLab to send it a job to execute.
Here, the runner has just received a job from GitLab and started executing it.
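For reference, registering and starting a runner looks roughly like this (the executor and default image are assumptions; the registration token comes from the project's CI/CD settings):

    # one-time registration against the GitLab instance
    gitlab-runner register --non-interactive \
      --url https://gitlab.com \
      --registration-token <TOKEN> \
      --executor docker \
      --docker-image docker:latest

    # start the runner; it then polls GitLab and waits for jobs
    gitlab-runner run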
Create rules to control pipeline
In GitLab CI, rules are essential because they dictate when and how specific CI jobs should run,
depending on the changes made in a project. This helps to optimize the usage of resources and ensures
that the pipeline executes relevant tasks only when needed. In this example, the jobs only run when the
pipeline source is a merge request or a push.
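For illustration, such a rule block could look like this (using GitLab's predefined CI_PIPELINE_SOURCE variable):

    rules:
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      - if: '$CI_PIPELINE_SOURCE == "push"'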
Using cache with dependencies
In GitLab CI, caching is used to store specific files between jobs to reduce the build time. For this
example, the cache configuration caches the Maven dependencies located in the ~/.m2/repository
directory. This approach prevents Maven from re-downloading the same dependencies for every build.
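A sketch of such a cache configuration (in GitLab CI the cached paths normally live inside the project directory, so the local Maven repository is often redirected there; a shell runner could cache ~/.m2/repository directly):

    variables:
      MAVEN_OPTS: "-Dmaven.repo.local=$CI_PROJECT_DIR/.m2/repository"
    cache:
      key: maven-dependencies
      paths:
        - .m2/repository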
Generic Jobs for a better reuse
Generic jobs in GitLab CI are useful because they provide a way to define common tasks that can be
reused across multiple projects or stages within a pipeline. This approach helps in maintaining
consistency, reducing duplication, and simplifying the maintenance of your CI configurations. In our case
we divided the jobs in gitlab-ci.yml into 6 files. Each file is a stage of the CI pipeline.
Add the following to .gitlab-ci.yml to import the generic jobs from the files we created for each stage of the pipeline.
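As an illustration, the import could look like this (only pre-build.yml is named explicitly later in this document; the other file names are placeholders for the remaining stages):

    include:
      - local: 'pre-build.yml'
      - local: 'build.yml'
      - local: 'test.yml'
      - local: 'deploy.yml'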
SECURITY SCANS
Software Composition Analysis
Software Composition Analysis acts like a vigilant code inspector for your software projects,
meticulously examining each open-source component and library for potential
vulnerabilities. It scours through your project's dependencies, cross-referencing each one
against databases of known security vulnerabilities.
snyk Retire.JS
Software Composition Analysis
Software Composition Analysis (SCA) - Snyk
We will use the pre-build.yml file to implement an SCA security test. To integrate Snyk
testing, the following modifications will be made:
severity: The severity level of the issue (low, medium, high, or critical).
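For illustration, a Snyk job in pre-build.yml could look roughly like this (the Node image, job name, and SNYK_TOKEN variable are assumptions; the token would be stored as a masked CI/CD variable):

    snyk-sca:
      stage: pre-build
      image: node:18
      script:
        - npm install -g snyk
        - snyk auth "$SNYK_TOKEN"
        # fail-soft scan; findings at or above the chosen severity are written to the report
        - snyk test --severity-threshold=high --json > snyk-report.json || true
      artifacts:
        paths:
          - snyk-report.json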
Software Composition Analysis
Software Composition Analysis (SCA) - RetireJS
Using two SCA tools enhances vulnerability detection, reduces false positives/negatives, and provides more
comprehensive security coverage.
So, we will use the same pre-build.yml file to implement the second security test, which is Retire.js. In the
pre-build.yml, we will add a job to run Retire.js before the build process starts. The job installs Retire.js
globally, scans the project’s dependencies, and generates a JSON report with a list of high-severity
vulnerabilities. Key steps for this are:
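A sketch of such a job (the image and job name are assumptions):

    retirejs-sca:
      stage: pre-build
      image: node:18
      script:
        - npm install -g retire
        # scan dependencies and keep only high-severity findings in a JSON report
        - retire --severity high --outputformat json --outputpath retire-report.json || true
      artifacts:
        paths:
          - retire-report.json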
RetireJS report
horusec semgrep
Static Application Security Testing
Horusec Security Test (SAST)
In the pre-build.yml file, we will add a job to run the Horusec SAST test, performing a static analysis on our
code before the build stage.
This job uses Docker to run the Horusec CLI tool inside a container. The scan checks our project for security
vulnerabilities. The results are output as a JSON file (result.json), which can be used for further review or
integration into other processes. The docker:dind service ensures that Docker commands can be run inside the
container, allowing Horusec to access and scan the project files.
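A sketch of the setup described above, assuming the horuszup/horusec-cli image (job and stage names are illustrative):

    horusec-sast:
      stage: pre-build
      image: docker:latest
      services:
        - docker:dind
      script:
        # run the Horusec CLI in a container and write the findings to result.json
        - docker run --rm -v "$PWD":/src horuszup/horusec-cli:latest horusec start -p /src -o json -O /src/result.json
      artifacts:
        paths:
          - result.json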
Static Application Security Testing
Horusec Security Test (SAST)
A Horusec Security Test (SAST) report identifies vulnerabilities within source code by performing static analysis. This process evaluates the code
without executing it, providing insights into potential security flaws, their severity, and remediation suggestions. By addressing these issues early,
vulnerabilities can be resolved before deployment.
Semgrep report
Each vulnerability is tied to a file and line number and the report
categorizes vulnerabilities by severity (low, medium, high, critical)
Dynamic Application Security
Testing
Dynamic Application Security Testing (DAST) is akin to a stress test for your software, simulating
real-world attacks on your applications while they are running to identify vulnerabilities. DAST
tools interact with an application from the outside, probing it as an attacker would—by making
requests
Nikto Nmap
Dynamic Application Security Testing
Nmap Security Test (DAST)
Nmap is a network scanning tool that performs port scanning, service version detection, and operating system
fingerprinting. In this job, Nmap is used to test the exposed services of a target application dynamically.
This job runs in the test stage of the pipeline to perform dynamic security checks on the running application.
Results from the scan are saved for 7 days, ensuring they are
accessible for further analysis.
The rules section determines when a job should run in the pipeline. It is configured to execute only for specific triggers,
such as when a merge request is created or code is pushed to the repository. The when: always directive ensures
that the job will run every time these specified triggers occur, ensuring consistent execution for those events.
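A hedged sketch of the job described above (the target host variable and scanned port are illustrative):

    nmap-dast:
      stage: test
      image: alpine:latest
      script:
        - apk add --no-cache nmap
        # service/version detection against the application's exposed port
        - nmap -sV -p 9001 "$TARGET_HOST" -oN nmap-report.txt
      artifacts:
        paths:
          - nmap-report.txt
        expire_in: 7 days
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
          when: always
        - if: '$CI_PIPELINE_SOURCE == "push"'
          when: always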
Dynamic Application Security Testing
Nmap Security Test (DAST)
The report generated by an Nmap Security Test provides a comprehensive overview of the security posture of a target
system. Nmap, a popular open-source tool, conducts various tests to detect vulnerabilities and assess the security
risks associated with a network or application in real-time.
Nmap report
Artifacts: Scan results are stored for 7 days, ensuring they remain accessible for review or audits.
Rules: The job is configured to run only for merge request or push events, and when: always ensures it runs consistently for these triggers.
Dynamic Application Security Testing
Nikto Security Test (DAST)
A report generated by a Nikto Security Test provides an in-depth analysis of web application vulnerabilities. Nikto is a widely used
open-source tool that scans web servers for security flaws, misconfigurations, and outdated software that could be exploited by
attackers.
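A rough sketch of what such a job could look like (the installation method and the TARGET_HOST variable are assumptions; the actual job may use a prebuilt Nikto image instead):

    nikto-dast:
      stage: test
      image: perl:latest            # Nikto is a Perl tool
      script:
        - git clone --depth 1 https://github.com/sullo/nikto.git
        - perl nikto/program/nikto.pl -h "http://$TARGET_HOST:9001" -o nikto-report.txt
      artifacts:
        paths:
          - nikto-report.txt
        expire_in: 7 days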
Nikto report
The report includes information on identified vulnerabilities, security misconfigurations, and issues related to web server
settings or outdated software versions.
This report is crucial for ensuring that containerized applications are secure
before deployment, minimizing the risk of security breaches in production
environments.
Vulnerability management
DefectDojo is an open-source application vulnerability correlation and security
orchestration tool designed to streamline the security testing process by
automating vulnerability management and facilitating effective security program
management. It was developed with the intention of simplifying the complex task of
tracking and managing the multitude of security vulnerabilities
defectdojo
Vulnerability management
We use this command in order to get the admin password for the dashboard.
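If DefectDojo was started with its docker-compose setup, that command is typically:

    docker compose logs initializer | grep "Admin password:"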
Vulnerability management
DefectDojo runs on port 8080; you can then connect to the panel using the username admin and the password
found with the command above. And that's it, you can now check the dashboard.
We have defined rules in the docker-deploy job to control its execution based on specific
conditions. The job runs only when the committed branch is main, ensuring deployments are
restricted to the primary branch. Additionally, the rule specifies when: manual, meaning the job
must be triggered manually by a user, preventing automatic execution. These rules provide precise
control over the deployment process.
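A sketch of those rules on the docker-deploy job, matching the description above:

    docker-deploy:
      stage: deploy
      rules:
        - if: '$CI_COMMIT_BRANCH == "main"'
          when: manual    # must be triggered by hand from the pipeline view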
We can also add protected: true to ensure only Maintainers can deploy, but this feature is available
only in GitLab Ultimate.
Push Rules: Implementation of a commit signature policy for protected branches.
GitLab allows creating Merge Requests in Draft mode (previously called “WIP” – Work In Progress), signaling that
the MR is "work in progress" and cannot be merged until the Draft status is removed. This feature prevents
accidental merges or deployments and communicates clearly to reviewers that the MR is not ready yet.
Automation of pipeline execution only after
multiple approvals.
We have set up approval rules for our project to ensure quality and control over merges. While
no approvals are required for all branches, we’ve enforced a rule requiring 3 approvals for
changes to the main branch under the "Approbations_Multiples" policy. This ensures that critical
changes to the primary branch are reviewed and agreed upon by multiple team members before
merging.
Pre-Commit Hooks: Adding pre-commit hooks to enforce checks before each commit.
Among the possible solutions in this regard, we can create a .pre-commit-ignore file, or simply
ignore a file if we are certain that there are no issues with it.
We can also create a whitelist for certain types of files that cause false alerts.
Additionally, we can perform lightweight tests for commits but run more intensive tests for
pushes.
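As an illustration only (the hooks and versions are assumptions, not the project's actual configuration), a .pre-commit-config.yaml could look like this:

    repos:
      - repo: https://github.com/pre-commit/pre-commit-hooks
        rev: v4.5.0
        hooks:
          - id: trailing-whitespace
          - id: check-yaml
      - repo: https://github.com/gitleaks/gitleaks
        rev: v8.18.0
        hooks:
          - id: gitleaks        # blocks commits that contain hard-coded secrets

    # install the hooks locally so they run on every commit
    pre-commit install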
Notifications and alerts: automating GitLab email notifications when critical vulnerabilities are detected.
First, we create two bash scripts: one to check if
the SAST scan report identifies any critical
vulnerabilities, and another to verify the results
for critical findings and send an email to the
concerned individual, in this case,
“[email protected]”.
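A rough sketch of the idea behind those scripts (the JSON path depends on the scanner's report format, and sending mail requires a configured mail command; both are assumptions here):

    #!/bin/bash
    # count critical findings in the SAST report (Horusec-style JSON assumed)
    CRITICALS=$(jq '[.analysisVulnerabilities[]? | select(.vulnerabilities.severity == "CRITICAL")] | length' result.json)

    if [ "$CRITICALS" -gt 0 ]; then
      echo "Pipeline $CI_PIPELINE_ID found $CRITICALS critical vulnerabilities." \
        | mail -s "Critical vulnerabilities detected" "[email protected]"
    fi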
These are just a few of the many reasons that make monitoring essential.
To install Prometheus, we first need to add its Helm repository:
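For example (the kube-prometheus-stack chart is one common choice, and it also bundles the Grafana and Node Exporter components used below):

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    helm install prometheus prometheus-community/kube-prometheus-stack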
Once we enter the admin dashboard, we can check the data sources:
We can see that Prometheus is listed among Grafana's data sources. Now let's explore it!
Inside the Grafana Node Exporter dashboard: