
DevSecOps Project

Team:
- EL AZHAR Omar
- EL HOUARI Badr
- BAHYA Ahmed
- HILALI Mohammed
- LAHLAL Mohamed Amine

Supervisor:
- ALLAKI Driss
Overall Description of the Project

This project implements a DevSecOps PoC for KATA to automate the build, packaging, and deployment of the Mcommerce app. Focused on the Product microservice, it aims to improve delivery speed, reliability, and security.
The Presentation Plan

1. Creation and setup of a CI pipeline: set up automated builds, tests, and integration workflows.
2. Local deployment: deploy applications locally using Kubernetes for container orchestration.
3. Extension of the CI pipeline with GitOps: implement GitOps for declarative infrastructure with automated updates.
4. Optimization and securing of the VCS and the CI pipeline: enhance VCS control and CI pipeline performance and security.
5. Observability and Monitoring: integrate tools to monitor and analyze system performance and reliability.
Section 1: CI Pipeline

A CI pipeline automates and streamlines code integration, testing, and deployment processes.

We benefit from using pipelines:

● Faster
● Safer
● Simplification & standardization
● Visualization of the process
GitLab

GitLab is a complete DevSecOps platform that supports version control, CI/CD pipelines, and security integration in a single tool. It enables teams to automate workflows, improve collaboration, and enhance software delivery speed and security. For this CI pipeline, GitLab will manage code repositories, automate Maven builds, Docker containerization, and deployment processes while integrating security checks to ensure reliable and secure delivery.
Prerequisites

The first thing we need to do is to push our source code to a GitLab repository, which we will name 'microservices-produit'.

Hint: the .gitlab-ci.yml shown in the repository is the file that contains the configuration of the CI pipeline.
Prerequisites

Next, we will quickly run through the content of our Dockerfile and make sure everything is perfectly lined up:

- Base image: use an OpenJDK slim image.
- Create group/user: add a "spring" group and user.
- Set user: switch to the "spring" user.
- Define build arg: specify the JAR file path.
- Copy JAR: add the application JAR to the image.
- Expose port: open port 9001.
- Run command: start the app with java -jar.
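A minimal sketch of what such a Dockerfile can look like (the exact base-image tag and JAR path are assumptions, not taken from the slides):

# Base image: OpenJDK slim (tag is an assumption)
FROM openjdk:17-jdk-slim
# Create a non-root "spring" group and user, then switch to it
RUN addgroup --system spring && adduser --system --ingroup spring spring
USER spring:spring
# Build argument pointing at the Maven-built JAR (path assumed)
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
# The application listens on port 9001
EXPOSE 9001
ENTRYPOINT ["java", "-jar", "/app.jar"]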
The actual pipeline configuration

Now that the environment is set up nicely, let's get PIPELINING!

Remember our friend over here? Well, that's the .gitlab-ci.yml file.

Our pipeline will be divided into three stages:

- Build
- Package
- Deploy

But how does this file represent the pipeline itself, you ask? Time to get to the technical part (the fun part)!
.gitlab-ci.yml

Let's explain what each line in this file represents.

This part of the file specifies the Docker image (docker:latest) to be used for the CI job and includes the docker:dind service (Docker-in-Docker). This allows the CI pipeline to run Docker commands within the job, enabling tasks like building and running Docker containers.

This section defines the stages of the CI/CD pipeline: build, package, and deploy. Each stage represents a step in the process, with jobs executed in sequence to build the code, package it, and then deploy it.

This configuration defines a job named maven-build in the build stage. It uses the maven:3.9.4-eclipse-temurin-17 Docker image to run Maven commands. The job executes mvn clean install -U -B to clean and build the project. The resulting JAR files from the target directory are saved as artifacts for use in later stages.
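Assembled from the three fragments just described, the top of the .gitlab-ci.yml could look like this sketch (the exact artifact path is an assumption):

image: docker:latest

services:
  - docker:dind          # Docker-in-Docker, so jobs can run docker commands

stages:
  - build
  - package
  - deploy

maven-build:
  stage: build
  image: maven:3.9.4-eclipse-temurin-17
  script:
    - mvn clean install -U -B
  artifacts:
    paths:
      - target/*.jar     # JARs kept for the later stages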
The Docker-build phase

This is the docker-build phase, where we have:

- docker-build job: runs in the package stage.
- before_script: logs into Docker using the username and password stored in the GitLab CI/CD secret variables.
- script:
  ● Builds the Docker image with the tag omarelazhar/microservices-produit.
  ● Pushes the built Docker image to the Docker registry (omarelazhar/microservices-produit).
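A sketch of the job as described (the names of the secret variables are assumptions; the next slide improves the login step):

docker-build:
  stage: package
  before_script:
    # credentials come from GitLab CI/CD secret variables (names assumed)
    - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
  script:
    - docker build -t omarelazhar/microservices-produit .
    - docker push omarelazhar/microservices-produit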
A common good practice for using the Docker password to log in

Writing the password using -p directly in the command (as displayed in the first line) exposes it in plain text, which is a security risk. This method makes the password visible in the command history and process list, potentially compromising the credentials. A more secure approach is to pass the password via an environment variable and use --password-stdin, which avoids exposing sensitive information in plain text while still allowing the login process to authenticate securely.
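Side by side, the two variants look like this:

# insecure: password visible in shell history and the process list
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"

# safer: password read from stdin via an environment variable
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin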
The final stage in our pipeline

The docker-deploy job runs in the deploy stage. It executes a Docker command to start a container in
detached mode (-d) using the omarelazhar/microservices-produit image. The container's port 9001 is
mapped to port 9001 on the host (-p 9001:9001), and the container is named microservices-produit.
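A sketch of that final job:

docker-deploy:
  stage: deploy
  script:
    # detached container, host port 9001 mapped to container port 9001
    - docker run -d -p 9001:9001 --name microservices-produit omarelazhar/microservices-produit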

Next, we will test the pipeline to ensure that the deployment was successful and the application is
accessible as expected. This will help verify that the entire pipeline—from build to deployment—works
correctly.
Checking the results

Is that it?

What's next?

With our CI pipeline successfully set up and currently


running on GitLab's Shared Runners, our next step is to
integrate a Kubernetes local cluster into the workflow.
This will allow us to deploy and test our applications in an
isolated, containerized environment, providing greater
control and flexibility as we transition to using a local
runner for the pipeline.
Section 2: Local Deployment

Local deployment facilitates running and testing applications in a controlled local environment.

We benefit from using local deployment:

● Faster feedback loops
● Easier debugging
● No reliance on external infrastructure
● Improved development workflow

This step is also called: "Container Orchestration"
Kubernetes

Kubernetes is a leading open-source platform for automating the deployment, scaling, and management of containerized applications. It ensures high availability, efficient resource use, and offers features like automated rollouts, self-healing, and load balancing, making it essential for managing cloud-native applications at scale.
Prerequisites

Running Docker Desktop is an essential step for using Kubernetes, as it provides the necessary container runtime and environment to manage and run containerized applications. Docker Desktop simplifies the setup by including built-in Kubernetes support, ensuring seamless integration and compatibility with local clusters.

Obviously, no containers are running yet.
Prerequisites

To use Kubernetes and create a local cluster, we rely on Minikube. Minikube is a lightweight tool that sets up a single-node Kubernetes cluster on a local machine, making it ideal for development and testing purposes.

Since we are working as a team and each of us has specific tasks to complete, everyone has installed Minikube on their own machines and set up their local Kubernetes clusters. This ensures that we can work independently while staying aligned. The steps we followed for the setup are explained in detail later.
Creating a local cluster

We run the minikube start command to create our cluster:
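For reference, the command is simply:

minikube start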

We can visualize our container running on Docker Desktop:
Creating a namespace

In Kubernetes, a namespace is a logical partition within a cluster that allows for the isolation and organization of resources. It is particularly useful when multiple teams or projects share the same cluster, as it prevents resource conflicts by grouping related resources (such as pods, services, and deployments) under a unique identifier.

We create a namespace called product-local-use:
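kubectl create namespace product-local-use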
Creating a namespace

To verify the creation of our namespace, we have used kubectl, the command-line tool for interacting with Kubernetes clusters. Kubectl allows us to deploy, manage, and troubleshoot applications running in the cluster. It has been an essential tool for effectively working with Kubernetes throughout our project.
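kubectl get namespaces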
YAML files

We have created two important configuration files for managing our Kubernetes resources: deployment.yaml and service.yaml.

● deployment.yaml defines how our application should be deployed on the cluster, including the number of replicas, container images, and any necessary environment variables. It ensures that our application runs consistently and is easily scalable.
● service.yaml defines how the application will be exposed to other services or the outside world. It specifies the type of service (ClusterIP, NodePort, LoadBalancer) and maps the application to a stable endpoint, enabling communication between pods and external traffic.

These files are necessary because they allow us to declaratively manage and maintain our application's deployment and networking within the Kubernetes cluster.
deployment.yaml

The apiVersion: apps/v1 specifies the stable API version for managing deployments in Kubernetes. The kind: Deployment defines the resource type as a Deployment, ensuring that the desired number of pods are running with the specified configuration.

This section contains metadata about the deployment, including the name of the deployment and the namespace in which the deployment will reside.

The replicas field specifies the number of pod replicas to run, ensuring redundancy and load balancing, with 3 replicas in this case. The selector field ensures that the deployment manages the correct set of pods by matching pods with the label app: product, ensuring only the intended pods are managed by this deployment.
deployment.yaml

The template metadata defines the labels for the pod. In this case, it labels the pod with app: product, allowing the deployment to match and manage the pod.

The pod specification defines the containers that will run inside the pod. The containers section specifies the container's name as product, the image to use, and the ports the container exposes. In this case, the container exposes port 8080, which is the port on which the application inside the container listens.
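Assembled from the two slides above, the manifest can look like this sketch (the deployment name and image tag are assumptions; the rest follows the description):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: product                       # name assumed
  namespace: product-local-use
spec:
  replicas: 3
  selector:
    matchLabels:
      app: product
  template:
    metadata:
      labels:
        app: product
    spec:
      containers:
        - name: product
          image: omarelazhar/microservices-produit   # image assumed from the CI stage
          ports:
            - containerPort: 8080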
service.yaml

The kind: Service defines the resource type as a Service, which enables network access to the pods running the application.

The selector ensures that the service routes traffic only to pods with the label app: product. The ports section exposes port 80 for external traffic, which is forwarded to port 8080 inside the pods where the application is running, using the TCP protocol. The type: ClusterIP specifies that the service is only accessible within the Kubernetes cluster, enabling internal communication between services without exposing it to external access.
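A sketch matching that description (the service name is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: product                 # name assumed
  namespace: product-local-use
spec:
  type: ClusterIP               # internal access only
  selector:
    app: product
  ports:
    - protocol: TCP
      port: 80                  # service port
      targetPort: 8080          # container port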
Applying the files

Now that the files have been created, all that's left is to apply them in our local cluster using the kubectl apply -f filename.yaml command:
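kubectl apply -f deployment.yaml
kubectl apply -f service.yaml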
Verifying the changes

Now we verify the creation of the deployment, the ReplicaSet, and the service using the kubectl command:
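kubectl get deployments,replicasets,services -n product-local-use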
Section 3: Extending the CI Pipeline with GitOps

What is GitOps, anyway?

GitOps is an operational framework that incorporates the DevOps best practices used for application development. GitOps is used to automate the process of provisioning infrastructure, especially modern infrastructure in the cloud. Just as teams use application source code, operations teams adopting GitOps use configuration files stored as Infrastructure as Code.
Section 3: Extending the CI Pipeline with GitOps

How are teams putting GitOps into practice?

GitOps is not an all-in-one product, plugin, or platform. There is no universal answer to this question; indeed, the best way for teams to practice GitOps depends on their specific needs and goals. However, GitOps requires three basic components:

Infrastructure as Code (IaC):
GitOps uses a Git repository as a single source of truth for defining infrastructure.

Merge requests:
GitOps uses merge requests (or pull requests) as the change mechanism for all infrastructure updates.

CI/CD:
GitOps automates infrastructure updates using a Git workflow with continuous integration and delivery (CI/CD). When new code is merged, the CI/CD pipeline applies that change to the environment.
Section 3: Extending the CI Pipeline with GitOps

What does a GitOps workflow look like?

1. A developer makes a change in Git (updates a YAML file for infrastructure).
2. The team reviews and merges the change into the main branch.
3. A GitOps tool (Flux, ArgoCD) detects the update and applies it to the environment.
4. If issues arise, the system can be rolled back using Git.


Section 3: Extending the CI Pipeline with GitOps

What tool will we use in our project?

There are many tools available, each with its strengths and specific use cases, but after evaluating our project's needs and objectives, ArgoCD is the best choice.

ArgoCD:

ArgoCD stands out as a GitOps-based tool that ensures efficient and automated continuous deployment while maintaining synchronization between Git repositories and Kubernetes clusters. Its declarative approach, scalability, and user-friendly interface make it an ideal solution for managing the deployment pipelines in our project.
Section 3: Extending the CI Pipeline with GitOps

ArgoCD allows us to have:

Git and Kubernetes synchronization:
ArgoCD monitors Git repositories containing Kubernetes configuration files and compares the desired state (defined in Git) with the actual state of the resources in the Kubernetes cluster. If discrepancies are detected, ArgoCD can automatically synchronize the cluster to match the state defined in Git.

Automated deployments:
When an update is pushed to Git, ArgoCD can automatically apply these changes to the Kubernetes cluster in real time.

Declarative management:
All infrastructure and applications are managed declaratively, ensuring consistency across environments.
Section 3: Extending the CI Pipeline with GitOps

So, our application (product-service) is already deployed in a Kubernetes cluster. The next steps will be:

- Namespace argocd: we will create a specific namespace for ArgoCD.
- ArgoCD installation: we will install ArgoCD in the Kubernetes cluster and access its user interface to manage our application.
- Access and configuration: we will retrieve the administrator password for ArgoCD and log in to its UI.
Section 3: Extending the CI Pipeline with GitOps

Namespace argocd:

Creating a dedicated namespace for ArgoCD ensures clean, secure, and organized management of its resources within the Kubernetes cluster. It is a crucial step to follow best practices for Kubernetes deployment.
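kubectl create namespace argocd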
Section 3: Extending the CI Pipeline with GitOps

ArgoCD installation:

Now, we will apply the Argo CD installation manifest to deploy Argo CD in the 'argocd' namespace. This manifest installs all the necessary components of Argo CD, such as controllers and services.

The ArgoCD installation manifest is a configuration file that contains all the necessary instructions to deploy ArgoCD and its components in a Kubernetes cluster. It ensures that everything is properly configured and aligned with the requirements, simplifying the installation and management of ArgoCD.
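The official install manifest is applied like this:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml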
Section 3: Extending the CI Pipeline with GitOps

ArgoCD installation:

Accessing the Argo CD web interface:

In this stage, we will forward the port of the Argo CD service to our local machine to access its web interface. This command forwards port 443 (used by the Argo CD service) in the Kubernetes cluster to port 8080 on our local machine. The URL https://localhost:8080 then allows us to access the Argo CD user interface.
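kubectl port-forward svc/argocd-server -n argocd 8080:443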
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

Retrieving the admin password:

By default, Argo CD creates an admin user with the initial password stored in a Kubernetes secret. To retrieve the password, we will run a command against that secret.

Decoding the password: the password is encoded in Base64, so to display it in plain text we need to decode it.
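Combined into one line, retrieval and decoding look like this:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d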
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

Once the password is retrieved, you can log in to the Argo CD user interface with the following information:
Username: admin
Password: the retrieved password
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

Now, we can start configuring Argo CD to sync our Kubernetes application with the Git repository and manage the deployment of our application via GitOps.

To make ArgoCD work, we need to create a file (application.yaml) on which it will base its operation. There are two methods to create the application manifest application.yaml. The first method is to go directly to the Git repository and create the application.yaml file. The second method is to use Git via the CLI: clone the repository, create the file, and then push it to the main branch.
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

The second method:

This Argo CD manifest configures an application called produita-app.

Source: the application's manifests are retrieved from a Git repository (https://gitlab.com/omarazzhar.03/microservices-produit).
Destination: the resources are deployed in the product-local-use namespace of the Kubernetes cluster.
Automation:
- Automatic synchronization of changes in Git with the cluster.
- Support for the automatic creation of the target namespace.
- Auto-healing of drift and cleanup of obsolete resources.
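A sketch of that manifest (the path and targetRevision fields are assumptions; the names, repo URL, and sync options follow the description):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: produita-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/omarazzhar.03/microservices-produit
    targetRevision: main            # assumed
    path: .                         # assumed location of the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: product-local-use
  syncPolicy:
    automated:
      prune: true                   # cleanup of obsolete resources
      selfHeal: true                # auto-healing of drift
    syncOptions:
      - CreateNamespace=true        # automatic creation of the target namespace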
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

We can verify that our file has indeed appeared in the repository.
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

After creating the application.yaml file, which is used to configure our application in Argo CD, we can apply the application.yaml file to our Kubernetes cluster through Argo CD.

This command will create an Application resource (produita-app) in the argocd namespace, and Argo CD will start monitoring the Git repository for any updates.
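kubectl apply -f application.yaml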
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

Checking the application in the Argo CD UI: once logged into the Argo CD interface, you can see your deployed application listed under the applications section. If the application has been successfully created and is being deployed, its status will show as Synced and Healthy if everything is functioning properly.
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

We can verify that our ArgoCD application is functioning correctly using the command-line interface (CLI). The important columns are:

HEALTH: indicates whether our application is in a healthy state.
SYNC STATUS: indicates whether our application is synchronized with the Git repository.
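A sketch of those CLI checks (assumes argocd login has already been performed; the app name matches the manifest above):

argocd app list
argocd app get produita-app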
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

Source code modification to observe end-to-end automated deployment:

To test whether our ArgoCD application is working correctly and whether automated deployment is properly configured, we will open the deployment.yaml file, which defines the state of our Kubernetes cluster, and modify its content. This change will trigger the automated deployment process, allowing us to observe how ArgoCD detects the modification and applies it to the cluster.

In the replicas section of the file, modify the value of replicas (for example, change it from 2 to 3).
Section 3: Extending the CI Pipeline with GitOps

Access and configuration:

Source code modification to observe end-to-end automated deployment:

We wait for ArgoCD to automatically synchronize the changes. If automatic synchronization is configured, the change will be applied automatically.

Expected result
Section 4: Optimizing and securing the VCS and the CI pipeline

The "optimizing and securing the VCS and the CI pipeline" phase ensures an efficient and secure development lifecycle by optimizing CI/CD pipelines with custom runners, caching, reusable jobs, and controlled job execution. It strengthens security through branch protection, commit signing, and automated testing (SAST, SCA, DAST), while implementing granular permissions and vulnerability thresholds to prevent critical risks. Automated notifications and cleanup further streamline operations and resource management.
Setting up a custom GitLab Runner

Using a custom GitLab runner instead of the shared ones is like swapping a ride on a public bus for a private jet. On the public bus (a.k.a. the shared runner), you're just another passenger among many, waiting your turn and hoping there's no traffic. But with your private jet (your custom runner), you soar above the delays and inefficiencies, cruising in an environment that's tailored just for your project's needs, with no unnecessary stops slowing you down.
Setting up a custom GitLab Runner

We first tried to run the gitlab-runner locally, but it wasn't very convenient, so we decided to deploy it to the cloud.

I know it sounds scary, but it's not that difficult.

We used the free $200 credit of the GitHub Student Pack on DigitalOcean.
Setting up a custom GitLab Runner

For this task, we chose the "Regular" option with 4 GB / 2 CPUs, featuring an SSD disk at $24/month. This choice provides us with robust performance and enhanced storage capabilities, ideal for handling our project's more demanding processing and speed requirements efficiently.
Setting up a custom GitLab Runner

After a three-minute wait, the droplet is created.

You can now get your droplet's IPv4 address to later use it to connect with SSH.

You can also see the CPU, RAM, or disk usage and many more options.
Setting up a custom GitLab Runner

After creating the machine with DigitalOcean, we got this IP address: 165.22.27.191. We will then connect to it with SSH to install and configure the GitLab Runner.
Setting up a custom GitLab Runner

Now we will install Docker on this Ubuntu 24.04 machine, since it is a requirement for GitLab Runner. We will use the official Docker documentation found here: https://docs.docker.com/engine/install/ubuntu/

Before installing Docker, let's install curl and ca-certificates.
Setting up a custom GitLab Runner

First, we downloaded Docker's GPG key and set up the software repository. After that, we updated the system's package list and installed the necessary Docker components. The final command shown is docker ps, which confirms that Docker was installed successfully.

Trust us, this is only the beginning of a long nightmare.
Setting up a custom GitLab Runner

We lost 12 long hours trying to figure out why GitLab couldn't connect to the Docker daemon.

Don't be like us: enable remote access to Docker. You can use any port you want; we chose 2375, which is the default port for Docker. (Don't forget to restart Docker.)
Setting up a custom GitLab Runner

In this step, we are setting up a custom GitLab Runner on a Linux system. First, we add GitLab's package repository from the command line, and then we install the GitLab Runner with apt-get install gitlab-runner.

As the progress bar moves forward, it shows our setup getting ready. This helps our projects run faster and smoother, making our software development quicker and more efficient.
Setting up a custom GitLab Runner

In order to connect our custom runner to GitLab, we need to get the registration token, which can be found in:

Settings > CI/CD > Runners

You also need to disable the shared runners provided by GitLab to use only our custom runner.

(Keep this token secret and don't push it to your repo, as someone in our group did.)
Setting up a custom GitLab Runner

Now you just need to connect the runner to GitLab. For this, you need to use the command gitlab-runner register.

You will be asked to provide the registration token, a name for the runner, and the executor. We chose to use docker.

Now the runner is registered successfully.
Setting up a custom GitLab Runner

In this section, we run the runner with the command gitlab-runner run. The runner will then wait for GitLab to send it a job to execute.

Here, the runner has just received a job from GitLab and started executing it.
Create rules to control the pipeline

In GitLab CI, rules are essential because they dictate when and how specific CI jobs should run, depending on the changes made in a project. This helps to optimize the usage of resources and ensures that the pipeline executes relevant tasks only when needed. In this example, these jobs only run when the source is a merge request or a push.
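A sketch of such a rules block on a job (the job it attaches to is left unspecified here):

rules:
  - if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_PIPELINE_SOURCE == "push"'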
Using cache with dependencies

In GitLab CI, caching is used to store specific files between jobs to reduce the build time. For this
example, the cache configuration caches the Maven dependencies located in the ~/.m2/repository
directory. This approach prevents Maven from re-downloading the same dependencies for every build.
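A sketch of that cache configuration. Note that GitLab can only cache paths inside the project directory, so a common pattern (an assumption here, not shown on the slide) redirects the local Maven repository first:

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"   # keep the repo inside the project dir

cache:
  key: maven-dependencies
  paths:
    - .m2/repository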
Generic jobs for better reuse

Generic jobs in GitLab CI are useful because they provide a way to define common tasks that can be reused across multiple projects or stages within a pipeline. This approach helps in maintaining consistency, reducing duplication, and simplifying the maintenance of your CI configurations. In our case, we divided the jobs in gitlab-ci.yml into six files, one per stage of the CI pipeline, and added an include section to gitlab-ci.yml to import the generic jobs we created.
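A sketch of that include section (only pre-build.yml is named on the slides; the other five file names are hypothetical):

include:
  - local: "pre-build.yml"
  - local: "build.yml"       # hypothetical
  - local: "package.yml"     # hypothetical
  - local: "test.yml"        # hypothetical
  - local: "deploy.yml"      # hypothetical
  - local: "post-deploy.yml" # hypothetical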
SECURITY SCANS

Software Composition Analysis

Software Composition Analysis acts like a vigilant code inspector for your software projects, meticulously examining each open-source component and library for potential vulnerabilities. It scours through your project's dependencies, cross-referencing each one against databases of known security vulnerabilities.

Tools: Snyk, Retire.js
Software Composition Analysis
Software Composition Analysis (SCA) - Snyk

We will use the pre-build.yml file to implement an SCA security test. To integrate Snyk testing, the following modifications will be made:

- The Snyk CLI will be installed and authenticated using the $SNYK_TOKEN environment variable.
- A snyk test will be executed to scan project dependencies, and the results will be saved in JSON format as snyk-results.json.
- The Snyk results will be stored as artifacts with a 3-day retention period, providing traceability and accessibility.
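A sketch of such a job (the job name, image, and install method are assumptions; token variable, output file, and retention follow the slide):

snyk-sca:
  stage: pre-build
  image: node:18                      # image choice assumed
  script:
    - npm install -g snyk
    - snyk auth "$SNYK_TOKEN"
    - snyk test --json > snyk-results.json || true   # don't hard-fail the stage on findings
  artifacts:
    paths:
      - snyk-results.json
    expire_in: 3 days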
Software Composition Analysis
Software Composition Analysis (SCA) - Snyk

Snyk JSON report:

This section contains an array of vulnerabilities found in the project. Each entry includes:

- A unique identifier for the vulnerability (CVE or Snyk ID).
- title: a brief description of the issue.
- severity: the severity level of the issue (low, medium, high, or critical).
Software Composition Analysis
Software Composition Analysis (SCA) - RetireJS

Using two SCA tools enhances vulnerability detection, reduces false positives/negatives, and provides more comprehensive security coverage.

So, we will use the same pre-build.yml file to implement the second security test, which is Retire.js. In pre-build.yml, we will add a job to run Retire.js before the build process starts. The job installs Retire.js globally, scans the project's dependencies, and generates a JSON report with a list of high-severity vulnerabilities. The key steps:

- The npm install -g retire command installs RetireJS globally so it can be used in the next step.
- The scan command runs RetireJS and creates a report of high-severity vulnerabilities.
- The report is saved as an artifact (retirejs-report.json) for 3 days, ensuring that it can be accessed later.
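A sketch of the job (job name and image are assumptions; the flags follow Retire.js's documented CLI):

retirejs-sca:
  stage: pre-build
  image: node:18                      # image choice assumed
  script:
    - npm install -g retire
    - retire --severity high --outputformat json --outputpath retirejs-report.json || true
  artifacts:
    paths:
      - retirejs-report.json
    expire_in: 3 days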
Software Composition Analysis
Software Composition Analysis (SCA) - RetireJS

RetireJS report:

The report provides detailed insights into the affected libraries, the nature of the vulnerabilities, and suggested fixes. Each vulnerability is linked to the specific file within the project where the issue originates.
Static Application Security Testing

Static Application Security Testing (SAST) serves as a rigorous code auditor for your software, examining the source code at rest to identify potential security flaws before the application even runs. SAST tools perform a deep dive into your codebase, analyzing the structure, data flow, and coding practices to pinpoint weaknesses such as SQL injection.

Tools: Horusec, Semgrep
Static Application Security Testing
Horusec Security Test (SAST)

In the pre-build.yml file, we will add a job to run the Horusec SAST test, performing a static analysis of our code before the build stage.

This job uses Docker to run the Horusec CLI tool inside a container. The scan checks the project for security vulnerabilities, and the results are output as a JSON file (result.json), which can be used for further review or integration into other processes. The docker:dind service ensures that Docker commands can be run inside the container, allowing Horusec to access and scan the project files.
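A sketch of the job (the Horusec image name and CLI flags are assumptions based on the Horusec documentation and may vary by version):

horusec-sast:
  stage: pre-build
  image: docker:latest
  services:
    - docker:dind
  script:
    # run the Horusec CLI in a container, writing a JSON report into the workspace
    - docker run --rm -v "$PWD:/src" horuszup/horusec-cli:latest horusec start -p /src -o json -O /src/result.json
  artifacts:
    paths:
      - result.json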
Static Application Security Testing
Horusec Security Test (SAST)

A Horusec Security Test (SAST) report identifies vulnerabilities within source code by performing static analysis. This process evaluates the code without executing it, providing insights into potential security flaws, their severity, and remediation suggestions. By addressing these issues early, vulnerabilities can be resolved before deployment.

Horusec Security Test (SAST) report:

As in other security testing reports, this report provides the vulnerability ID (in this case, the reference hash), the severity of the vulnerability, a detailed explanation of it, and, at the end of the report, a classification of all the vulnerabilities found.
Static Application Security Testing
Semgrep Security Test (SAST)

The include section in the gitlab-ci.yml file imports external templates and configuration files to modularize and organize the CI/CD pipeline, making it more maintainable and reusable. By using this method, we can set up Static Application Security Testing (SAST) in our pipeline.

The template will also automatically run dependency scanning, which compares the project dependencies against known vulnerability databases to flag any potential issues.
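A sketch of those includes, using GitLab's stock templates (pairing the SAST template with the dependency-scanning one is our reading of the slide):

include:
  - template: Security/SAST.gitlab-ci.yml                  # runs Semgrep-based SAST jobs
  - template: Security/Dependency-Scanning.gitlab-ci.yml   # dependency checks against CVE databases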
Static Application Security Testing
Semgrep Security Test (SAST)

Semgrep report:

A Semgrep Security Test (SAST) report detects vulnerabilities in source code through static analysis. It highlights issues without executing the code, helping identify security risks early.

Each vulnerability is tied to a file and line number, and the report categorizes vulnerabilities by severity (low, medium, high, critical).
Dynamic Application Security Testing

Dynamic Application Security Testing (DAST) is akin to a stress test for your software, simulating real-world attacks on your applications while they are running to identify vulnerabilities. DAST tools interact with an application from the outside, probing it as an attacker would, by making requests.

Tools: Nikto, Nmap
Dynamic Application Security Testing
Nmap Security Test (DAST)

Nmap is a network scanning tool that performs port scanning, service version detection, and operating system fingerprinting. In this job, Nmap is used to test the exposed services of a target application dynamically. This job runs in the test stage of the pipeline to perform dynamic security checks on the running application.

- Pulls the Nmap Docker image (hysnsec/nmap) from the Docker registry.
- Runs the Nmap scan inside a Docker container.
- Results from the scan are saved for 7 days, ensuring they are accessible for further analysis.

The rules section determines when a job should run in the pipeline. It is configured to execute only for specific triggers, such as when a merge request is created or code is pushed to the repository. The when: always directive ensures that the job will run every time these specified triggers occur, ensuring consistent execution for those events.
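A sketch of the job ($TARGET_HOST, the scan flags, and the output file name are assumptions; the image, stage, retention, and rules follow the slide):

nmap-scan:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker pull hysnsec/nmap
    # service-version scan of the target, XML report written to the workspace
    - docker run --rm -v "$PWD:/tmp" hysnsec/nmap -sV "$TARGET_HOST" -oX /tmp/nmap-results.xml
  artifacts:
    paths:
      - nmap-results.xml
    expire_in: 7 days
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_PIPELINE_SOURCE == "push"'
      when: always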
Dynamic Application Security Testing
Nmap Security Test (DAST)

The report generated by an Nmap security test provides a comprehensive overview of the security posture of a target system. Nmap, a popular open-source tool, conducts various tests to detect vulnerabilities and assess the security risks associated with a network or application in real time.

Nmap report:

The report includes details such as open ports, services running on those ports, and potential weaknesses or misconfigurations that could be exploited by attackers. It categorizes vulnerabilities based on their severity. Additionally, the report includes information about the nature of each vulnerability, such as CVE identifiers, potential impacts, and recommendations for remediation.
Dynamic Application Security Testing
Nikto Security Test (DAST)

Nikto is a web vulnerability scanner that detects potential security issues in web servers. In this job, Nikto dynamically scans the target web application for vulnerabilities. This job also runs in the test stage, performing security checks on the live web application.

- Pulls the Nikto Docker image (hysnsec/nikto) from the Docker registry.
- Runs a temporary Docker container to perform a Nikto web vulnerability scan. The current working directory is mounted to save scan results locally, and the container is automatically removed after the scan completes.

Artifacts: scan results are stored for 7 days, ensuring they remain accessible for review or audits.
Rules: the job is configured to run only for merge request or push events, and when: always ensures it runs consistently for these triggers.
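A sketch of the job ($TARGET_URL and the Nikto flags are assumptions; the image, stage, retention, and rules follow the slide):

nikto-scan:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker pull hysnsec/nikto
    # temporary container, auto-removed; report written into the mounted workspace
    - docker run --rm -v "$PWD:/tmp" hysnsec/nikto -h "$TARGET_URL" -o /tmp/nikto-report.json
  artifacts:
    paths:
      - nikto-report.json
    expire_in: 7 days
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" || $CI_PIPELINE_SOURCE == "push"'
      when: always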
Dynamic Application Security Testing
Nikto Security Test (DAST)

A report generated by a Nikto security test provides an in-depth analysis of web application vulnerabilities. Nikto is a widely used open-source tool that scans web servers for security flaws, misconfigurations, and outdated software that could be exploited by attackers.

Nikto report:

The report includes information on identified vulnerabilities, security misconfigurations, and issues related to web server settings or outdated software versions.

Each finding is accompanied by a detailed description of the vulnerability, associated risks, CVE identifiers (if applicable), and suggested remediation actions.

This report is essential for web administrators and security professionals to address and prioritize the most critical vulnerabilities, helping to improve the security posture of web applications and reduce the risk of successful cyberattacks.
Container Security

Container security is crucial in ensuring that the applications running inside containers are protected from threats and vulnerabilities across their deployment lifecycle. This security discipline focuses on the protection of individual containers, their applications, and the underlying infrastructure they interact with.

Tool: GitLab Container Scanning
Container Security
Container Security Test

Container scanning is a security process that analyzes container images to detect vulnerabilities in their base layers, libraries, and configurations. It ensures that images being deployed do not introduce known vulnerabilities into the environment.

A template is included to enable GitLab's Container Scanning feature. The job runs predefined container scanning tasks to detect Common Vulnerabilities and Exposures (CVEs) in the image layers.
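A sketch of that include (the CS_IMAGE override is an assumption; the variable name varies across GitLab versions):

include:
  - template: Security/Container-Scanning.gitlab-ci.yml

container_scanning:
  variables:
    CS_IMAGE: omarelazhar/microservices-produit   # image to scan (variable name assumed)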
Container Security
Container Security Test

GitLab's Container Scanning report:

A report generated by GitLab's Container Scanning feature provides an extensive security analysis of Docker containers and other containerized applications.

GitLab's Container Scanning identifies vulnerabilities within container images by scanning both the operating system libraries and the application code running inside containers. It also offers information on the specific packages or dependencies within the container that are affected and their potential impact on the containerized application.

This report is crucial for ensuring that containerized applications are secure before deployment, minimizing the risk of security breaches in production environments.
Vulnerability Management

DefectDojo is an open-source application vulnerability correlation and security orchestration tool designed to streamline the security testing process by automating vulnerability management and facilitating effective security program management. It was developed with the intention of simplifying the complex task of tracking and managing the multitude of security vulnerabilities.

Tool: DefectDojo
Vulnerability Management

Installing DefectDojo is quick and simple: you just need to clone the official repo and then run a script that does all the installation for you.

We then use a command to get the admin password for the dashboard, as sketched below.
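A sketch of that sequence, following the DefectDojo README (script names may change between releases):

git clone https://github.com/DefectDojo/django-DefectDojo
cd django-DefectDojo
./dc-build.sh
./dc-up.sh

# the generated admin password appears in the initializer logs
docker compose logs initializer | grep "Admin password:"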
Vulnerability Management

DefectDojo will run on port 8080. You can then connect to the panel using the username admin and the password we found with the command on the last slide. And that's it, you can now check the dashboard.

DefectDojo Login / DefectDojo Dashboard
Automate scan results

We used a Python script to send the security scan reports from the pipeline to DefectDojo.

It uses the API of the DefectDojo dashboard, which provides an endpoint to import a scan: /api/v2/import-scan/. You just need to get a token to authenticate yourself.
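A minimal sketch of such a script (the URL, token variable, engagement id, and scan-type mapping are assumptions around the /api/v2/import-scan/ endpoint; only the endpoint and token auth come from the slides):

# sendtodashboard.py -- upload a scan report to DefectDojo
import os
import sys

import requests

DOJO_URL = "http://localhost:8080/api/v2/import-scan/"      # dashboard URL assumed
DOJO_TOKEN = os.environ["DEFECTDOJO_TOKEN"]                  # API token from the dashboard


def send_report(filename: str, scan_type: str, engagement_id: int) -> None:
    headers = {"Authorization": f"Token {DOJO_TOKEN}"}
    data = {
        "scan_type": scan_type,       # e.g. "Snyk Scan"; must match a DefectDojo parser
        "engagement": engagement_id,  # engagement created in the dashboard (assumed)
        "active": "true",
        "verified": "false",
    }
    with open(filename, "rb") as f:
        resp = requests.post(DOJO_URL, headers=headers, data=data, files={"file": f})
    resp.raise_for_status()
    print(f"Imported {filename}: HTTP {resp.status_code}")


if __name__ == "__main__":
    send_report(sys.argv[1], scan_type="Snyk Scan", engagement_id=1)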
Automate scan results

Now that we have created the Python script, we need to use it in the pipeline to automate the handling of scan results.

After a scan result is generated in the pipeline, we can add python3 sendtodashboard.py "filename" to the scan jobs.
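For instance, appended to the Snyk job sketched earlier (an illustrative fragment, not the full job):

snyk-sca:
  script:
    - snyk test --json > snyk-results.json || true
    - python3 sendtodashboard.py snyk-results.json   # push the report to DefectDojo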
DefectDojo Dashboard

Here we received 17 findings in the last seven days from our security reports; most of them are low and medium, but there are some highs.
DefectDojo Dashboard

You can see more details of each finding by clicking on View Findings Details.
Cleaning old artifacts

GitLab artifacts are files generated during the jobs in a CI/CD pipeline, such as binaries, logs, or test results, which are stored by GitLab after a job completes. But as you can see in the image below, they can take up storage space, which is why it is essential to clean them.
Cleaning old artifacts

You can easily set an expiry date on any artifact. You just need to add the artifacts section at the end of your job and specify how long you want to keep the artifacts.

For our pipeline, we chose 3 days for build artifacts and 7 days for security scan logs.
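For example, on the jobs sketched earlier:

maven-build:
  artifacts:
    paths:
      - target/*.jar
    expire_in: 3 days      # build artifacts

nmap-scan:
  artifacts:
    paths:
      - nmap-results.xml
    expire_in: 7 days      # security scan logs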
Cleaning old pipelines

As you can see, we have a lot of pipelines created, and most of them are failed pipelines. We need to clean them and keep only the passed pipelines related to the main branch.
Cleaning old pipelines

You could delete them manually, but it takes a lot of time to delete hundreds of pipelines. That's why we created a Python script to do it for us instead.

We used the python-gitlab library, which allows you to connect to your project and manage the pipelines, as sketched below. We will see in the next slide how to get the token and the project_id.
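A sketch of such a script with python-gitlab (the token and project id are placeholders; how to obtain them is shown on the next slide):

# cleanup_pipelines.py -- delete failed pipelines and pipelines older than 7 days
from datetime import datetime, timedelta, timezone

import gitlab

GITLAB_URL = "https://gitlab.com"
PRIVATE_TOKEN = "glpat-..."   # personal access token (placeholder)
PROJECT_ID = 12345678         # project id (placeholder)

gl = gitlab.Gitlab(GITLAB_URL, private_token=PRIVATE_TOKEN)
project = gl.projects.get(PROJECT_ID)

cutoff = datetime.now(timezone.utc) - timedelta(days=7)

for pipeline in project.pipelines.list(iterator=True):
    created = datetime.fromisoformat(pipeline.created_at.replace("Z", "+00:00"))
    # keep only passed, recent pipelines; delete everything failed or stale
    if pipeline.status == "failed" or created < cutoff:
        print(f"Deleting pipeline {pipeline.id} ({pipeline.status}, {created:%Y-%m-%d})")
        pipeline.delete()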
Cleaning old pipelines

For the token, navigate to: Profile > Access token > Add new token > Copy the token. This token will be used to authenticate to your GitLab account.

For the project ID, navigate to your project repo, then click the three dots on the right and copy the project ID.
Cleaning old pipelines

Now we just need to run the script and wait for it to do its work. It will loop through all the pipelines, check whether they are older than 7 days or failed, and delete them.

As you can see, we only kept the passed pipelines. We deleted 31 pipelines in total (22 failed + 9 older than 7 days).
Management of groups and permissions in GitLab

First, we start by defining the roles of the project members.

Configuration of granular permissions to secure access (code, pipelines, configurations):

We limit access to the project repository by restricting key actions to "Only Project Members." This includes viewing and editing files, submitting merge requests, forking the repository into new projects, and utilizing CI/CD pipelines for building, testing, and deploying code.
Configuration of granular permissions to secure access (code, pipelines, configurations):

We configure job token permissions to control which projects can use CI/CD job tokens to authenticate with this project. Access is restricted to "Only this project and any groups and projects in the allowlist." We define an allowlist, ensuring only authorized groups or projects, such as omarazzhar.03/microservices-produit, can use job tokens to access sensitive project data.
Branch protection

We configure protected branches to enhance security by restricting modifications to stable branches. For the main branch, only Maintainers are allowed to merge and push changes, ensuring controlled access. Force-pushing is disabled to preserve commit history and prevent unintended overwrites.
Branch protection

We have defined rules in the docker-deploy job to control its execution based on specific conditions. The job runs only when the committed branch is main, ensuring deployments are restricted to the primary branch. Additionally, the rule specifies when: manual, meaning the job must be triggered manually by a user, preventing automatic execution. These rules provide precise control over the deployment process.

We could also add protected: true to ensure only Maintainers can deploy, but this feature is available only in GitLab Ultimate.
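A sketch of those rules on the job:

docker-deploy:
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'   # deployments restricted to main
      when: manual                        # must be triggered by a user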
Push rules: implementation of a commit signature policy for protected branches

We set up an OpenPGP certificate using the Kleopatra application to secure our communications and data. By entering a name and email address, we personalize the certificate to identify ourselves uniquely.
Push rules: implementation of a commit signature policy for protected branches

We have successfully created an OpenPGP key pair where the primary key is configured for certifying and signing. This key has a unique fingerprint (FB65 0A57 5532 13CC F9CD 23BD 51EE 7B1C F80D A479) and uses the ECC (Ed25519) algorithm. Its primary role is to certify other keys (establishing their trustworthiness) and to sign data or messages, ensuring authenticity and integrity.

We then export the key to add it to GitLab.


Push rules: implementation of a commit signature policy for protected branches

Configuring Git to sign commits:
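A sketch of that configuration, using the fingerprint of the key created above:

git config --global user.signingkey FB650A57553213CCF9CD23BD51EE7B1CF80DA479
git config --global commit.gpgsign true   # sign every commit by default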


Push rules: implementation of a commit signature policy for protected branches

Adding the signing key to GitLab:


Draft merge request strategy: use of draft merge requests to prevent premature deployments

GitLab allows creating merge requests in Draft mode (previously called "WIP", Work In Progress), signaling that the MR is a work in progress and cannot be merged until the Draft status is removed. This feature prevents accidental merges or deployments and communicates clearly to reviewers that the MR is not ready yet.
Automation of pipeline execution only after multiple approvals

We have set up approval rules for our project to ensure quality and control over merges. While the default rule for all branches requires no approvals, we have enforced a rule requiring 3 approvals for changes to the main branch under the "Approbations_Multiples" policy. This ensures that critical changes to the primary branch are reviewed and agreed upon by multiple team members before merging.
Pre-commit hooks: adding pre-commit hooks to enforce checks before each commit

For pre-commit hooks, we have two options: either implement them locally by creating a .git/hooks/pre-commit file containing the necessary checks, such as regex templates, or perform the verification in the pipeline to prevent bypassing local checks. We will opt for the pipeline-based approach to ensure stricter enforcement, as sketched below.

N.B.: we have used some well-known GitHub libraries for verification.
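One way to run such checks in the pipeline (the slides do not name the exact libraries; this sketch uses the well-known pre-commit framework and assumes a .pre-commit-config.yaml exists in the repo):

pre-commit-checks:
  stage: pre-build
  image: python:3.11
  script:
    - pip install pre-commit
    - pre-commit run --all-files   # run every configured hook against the whole repo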
Proposed solutions to avoid false positives

Among the possible solutions in this regard, we can create a .pre-commit-ignore file, or simply ignore a file if we are certain that there are no issues with it.

We can also create a whitelist for certain types of files that cause false alerts.

Additionally, we can perform lightweight tests for commits but run more intensive tests for pushes.
Notifications and alerts: automating GitLab email notifications when critical vulnerabilities are detected

First, we create two bash scripts: one to check whether the SAST scan report identifies any critical vulnerabilities, and another to verify the results for critical findings and send an email to the concerned individual, in this case "[email protected]".

Next, we add the script jobs to the pipeline after the pre-build stage (which includes the SAST scan) to use the scan's output report.
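A sketch of the checking script (the jq path assumes Horusec's JSON layout from the pre-build stage; the recipient is a placeholder because the real address is redacted above, and the mail command assumes a configured MTA):

#!/bin/bash
# check_critical.sh -- alert by email if the SAST report contains critical findings
ALERT_EMAIL="security-team@example.com"   # placeholder recipient

CRITICALS=$(jq '[.analysisVulnerabilities[]? | select(.vulnerabilities.severity == "CRITICAL")] | length' result.json)

if [ "$CRITICALS" -gt 0 ]; then
  echo "Pipeline found $CRITICALS critical vulnerabilities." \
    | mail -s "CRITICAL vulnerabilities detected" "$ALERT_EMAIL"
fi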
Section 5: Monitoring

In this section, we will monitor our Kubernetes cluster using tools like Prometheus and Grafana. However, before diving into the implementation, it's essential to understand why monitoring is so important:

● Ensure system reliability and performance
● Detect and resolve issues early
● Secure the cluster
● Optimize resource utilization

These are just a few of the many reasons that make monitoring a critical aspect of modern DevOps practices.


Monitoring Tools

Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. It collects metrics from configured targets, stores them, and allows querying and alerting on this data.

Grafana complements Prometheus by providing visualization capabilities, allowing users to create interactive and customizable dashboards for monitoring metrics.
Before we start, we need to install some prerequisites and add some repos:

- For installing Prometheus, we need to add its repo with Helm first.
- We also need to add the official Helm stable repo.
- Now it's time to update the repos.
- Now it's time to install Prometheus.
- We can check the result with kubectl, as sketched below.
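A sketch of that sequence (the kube-prometheus-stack chart and the release name prometheus are assumptions, chosen to be consistent with the prometheus-grafana resources referenced later):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
kubectl get pods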


Now let's deploy our microservice application inside a namespace called "microservice", creating the namespace and then deploying the microservice:
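A sketch of the two steps (the manifest file names are assumptions, reusing the files from Section 2):

kubectl create namespace microservice
kubectl apply -f deployment.yaml -n microservice
kubectl apply -f service.yaml -n microservice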


Now let's run Prometheus-Grafana using port-forwarding:
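kubectl port-forward svc/prometheus-grafana 3000:80   # service name/port assumed from the kube-prometheus-stack naming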

After that, we can access http://localhost:3000/


The default username for Grafana is admin; for the password, this command needs to be executed:

$ kubectl get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode

Once we enter the admin dashboard, we can check the data sources. We can see that Prometheus is listed among Grafana's data sources; now let's explore it!

Inside the Grafana Node Exporter dashboard:
