DevOps Lab Manual (Complete)
Subject: DevOps
Prepared By: Antosh Mahadappa Dyade
INDEX
List of Practical
Experiment No. 1
Introduction:
Git is a distributed version control system (VCS) that helps developers track
changes in their codebase, collaborate with others, and manage different
versions of their projects efficiently. It was created by Linus Torvalds in 2005 to
address the shortcomings of existing version control systems.
Unlike traditional centralised VCS, where all changes are stored on a central
server, Git follows a distributed model. Each developer has a complete copy
of the repository on their local machine, including the entire history of the
project. This decentralisation offers numerous advantages, such as offline
work, faster operations, and enhanced collaboration.
Key Concepts:
Materials:
Computer with Git installed (https://git-scm.com/downloads)
Command-line interface (Terminal, Command Prompt, or Git Bash)
Experiment Steps:
Step 1: Setting Up Git Repository
Open the command-line interface on your computer.
Navigate to the directory where you want to create your Git repository.
Run the following commands:
git init
This initialises a new Git repository in the current directory.
git status
After you create and commit a file and then modify it, notice the file is shown as "modified."
git diff
This displays the differences between the working directory and the
last commit.
git log
This displays a chronological history of commits.
Switch back to the "master" branch, then merge the changes from the "feature" branch into it:
git checkout master
git merge feature
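The whole sequence from initialising the repository through the merge can be sketched as a single script. This is a hedged sketch: the file name, commit messages, and identity settings are illustrative, and a scratch directory is used so nothing outside it is touched.

```shell
#!/bin/sh
set -e
# Work in a scratch directory so nothing outside it is touched
mkdir -p git-demo && cd git-demo
git init -q .
git config user.email "student@example.com"   # illustrative identity
git config user.name "Student"

# First commit on the default branch
echo "hello" > notes.txt
git add notes.txt
git commit -q -m "Initial commit"

# Create a feature branch and commit a change there
git checkout -q -b feature
echo "feature work" >> notes.txt
git commit -q -am "Add feature work"

# Return to the previous (default) branch and merge the feature branch
git checkout -q -
git merge -q feature
git log --oneline
```

Note that `git checkout -` returns to the previously checked-out branch, which sidesteps the question of whether your Git version names the default branch "master" or "main".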
Conclusion:
Through this experiment, participants gained a foundational understanding of
Git's essential commands and concepts. They learned how to set up a Git
repository, manage changes, explore commit history, create and merge
branches, and collaborate with remote repositories. This knowledge equips
them with the skills needed to effectively use Git for version control and
collaborative software development.
Exercise:
Experiment No. 2
Title: Implement GitHub Operations using Git.
Objective:
The objective of this experiment is to guide you through the process of using
Git commands to interact with GitHub, from cloning a repository to
collaborating with others through pull requests.
Introduction:
GitHub is a web-based platform that offers version control and collaboration
services for software development projects. It provides a way for developers
to work together, manage code, track changes, and collaborate on projects
efficiently. GitHub is built on top of the Git version control system, which allows
for distributed and decentralised development.
Materials:
Computer with Git installed (https://git-scm.com/downloads)
GitHub account (https://github.com/)
Internet connection
Experiment Steps:
Clone the repository from GitHub, then change into it:
git clone <repository_url>
cd <repository_name>
Create a new text file named "example.txt" using a text editor.
Add some content to the "example.txt" file.
Save the file and return to the command line.
Check the status of the repository:
git status
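The commit-and-push part of this workflow can be sketched end to end. As a self-contained sketch, a local bare repository stands in for the GitHub remote (so the commands run without network access); the file name and identity settings are illustrative.

```shell
#!/bin/sh
set -e
# A local bare repository stands in for the GitHub remote in this sketch
git init -q --bare github-remote.git

git init -q work && cd work
git config user.email "student@example.com"   # illustrative identity
git config user.name "Student"

echo "Hello, GitHub" > example.txt
git add example.txt
git commit -q -m "Add example.txt"

# Add the stand-in remote and push the current branch; with a real GitHub
# repository the URL would be https://github.com/<user>/<repository>.git
git remote add origin ../github-remote.git
git push -q origin HEAD
cd ..

# Confirm the commit arrived on the "remote"
git --git-dir=github-remote.git log --oneline --all
```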
Conclusion:
This experiment provided you with practical experience in performing GitHub
operations using Git commands. You learned how to clone repositories, make
changes, create branches, push changes to GitHub, collaborate through pull
requests, and synchronise changes with remote repositories. These skills are
essential for effective collaboration and version control in software
development projects using Git and GitHub.
Questions:
1. Explain the difference between Git and GitHub.
2. What is a GitHub repository? How is it different from a Git repository?
3. Describe the purpose of a README.md file in a GitHub repository.
4. How do you create a new repository on GitHub? What information is
required during the creation process?
5. Define what a pull request (PR) is on GitHub. How does it facilitate
collaboration among developers?
6. Describe the typical workflow of creating a pull request and having it
merged into the main branch.
7. How can you address and resolve merge conflicts in a pull request?
8. Explain the concept of forking a repository on GitHub. How does it
differ from cloning a repository?
9. What is the purpose of creating a local clone of a repository on your
machine? How is it done using Git commands?
10. Describe the role of GitHub Issues and Projects in managing a
software development project. How can they be used to track tasks,
bugs, and enhancements?
Experiment No. 3
Title: Implement GitLab Operations using Git.
Objective:
The objective of this experiment is to guide you through the process of using
Git commands to interact with GitLab, from creating a repository to
collaborating with others through merge requests.
Introduction:
GitLab is a web-based platform that offers a complete DevOps lifecycle
toolset, including version control, continuous integration/continuous
deployment (CI/CD), project management, code review, and collaboration
features. It provides a centralized place for software development teams to
work together efficiently and manage the entire development process in a
single platform.
Materials:
Computer with Git installed (https://git-scm.com/downloads)
GitLab account (https://gitlab.com/)
Internet connection
Experiment Steps:
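The steps mirror the GitHub workflow of the previous experiment: create a repository, clone it, commit on a feature branch, and push. As a hedged, self-contained sketch, a local bare repository stands in for the GitLab remote; names and messages are illustrative.

```shell
#!/bin/sh
set -e
# A local bare repository stands in for the GitLab remote in this sketch
git init -q --bare gitlab-remote.git

git init -q project && cd project
git config user.email "student@example.com"   # illustrative identity
git config user.name "Student"

echo "GitLab lab" > README.md
git add README.md
git commit -q -m "Initial commit"
git remote add origin ../gitlab-remote.git
git push -q origin HEAD

# Create a feature branch and push it; on a real GitLab server you would
# then open a merge request from feature-1 in the web UI (or push with
# the push option: git push -o merge_request.create origin feature-1)
git checkout -q -b feature-1
echo "change" >> README.md
git commit -q -am "Update README"
git push -q origin feature-1
cd ..
```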
Conclusion:
This experiment provided you with practical experience in performing GitLab
operations using Git commands. You learned how to create repositories, clone
them to your local machine, make changes, create branches, push changes to
GitLab, collaborate through merge requests, and synchronise changes with
remote repositories. These skills are crucial for effective collaboration and
version control in software development projects using GitLab and Git.
Questions/Exercises:
1. What is GitLab, and how does it differ from other version control
platforms?
2. Explain the significance of a GitLab repository. What can a repository
contain?
3. What is a merge request in GitLab? How does it facilitate the code
review process?
4. Describe the steps involved in creating and submitting a merge
request on GitLab.
5. What are GitLab issues, and how are they used in project
management?
6. Explain the concept of a GitLab project board and its purpose in
organising tasks.
7. How does GitLab address security concerns in software development?
Mention some security-related features.
8. Describe the role of compliance checks in GitLab and how they
contribute to maintaining software quality.
Experiment No. 4
Title: Implement BitBucket Operations using Git.
Objective:
The objective of this experiment is to guide you through the process of using
Git commands to interact with Bitbucket, from creating a repository to
collaborating with others through pull requests.
Introduction:
Bitbucket is a web-based platform designed to provide version control, source
code management, and collaboration tools for software development projects.
It is widely used by teams and individuals to track changes in code,
collaborate on projects, and streamline the development process. Bitbucket
offers Git and Mercurial as version control systems and provides features to
support code collaboration, continuous integration/continuous deployment
(CI/CD), and project management.
Materials:
Computer with Git installed (https://git-scm.com/downloads)
Bitbucket account (https://bitbucket.org/)
Internet connection
Experiment Steps:
Check the status of the repository:
git status
Conclusion:
This experiment provided you with practical experience in performing
Bitbucket operations using Git commands. You learned how to create
repositories, clone them to your local machine, make changes, create
branches, push changes to Bitbucket, collaborate through pull requests, and
synchronise changes with remote repositories. These skills are essential for
effective collaboration and version control in software development projects
using Bitbucket and Git.
Questions/Exercises:
Q.1 What is Bitbucket, and how does it fit into the DevOps landscape?
Q.2 Explain the concept of branching in Bitbucket and its significance in
collaborative development.
Q.3 What are pull requests in Bitbucket, and how do they facilitate code
review and collaboration?
Q.4 How can you integrate code quality analysis and security scanning tools
into Bitbucket's CI/CD pipelines?
Q.5 What are merge strategies in Bitbucket, and how do they affect the
merging process during pull requests?
Experiment No. 5
Title: Applying CI/CD Principles to Web Development Using Jenkins, Git,
and Local HTTP Server
Objective:
The objective of this experiment is to set up a CI/CD pipeline for a web
development project using Jenkins, Git, and webhooks, without the need for a
Jenkinsfile. You will learn how to automatically build and deploy a web
application to a local HTTP server whenever changes are pushed to the Git
repository, using Jenkins' "Execute Shell" build step.
Introduction:
Continuous Integration and Continuous Deployment (CI/CD) is a critical
practice in modern software development, allowing teams to automate the
building, testing, and deployment of applications. This process ensures that
software updates are consistently and reliably delivered to end-users, leading
to improved development efficiency and product quality.
In this context, this introduction sets the stage for an exploration of how to
apply CI/CD principles specifically to web development using Jenkins, Git, and
a local HTTP server. We will discuss the key components and concepts
involved in this process.
Key Components:
CI/CD Principles:
Materials:
A computer with administrative privileges
Jenkins installed and running (https://www.jenkins.io/download/)
Git installed (https://git-scm.com/downloads)
A local HTTP server for hosting web content (e.g., Apache, Nginx)
A Git repository (e.g., on GitHub or Bitbucket)
Experiment Steps:
Create a Git repository for your web application. Initialize it with the following
commands:
git init
git add .
git commit -m "Initial commit"
Create a remote Git repository (e.g., on GitHub or Bitbucket) to push
your code to later.
Visit the URL of your local HTTP server to verify that the web application has
been updated with the latest changes.
Conclusion:
This experiment demonstrates how to set up a CI/CD pipeline for web
development using Jenkins, Git, a local HTTP server, and webhooks, without
the need for a Jenkinsfile. By defining and executing the build and deployment
steps using the "Execute Shell" build step, you can automate your
development workflow and ensure that your web application is continuously
updated with the latest changes.
Exercises / Questions:
1. Explain the significance of CI/CD in the context of web development. How
does it benefit the development process and end-users?
2. Describe the key components of a typical CI/CD pipeline for web
development. How do Jenkins, Git, and a local HTTP server fit into this
pipeline?
3. Discuss the role of version control in CI/CD. How does Git facilitate
collaborative web development and CI/CD automation?
4. What is the purpose of a local HTTP server in a CI/CD workflow for web
development? How does it contribute to testing and deployment?
5. Explain the concept of webhooks and their role in automating CI/CD
processes. How are webhooks used to trigger Jenkins jobs in response to Git
events?
6. Outline the steps involved in setting up a Jenkins job to automate CI/CD for
a web application.
7. Describe the differences between Continuous Integration (CI) and
Continuous Deployment (CD) in the context of web development. When might
you use one without the other?
8. Discuss the advantages and challenges of using Jenkins as the automation
server in a CI/CD pipeline for web development.
9. Explain how a Jenkinsfile is typically used in a Jenkins-based CI/CD
pipeline. What are the benefits of defining pipeline stages in code?
10. Provide examples of test cases that can be automated as part of a CI/CD
process for web development. How does automated testing contribute to code
quality and reliability in web applications?
Experiment No. 6
Title: Exploring Containerization and Application Deployment with
Docker
Objective:
The objective of this experiment is to provide hands-on experience with
Docker containerization and application deployment by deploying an Apache
web server in a Docker container. By the end of this experiment, you will
understand the basics of Docker, how to create Docker containers, and how to
deploy a simple web server application.
Introduction
Containerization is a technology that has revolutionised the way applications
are developed, deployed, and managed in the modern IT landscape. It
provides a standardised and efficient way to package, distribute, and run
software applications and their dependencies in isolated environments called
containers.
Benefits of Containerization:
Consistency: Containers ensure that applications run consistently
across different environments, reducing the "it works on my machine"
problem.
Portability: Containers are portable and can be easily moved between
different host machines and cloud providers.
Resource Efficiency: Containers share the host operating system's
kernel, which makes them lightweight and efficient in terms of resource
utilization.
Scalability: Containers can be quickly scaled up or down to meet
changing application demands, making them ideal for microservices
architectures.
Version Control: Container images are versioned, enabling easy
rollback to previous application states if issues arise.
DevOps and CI/CD: Containerization is a fundamental technology in
DevOps and CI/CD pipelines, allowing for automated testing,
integration, and deployment.
Materials:
A computer with Docker installed (https://docs.docker.com/get-docker/)
A code editor
Basic knowledge of Apache web server
Experiment Steps:
Step 1: Install Docker
If you haven't already, install Docker on your computer by following the
instructions provided on the Docker website
(https://docs.docker.com/get-docker/).
Step 2: Create a Simple HTML Page
Create a directory for your web server project.
Inside this directory, create a file named index.html with a simple
"Hello, Docker!" message. This will be the content served by your
Apache web server.
Dockerfile
# Use the official Apache HTTP Server image as the base image
FROM httpd:2.4

# Copy your custom HTML page to the web server's document root
COPY index.html /usr/local/apache2/htdocs/
Step 7: Cleanup
Stop the running Docker container:
docker stop <container_id>
Conclusion:
Exercise/Questions:
Experiment No. 7
Title: Applying CI/CD Principles to Web Development Using
Jenkins, Git, using Docker Containers
Objective:
The objective of this experiment is to set up a CI/CD pipeline for a web
application using Jenkins, Git, Docker containers, and GitHub
webhooks. The pipeline will automatically build, test, and deploy the
web application whenever changes are pushed to the Git repository,
without the need for a pipeline script.
Introduction:
Continuous Integration and Continuous Deployment (CI/CD) principles
are integral to modern web development practices, allowing for the
automation of code integration, testing, and deployment. This
experiment demonstrates how to implement CI/CD for web
development using Jenkins, Git, Docker containers, and GitHub
webhooks without a pipeline script. Instead, we'll utilize Jenkins'
"GitHub hook trigger for GITScm polling" feature.
Materials:
A computer with Docker installed (https://docs.docker.com/get-docker/)
Jenkins installed and configured
(https://www.jenkins.io/download/)
A web application code repository hosted on GitHub
Experiment Steps:
Step 1: Set Up the Web Application and Git Repository
Create a simple web application or use an existing one. Ensure
it can be hosted in a Docker container.
Initialise a Git repository for your web application and push it to
GitHub.
Conclusion:
This experiment demonstrates how to apply CI/CD principles to web
development using Jenkins, Git, Docker containers, and GitHub
webhooks. By configuring Jenkins to listen for GitHub webhook
triggers and executing Docker commands in response to code
changes, you can automate the build and deployment of your web
application, ensuring a more efficient and reliable development
workflow.
Exercise / Questions :
1. Explain the core principles of Continuous Integration (CI) and
Continuous Deployment (CD) in the context of web
development. How do these practices enhance the software
development lifecycle?
2. Discuss the key differences between Continuous Integration
and Continuous Deployment. When might you choose to
implement one over the other in a web development project?
3. Describe the role of automation in CI/CD. How do CI/CD
pipelines automate code integration, testing, and deployment
processes?
4. Explain the concept of a CI/CD pipeline in web development.
What are the typical stages or steps in a CI/CD pipeline, and
why are they important?
5. Discuss the benefits of CI/CD for web development teams.
How does CI/CD impact the speed, quality, and reliability of
software delivery?
6. What role do version control systems like Git play in CI/CD
workflows for web development? How does version control
contribute to collaboration and automation?
7. Examine the challenges and potential risks associated with
implementing CI/CD in web development. How can these
challenges be mitigated?
8. Provide examples of popular CI/CD tools and platforms used in
web development. How do these tools facilitate the
implementation of CI/CD principles?
9. Explain the concept of "Infrastructure as Code" (IaC) and its
relevance to CI/CD. How can IaC be used to automate
infrastructure provisioning in web development projects?
10. Discuss the cultural and organisational changes that may be
necessary when adopting CI/CD practices in a web
development team. How does CI/CD align with DevOps
principles and culture?
Experiment No. 8
Title: Demonstrate Maven Build Life Cycle
Objective:
The objective of this experiment is to gain hands-on experience with the
Maven build lifecycle by creating a simple Java project and executing various
Maven build phases.
Introduction:
Maven is a widely-used build automation and project management tool in the
Java ecosystem. It provides a clear and standardised build lifecycle for Java
projects, allowing developers to perform various tasks such as compiling
code, running tests, packaging applications, and deploying artefacts. This
experiment aims to demonstrate the Maven build lifecycle and its different
phases.
Project Object Model (POM): The POM is an XML file named pom.xml
that defines a project's configuration, dependencies, plugins, and
goals. It serves as the project's blueprint and is at the core of Maven's
functionality.
Build Lifecycle: Maven follows a predefined sequence of phases and
goals organized into build lifecycles. These lifecycles include clean,
validate, compile, test, package, install, and deploy, among others.
Plugin: Plugins are extensions that provide specific functionality to
Maven. They enable tasks like compiling code, running tests,
packaging artifacts, and deploying applications.
Dependency Management: Maven simplifies dependency
management by allowing developers to declare project dependencies
in the POM file. Maven downloads these dependencies from
repositories like Maven Central.
Repository: A repository is a collection of artifacts (compiled libraries,
JARs, etc.) that Maven uses to manage dependencies. Maven Central
is a popular public repository, and organisations often maintain private
repositories.
Clean Lifecycle:
clean: Deletes the target directory, removing all build artifacts.
Default Lifecycle:
validate: Validates the project's structure.
compile: Compiles the project's source code.
test: Runs tests using a suitable testing framework.
package: Packages the compiled code into a distributable format (e.g.,
JAR, WAR).
verify: Runs checks on the package to verify its correctness.
install: Installs the package to the local repository.
deploy: Copies the final package to a remote repository for sharing.
Site Lifecycle:
site: Generates project documentation.
Materials:
A computer with Maven installed
(https://maven.apache.org/download.cgi)
A code editor (e.g., Visual Studio Code, IntelliJ IDEA)
Java Development Kit (JDK) installed
(https://www.oracle.com/java/technologies/javase-downloads.html)
Experiment Steps:
Step 1: Setup Maven and Java
Ensure that you have Maven and JDK installed on your system. You
can verify their installations by running the following commands:
mvn -v
java -version
Create a pom.xml file (Maven Project Object Model) in the project directory.
This file defines project metadata, dependencies, and build configurations.
Here's a minimal example:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>MavenDemo</artifactId>
  <version>1.0-SNAPSHOT</version>
</project>
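For the compile, test, and package phases to have something to build, the project needs at least one source file. A minimal class such as the following could be used; in a real project it would live at src/main/java/com/example/App.java with a matching package declaration, which is omitted here so the snippet stands alone.

```java
// Minimal application class so the compile, test, and package phases
// have input. The package declaration is omitted so the snippet is
// self-contained; in the project it would be "package com.example;".
public class App {
    public static String greet() {
        return "Hello, Maven!";
    }

    public static void main(String[] args) {
        System.out.println(greet());
    }
}
```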
Install Phase: To install the project artifacts (e.g., JAR) into your local
Maven repository, run:
mvn install
Note that running mvn install automatically executes all of the earlier default-lifecycle phases (validate, compile, test, package, verify) before installing the artifact.
Conclusion:
This experiment demonstrates the Maven build lifecycle by creating a simple
Java project and executing various Maven build phases. Maven simplifies the
build process by providing a standardized way to manage dependencies,
compile code, run tests, and package applications. Understanding these build
phases is essential for Java developers using Maven in their projects.
Exercise/Questions:
1. What is Maven, and why is it commonly used in software
development?
2. Explain the purpose of the pom.xml file in a Maven project.
3. How does Maven simplify dependency management in software
projects?
4. What are Maven plugins, and how do they enhance the functionality of
Maven?
5. List the key phases in the Maven build lifecycle, and briefly describe
what each phase does.
6. What is the primary function of the clean phase in the Maven build
lifecycle?
7. In Maven, what does the compile phase do, and when is it typically
executed?
8. How does Maven differentiate between the test and verify phases in
the build lifecycle?
9. What is the role of the install phase in the Maven build lifecycle, and
why is it useful?
10. Explain the difference between a local repository and a remote
repository in the context of Maven.
Experiment No. 9
Title: Demonstrating Container Orchestration using Kubernetes
Objective:
The objective of this experiment is to introduce students to container
orchestration using Kubernetes and demonstrate how to deploy a
containerized web application. By the end of this experiment, students
will have a basic understanding of Kubernetes concepts and how to
use Kubernetes to manage containers.
Introduction:
Container orchestration is a critical component in modern application
deployment, allowing you to manage, scale, and maintain
containerized applications efficiently. Kubernetes is a popular container
orchestration platform that automates many tasks associated with
deploying, scaling, and managing containerized applications. This
experiment will demonstrate basic container orchestration using
Kubernetes by deploying a simple web application.
Materials:
A computer with Kubernetes installed
(https://kubernetes.io/docs/setup/)
Docker installed (https://docs.docker.com/get-docker/)
Experiment Steps:
Step 1: Create a Dockerized Web Application
Create a simple web application (e.g., a static HTML page) or
use an existing one.
Create a Dockerfile to package the web application into a
Docker container. Here's an example Dockerfile for a simple
web server:
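A minimal Dockerfile can mirror the one from Experiment No. 6; the base image tag and page name here are illustrative:

```dockerfile
# Use the official Apache HTTP Server image as the base image
FROM httpd:2.4

# Copy the static page into the web server's document root
COPY index.html /usr/local/apache2/htdocs/
```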
Step 2: Create a Kubernetes Deployment manifest named web-app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app-deployment
spec:
  replicas: 3 # Number of pods to create
  selector:
    matchLabels:
      app: my-web-app # Label to match pods
  template:
    metadata:
      labels:
        app: my-web-app # Label assigned to pods
    spec:
      containers:
        - name: my-web-app-container
          image: my-web-app:latest # Docker image to use
          ports:
            - containerPort: 80 # Port to expose
Explanation of web-app-deployment.yaml:
apiVersion: Specifies the Kubernetes API version being used
(apps/v1 for Deployments).
kind: Defines the type of resource we're creating (a
Deployment in this case).
metadata: Contains metadata for the Deployment, including its
name.
spec: Defines the desired state of the Deployment.
replicas: Specifies the desired number of identical pods to run.
In this example, we want three replicas of our web application.
selector: Specifies how to select which pods are part of this
Deployment. Pods with the label app: my-web-app will be
managed by this Deployment.
template: Defines the pod template for the Deployment.
metadata: Contains metadata for the pods created by this
template.
labels: Assigns the label app: my-web-app to the pods created
by this template.
spec: Specifies the configuration of the pods.
containers: Defines the containers to run within the pods. In this case, we have one container named my-web-app-container using the my-web-app:latest Docker image.
ports: Specifies the ports to expose within the container. Here,
we're exposing port 80.
Conclusion:
In this experiment, you learned how to create a Kubernetes Deployment for container orchestration. The web-app-deployment.yaml file defines the desired state of the application, including the number of replicas, labels, and the Docker image to use.
Kubernetes automates the deployment and scaling of the application,
making it a powerful tool for managing containerized workloads.
Questions/Exercises:
1. Explain the core concepts of Kubernetes, including pods,
nodes, clusters, and deployments. How do these concepts
work together to manage containerized applications?
2. Discuss the advantages of containerization and how
Kubernetes enhances the orchestration and management of
containers in modern application development.
3. What is a Kubernetes Deployment, and how does it ensure
high availability and scalability of applications? Provide an
example of deploying a simple application using a Kubernetes
Deployment.
4. Explain the purpose and benefits of Kubernetes Services. How
do Kubernetes Services facilitate load balancing and service
discovery within a cluster?
5. Describe how Kubernetes achieves self-healing for applications
running in pods. What mechanisms does it use to detect and
recover from pod failures?
6. How does Kubernetes handle rolling updates and rollbacks of
applications without causing downtime? Provide steps to
perform a rolling update of a Kubernetes application.
7. Discuss the concept of Kubernetes namespaces and their use
cases. How can namespaces be used to isolate and organize
resources within a cluster?
8. Explain the role of Kubernetes ConfigMaps and Secrets in
managing application configurations. Provide examples of
when and how to use them.
9. What is the role of storage orchestration in Kubernetes, and
how does it enable data persistence and sharing for
containerized applications?
10. Explore the extensibility of Kubernetes. Describe Helm charts
and custom resources, and explain how they can be used to
customize and extend Kubernetes functionality.
Experiment No. 10
Title: Create the GitHub Account to demonstrate CI/CD pipeline using
Cloud Platform.
Objective:
The objective of this experiment is to help you create a GitHub account and
set up a basic CI/CD pipeline on GCP. You will learn how to connect your
GitHub repository to GCP, configure CI/CD using Cloud Build, and
automatically deploy web pages to an Apache web server when code is
pushed to your repository.
Introduction:
Continuous Integration and Continuous Deployment (CI/CD) pipelines are
essential for automating the deployment of web applications. In this
experiment, we will guide you through creating a GitHub account and setting
up a basic CI/CD pipeline using Google Cloud Platform (GCP) to copy web
pages for an Apache HTTP web application.
Key Components:
A basic CI/CD workflow using GitHub and GCP typically includes the following steps:
Materials:
A computer with internet access
A Google Cloud Platform account (https://cloud.google.com/)
A GitHub account (https://github.com/)
Experiment Steps:
Step 1: Create a GitHub Account
Visit the GitHub website (https://github.com/).
Click on the "Sign Up" button and follow the instructions to create your
GitHub account.
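Once the repository is connected to Cloud Build, a build configuration file describes the deployment. The following is a hypothetical cloudbuild.yaml sketch; the instance name, zone, and source path are assumptions for illustration:

```yaml
# Hypothetical Cloud Build config: copy the repository's web pages to an
# Apache VM's document root. Instance name, zone, and paths are illustrative.
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'compute'
      - 'scp'
      - '--recurse'
      - './www/'
      - 'apache-vm:/var/www/html'
      - '--zone=us-central1-a'
```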
Conclusion:
In this experiment, you created a GitHub account, set up a basic CI/CD
pipeline on Google Cloud Platform, and deployed web pages to an Apache
web server. This demonstrates how CI/CD can automate the deployment of
web content, making it easier to manage and update web applications
efficiently.
Exercise / Questions:
Experiment No. 11
Title: Demonstrate Infrastructure as Code using Terraform.
Objective:
The objective of this experiment is to introduce you to Terraform and
demonstrate how to create, modify, and destroy infrastructure
resources locally using Terraform's configuration files and commands.
Introduction:
Terraform is a powerful Infrastructure as Code (IaC) tool that allows
you to define and provision infrastructure using a declarative
configuration language. In this experiment, we will demonstrate how to
use Terraform on your local machine to create and manage
infrastructure resources in a cloud environment.
Typical Workflow:
Configuration Definition: Define your infrastructure
configuration using Terraform's declarative syntax. Describe the
resources, providers, and dependencies in your *.tf files.
Initialization: Run terraform init to initialize your Terraform
project. This command downloads required providers and sets
up your working directory.
Planning: Execute terraform plan to create an execution plan.
Terraform analyzes your configuration and displays what
changes will be made to the infrastructure.
Provisioning: Use terraform apply to apply the changes and
provision resources. Terraform will create, update, or delete
resources as needed to align with your configuration.
State Management: Terraform maintains a state file (by default,
terraform.tfstate) that tracks the current state of the
infrastructure.
Modifications: As your infrastructure requirements change,
update your Terraform configuration files and run terraform
apply again to apply the changes incrementally.
Destruction: When resources are no longer needed, you can
use terraform destroy to remove them. Be cautious, as this
action can't always be undone.
Advantages of Terraform:
Predictable and Repeatable: Terraform configurations are
repeatable and idempotent. The same configuration produces
the same results consistently.
Collaboration: Infrastructure configurations can be versioned,
shared, and collaborated on by teams, promoting consistency.
Multi-Cloud: Terraform's multi-cloud support allows you to
manage infrastructure across different cloud providers with the
same tool.
Community and Modules: A rich ecosystem of modules,
contributed by the community, accelerates infrastructure
provisioning.
Terraform has become a fundamental tool in the DevOps and
infrastructure automation landscape, enabling organizations to
manage infrastructure efficiently and with a high degree of
control.
Materials:
A computer with Terraform installed
(https://www.terraform.io/downloads.html)
Access to a cloud provider (e.g., AWS, Google Cloud, Azure)
with appropriate credentials configured
Experiment Steps:
Step 1: Install and Configure Terraform
provider "aws" {
  region = "us-east-1" # Change to your desired region
}

resource "aws_s3_bucket" "example_bucket" {
  bucket = "my-unique-bucket-name" # Replace with a globally unique name
  acl    = "private"
}
terraform destroy
Confirm the destruction by typing "yes."
Explanation:
Conclusion:
In this experiment, you learned how to use Terraform on your local
machine to create and manage infrastructure resources as code.
Terraform simplifies infrastructure provisioning, modification, and
destruction by providing a declarative way to define and maintain your
infrastructure, making it a valuable tool for DevOps and cloud
engineers.
Exercises / Questions