DevOps 2 Lab Manual 16 Jan 2024
LAB MANUAL
Programme(UG/PG) : UG
Semester : VI
Prepared By
Mr. S. N. Jaiswal
Associate Professor
Department of Computer Science & Engineering
FOREWORD
It is my great pleasure to present this laboratory manual for third-year engineering students for the subject of DevOps Tools & Techniques Lab.
As students, many of you may have questions in your mind regarding the subject, and this manual attempts to answer exactly those questions.
As you may be aware, MGM has already been awarded ISO 9001:2015 and ISO 14001:2015 certification, and it is our endeavour to technically equip our students by taking advantage of the procedural aspects of ISO certification.
Faculty members are also advised that covering these aspects at the initial stage itself will greatly relieve them in future, as much of the load will be taken care of by the enthusiastic energies of the students once they are conceptually clear.
Dr. H. H. Shinde
Principal
LABORATORY MANUAL CONTENTS
This manual is intended for the third-year students of Computer Science & Engineering in the subject of Introduction to DevOps Tools & Techniques Lab. The manual contains practical/lab sessions related to Introduction to DevOps Tools & Techniques Lab, covering various aspects of the subject to enhance understanding.
DevOps is the combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organizations to better serve their customers and compete more effectively in the market.
Students are advised to go through this manual thoroughly rather than only the topics mentioned in the syllabus, as practical aspects are the key to understanding and conceptual visualization of the theoretical aspects covered in the books.
Good Luck for your Enjoyable Laboratory Sessions
Mr. S. N. Jaiswal          Dr. Deepa Deshpande
Subject Teacher            HOD
LIST OF EXPERIMENTS
MGM’s
Jawaharlal Nehru Engineering College, Aurangabad
To develop computer engineers with the necessary analytical ability and human values who can creatively design and implement a wide spectrum of computer systems for the welfare of society.
2. Preparing graduates for higher education and research in computer science and engineering, enabling them to develop systems for the development of society.
I. To analyze, design and provide optimal solutions for Computer Science & Engineering and multidisciplinary problems.
II. To pursue higher studies and research by applying knowledge of mathematics and fundamentals of computer science.
III. To exhibit professionalism, communication skills and adapt to current trends by engaging in lifelong learning.
Programme Outcomes (POs):
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research methods including design of experiments, analysis and interpretation of data, and synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools including prediction and modeling to complex engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
10.Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
writeeffectivereportsanddesigndocumentation,makeeffectivepresentations,andgiveand
receive clear instructions.
11.Project management and finance: Demonstrate knowledge and understanding of the
engineeringandmanagementprinciplesandapplythesetoone’sownwork,asamemberand
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
LABORATORY OUTCOMES
The practicals/exercises in this section are psychomotor-domain learning outcomes (i.e., subcomponents of the COs), to be developed and assessed to lead to the attainment of the competency. At the end of this course the student will be able to:
LO-1: Install, configure and use Git on different platforms
LO-2: Create continuous integration pipelines using Jenkins
LO-3: Install, configure and use Docker as a container platform
LO-4: Install and use different services of Kubernetes
LO-5: Write infrastructure-as-code scripts using Ansible
LO-6: Install and use Nagios for continuous monitoring
1. Lab Exercise
THEORY:
Introduction:
A web server is a program that processes the network requests of users and serves them the files that make up web pages. This exchange takes place using the Hypertext Transfer Protocol (HTTP). Basically, web servers are computers used to store the files which make up a website, and when a client requests a certain website, the server delivers the requested website to the client. For example, suppose you want to open Facebook on your laptop and enter the URL in the address bar of your browser. The laptop sends an HTTP request to view the Facebook webpage to another computer known as the web server. This computer (web server) contains all the files (usually HTML documents along with text, images, GIF files, etc.) which make up the website. After processing the request, the web server sends the requested website-related files to your computer, and you can then view the website.
Different websites can be stored on the same or different web servers, but that does not affect the actual website that you see on your computer. The web server can be software or hardware, but it is usually software running on a computer. One web server can handle multiple users at any given time, which is a necessity; otherwise there would have to be one web server for each user, which, considering the current world population, is close to impossible. A web server is never disconnected from the internet, because if it were, it would not be able to receive any requests and therefore could not process them.
Working of Web Server:
The step-by-step process of what happens whenever a web browser approaches the web server and requests a web file is as follows:
1. First, the web user types the URL of the web page in the address bar of the web browser.
2. With the help of the URL, the web browser fetches the IP address of the domain name, either by resolving the URL via DNS (Domain Name System) or by looking up the IP in cache memory. The IP address directs the browser to the web server.
3. After making the connection, the web browser requests the web page from the web server with the help of an HTTP request.
4. As soon as the web server receives this request, it immediately responds by sending the requested page or file back to the web browser over HTTP.
5. If the web page requested by the browser does not exist, or if some error occurs in the process, the web server returns an error message.
6. If no error occurs, the browser successfully displays the web page.
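The request in step 3 is plain structured text. As a rough illustration (the host www.example.com and path /index.html are placeholders, not a real site), the raw message a browser would send can be printed from a shell:

```shell
# Print the raw HTTP/1.1 request line and headers a browser would send.
# www.example.com and /index.html are illustrative placeholders.
printf 'GET /index.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n'
```

The first line names the method, path, and protocol version; the Host header tells a server that hosts several websites which one is being requested.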
Journal Write-up
- Introduction
- What is a web server?
- History of web servers
- HTTP protocol
- Working of a web server
- Steps to install Apache Tomcat
- Directory Structure of Apache Tomcat
- Configuration of Apache Tomcat
- Conclusion
THEORY:
Virtualization is technology that you can use to create virtual representations of servers, storage, networks, and other physical machines. Virtualization software mimics the functions of physical hardware to run multiple virtual machines simultaneously on a single physical machine. Businesses use virtualization to use their hardware resources efficiently and get greater returns from their investment. It also powers cloud computing services that help organizations manage infrastructure more efficiently.
CONCLUSIONS:
Students will learn about virtualization and the tools available for virtualization.
o Security
Git is secure. It uses SHA-1 (a secure hash function) to name and identify objects within its repository. Files and commits are checked and retrieved by their checksum at the time of checkout. It stores its history in such a way that the ID of a particular commit depends upon the complete development history leading up to that commit. Once it is published, one cannot make changes to its old versions.
o Speed
Git is very fast, so it can complete tasks quickly. Most Git operations are done on the local repository, which gives it huge speed, whereas a centralized version control system must continually communicate with a server somewhere.
Performance tests conducted by Mozilla showed that it was extremely fast compared to other VCSs. Fetching version history from a locally stored repository is much faster than fetching it from a remote server. The core part of Git is written in C, which avoids the runtime overheads associated with other high-level languages.
Git was developed to work on the Linux kernel; therefore, it is capable enough to handle large repositories effectively. From the beginning, speed and performance have been Git's primary goals.
o Supports non-linear development
Git supports seamless branching and merging, which helps in visualizing and navigating non-linear development. A branch in Git is a lightweight pointer to a single commit, and the full branch structure can be reconstructed with the help of its parent commits.
o Branching and Merging
Branching and merging are great features of Git which make it different from other SCM tools. Git allows the creation of multiple branches without them affecting each other. We can perform tasks like creation, deletion, and merging on branches, and these tasks take only a few seconds. Below are some things that can be achieved with branching:
o We can create a separate branch for a new module of the project, commit to it, and delete it whenever we want.
o We can have a production branch, which always has what goes into production, and changes can be merged into a test branch for testing.
o We can create a demo branch for an experiment and check if it is working; we can also remove it if needed.
o A core benefit of branching is that if we want to push something to a remote repository, we do not have to push all of our branches; we can select a few of them, or all of them together.
o Data Assurance
The Git data model ensures the cryptographic integrity of every unit of our project. It provides a unique commit ID to every commit through the SHA-1 algorithm. We can retrieve and update a commit by its commit ID. Most centralized version control systems do not provide such integrity by default.
o Staging Area
The staging area is also a unique functionality of Git. It can be considered a preview of the next commit: an intermediate area where commits can be formatted and reviewed before completion. When you make a commit, Git takes the changes that are in the staging area and makes them into a new commit. We are allowed to add and remove changes from the staging area, so it can be considered the place where Git stores the changes.
Technically, Git doesn't have a dedicated staging directory where it stores objects representing file changes (blobs); instead, it uses a file called the index.
Another feature that sets Git apart from other SCM tools is that it is possible to quickly stage some of our files and commit them without committing the other modified files in our working directory.
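A minimal sketch of this selective staging, using a throwaway repository (the file names and identity settings below are invented for the demo):

```shell
set -e
cd "$(mktemp -d)"                          # throwaway repository for the demo
git init -q
git config user.email "[email protected]"  # hypothetical identity, local to this repo
git config user.name  "Demo User"

echo "one" > a.txt
echo "two" > b.txt
git add a.txt                              # stage only a.txt; b.txt stays untracked
git commit -q -m "commit only a.txt"

git ls-files                               # lists a.txt only
git status --porcelain                     # shows b.txt as untracked: ?? b.txt
```

The commit contains only what was staged; the other modified file is untouched.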
o Maintain a clean history
Git provides git rebase, one of its most helpful features. It fetches the latest commits from the master branch and puts our code on top of them. Thus, it maintains a clean, linear history of the project.
Benefits of GIT
A version control application allows us to keep track of all the changes that we make in the files of our project. Every time we make changes in the files of an existing project, we can push those changes to a repository. Other developers are allowed to pull your changes from the repository and continue to work with the updates that you added to the project files.
Some significant benefits of using Git are as follows:
Git Installation:
Installing on Windows
There are a few ways to install Git on Windows. The most official build is available for download on the Git website. Just go to https://git-scm.com/download/win and the download will start automatically. Note that this is a project called Git for Windows, which is separate from Git itself; for more information on it, go to https://gitforwindows.org.
To get an automated installation you can use the Git Chocolatey package. Note that the Chocolatey package is community maintained.
Git Bash:
Git Bash is an application for the Windows environment. It is used as the Git command line for Windows. Git Bash provides an emulation layer for a Git command-line experience. Bash is an abbreviation of Bourne Again Shell. The Git package installer contains Bash, Bash utilities, and Git on a Windows operating system.
Experiment 1:
Introduction
Version Control System
Types of version control system
Version Control System tools
History of Git
What is Git?
Properties of Git
Installation of Git on Windows & Linux
Different states of Git
Git Setup and configuration
Terminology of Git
Simple Git commands
Conclusion
THEORY:
1. Branching & Merging:
Git Branch
A branch is a version of the repository that diverges from the main working project. It is a feature available in most modern version control systems. A Git project can have more than one branch. These branches are pointers to snapshots of your changes. When you want to add a new feature or fix a bug, you spawn a new branch to encapsulate your changes. This makes it harder for unstable code to get merged into the main code base, and it also lets you clean up your history before merging into the main branch.
Git Master Branch
The master branch is the default branch in Git. It is created when the first commit is made on the project. When you make the first commit, the master branch points at that starting commit. As you continue making commits, the master branch pointer automatically moves forward. A repository can have only one master branch.
The master branch is the branch into which all changes eventually get merged back. It can be called the official working version of your project.
Operations on Branches
We can perform various operations on Git branches. The git branch command allows you to create, list, rename and delete branches. Many operations on branches are applied through the git checkout and git merge commands, so the git branch command is tightly integrated with them.
The Operations that can be performed on a branch:
Create Branch
You can create a new branch with the help of the git branch command. This command is used as:
Syntax:
$ git branch <branch name>
This command creates the branch named <branch name> locally in the Git directory.
List Branch
You can list all of the available branches in your repository by using the following command. Either the git branch --list or the git branch command lists the available branches in the repository.
Syntax:
$ git branch --list
or
$ git branch
Here, both commands list the available branches in the repository. The symbol * marks the currently active branch.
Delete Branch
You can delete a specified branch. It is a safe operation: Git prevents you from deleting the branch if it has unmerged changes. Below is the command to do this:
$ git branch -d <branch name>
Switch Branch
Git allows you to switch between branches without making a commit. You can switch between two branches with the git checkout command. To switch between branches, the below command is used:
$ git checkout <branch name>
Switch from master Branch
You can switch from master to any other branch available in your repository without making any commit.
Syntax:
$ git checkout <branch name>
Rename Branch
We can rename a branch with the help of the git branch command. To rename a branch, use the below command:
Syntax:
$ git branch -m <old branch name> <new branch name>
Merge Branch
Git allows you to merge another branch with the currently active branch. You can merge two branches with the help of the git merge command. The below command is used to merge branches:
Syntax:
$ git merge <branch name>
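The branch operations above can be chained into one small sketch, using a throwaway repository (branch, file, and identity names are invented for the demo):

```shell
set -e
cd "$(mktemp -d)"                          # scratch repository
git init -q
git config user.email "[email protected]"  # hypothetical identity for the demo
git config user.name  "Demo User"

echo "base" > app.txt
git add app.txt
git commit -q -m "initial commit"

git branch feature                         # create a branch
git branch --list                          # list branches; * marks the active one
git checkout -q feature                    # switch to the new branch
echo "feature work" >> app.txt
git commit -q -am "work on feature"

git checkout -q -                          # switch back to the previous (default) branch
git merge -q feature                       # merge the feature branch in
git branch -d feature                      # safe delete: only allowed once fully merged
```

After the merge, the default branch contains the feature's change and the now-merged branch can be deleted safely with -d.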
2.Stashing
Sometimes you want to switch branches, but you are working on an incomplete part of your current project. You don't want to make a commit of half-done work. Git stashing allows you to do so. The git stash command enables you to switch branches without committing the current changes.
The below figure demonstrates the properties and role of stashing with respect to the repository and working directory.
Git Stash
Generally, stash means "store something safely in a hidden place." The sense in Git is the same: Git temporarily saves your data safely without committing it. Stashing takes the messy state of your working directory and temporarily saves it for further use.
Many options are available with git stash. Some useful options are given below:
git stash
git stash save
git stash list
git stash apply
git stash show
git stash pop
git stash drop
git stash clear
git stash branch
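A short sketch of the stash round-trip in a throwaway repository (file name, contents, and identity are invented for the demo):

```shell
set -e
cd "$(mktemp -d)"                          # scratch repository
git init -q
git config user.email "[email protected]"  # hypothetical identity
git config user.name  "Demo User"

echo "v1" > notes.txt
git add notes.txt
git commit -q -m "first commit"

echo "half-done change" >> notes.txt       # uncommitted work in progress
git stash                                  # tuck it away; the file reverts to "v1"
git stash list                             # one entry: stash@{0}
git stash pop                              # reapply the change and drop the entry
grep "half-done change" notes.txt          # the work is back in the file
```

Between git stash and git stash pop the working tree is clean, which is exactly the point: you can switch branches and come back without committing half-done work.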
3. Rebasing
Git Rebase
Rebasing is the process of reapplying commits on top of another base tip. It is used to apply a sequence of commits from distinct branches onto a final commit. It is an alternative to the git merge command and produces a linear history.
In Git, the term rebase refers to the process of moving or combining a sequence of commits onto a new base commit. Rebasing is very beneficial, and it is best visualized in the context of a feature-branching workflow.
It is good to rebase your branch before merging it.
Git Rebase
Generally, it is an alternative to the git merge command. A merge is always a forward-moving change record. Comparatively, rebase is a compelling history-rewriting tool in Git; it reapplies the commits one by one.
Suppose you have made three commits in your master branch and three in another branch named test. If you merge them, all the commits are merged at once. But if you rebase, the commits are replayed in a linear manner. Consider the below image:
Git Rebase
The above image describes how git rebase works. The commits of the test branch are replayed linearly on top of the commits of the master branch.
Merging is the most straightforward way to integrate branches. It performs a three-way merge between the two branch tips and their common ancestor.
How to Rebase
When you have made some commits on a feature branch (the test branch) and some on the master branch, you can rebase either of these branches. Use the git log command to track the changes (commit history). Check out the branch you want to rebase, then run the rebase command as follows:
Syntax:
$ git rebase <branch name>
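A sketch of rebasing a test branch onto the default branch (names and messages are invented; the default branch name is read from the repository since it may be master or main depending on the Git version):

```shell
set -e
cd "$(mktemp -d)"                          # scratch repository
git init -q
git config user.email "[email protected]"  # hypothetical identity
git config user.name  "Demo User"

echo "base" > file.txt
git add file.txt
git commit -q -m "base commit"
default=$(git rev-parse --abbrev-ref HEAD) # master or main, depending on Git version

git checkout -q -b test                    # feature branch
echo "test change" > test.txt
git add test.txt
git commit -q -m "commit on test"

git checkout -q "$default"                 # meanwhile, the default branch moves on
echo "more" >> file.txt
git commit -q -am "commit on default"

git checkout -q test
git rebase -q "$default"                   # replay the test commit on the new tip
git log --oneline                          # three commits in a single straight line
```

Compare with git merge, which would instead create a fourth (merge) commit joining the two lines of history.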
4. Reverting and Resetting.
In Git, the term revert means undoing changes, and the git revert command is used to apply the revert operation. It is an undo-type command; however, it is not a traditional undo. It does not delete any data; instead, it creates a new commit with the opposite effect, thereby undoing the specified commit. In short, git revert records a new commit.
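A minimal sketch of this behaviour in a throwaway repository (file, messages, and identity invented for the demo):

```shell
set -e
cd "$(mktemp -d)"                          # scratch repository
git init -q
git config user.email "[email protected]"  # hypothetical identity
git config user.name  "Demo User"

echo "good line" > config.txt
git add config.txt
git commit -q -m "good commit"

echo "bad line" >> config.txt
git commit -q -am "bad commit"

git revert --no-edit HEAD    # new commit with the opposite change; nothing deleted
cat config.txt               # back to just: good line
git log --oneline            # three commits: good, bad, and the revert
```

The bad commit is still in the history; the revert commit simply cancels its effect, which is why revert is safe on published branches.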
Aim: Study and implementation of various git commands to push and pull a repository from GitHub
Objective:
1. To Create a free GitHub account and a GitHub repository.
2. To study PUSH & PULL git commands
THEORY:
GitHub is an online software development platform. It's used for storing, tracking, and
collaborating on software projects. It makes it easy for developers to share code files and
collaborate with fellow developers on open-source projects. GitHub also serves as a social
networking site where developers can openly network, collaborate, and pitch their work.
Git Push Command
The git push command uploads content from a local repository to a remote repository. Pushing refers to the process of moving commits from one repository to another. Pushing is the counterpart of git fetch: instead of importing commits to a local branch, it exports commits to a remote branch.
The git push command is used to push the local repository content to a remote repository. After a local repository has been modified, a push is executed to share the modifications with remote team members. Pushing is the way commits are transferred from the local repository to the remote repository.
PUSH
1. Open Git Bash on your system and configure it with a user name and email ID.
$ git config --global user.name "jneccse"
$ git config --global user.email [email protected]
$ git config --list
2. Check current working directory
$ pwd
To change the path use cd command ($ cd path_name)
3. To create a repository in the working directory, use the following commands:
$ mkdir git_demo
$ cd git_demo
$ pwd
We will now initialize a repository to our folder.
$ git init
Something called the "master" appears on the screen. Whenever a Git repository is created for
the first time, it creates a branch, and the name of the branch is master. Navigate to the folder;
you can find a hidden ".git" folder.
If you check the folder, you can see several directories and configurations. Make sure you don't
make any changes to any of the directories.
.git hidden folder is created when a repository is initialized.
4. Create text files in folder git_demo with commands
$ touch abc.txt
$ notepad abc.txt
Type “Hello DevOps Lab” inside file with the notepad. Save it & Close.
5. Check status of file
$ git status
The output shows that no file has been committed yet and that there are untracked files. The untracked files are shown in red.
For Git to track a file, the add command is used. If you know the exact name of the file, you can specify it (for example, $ git add abc.txt); to stage all changes at once, simply type the following command:
$ git add .
6. Commit the file
$ git commit -m "abc"
Let's check the status of the file again:
$ git status
You'll notice that there are no more changes to commit, as the single notepad file was committed in the previous step.
Next, check all the information regarding the commits that were made.
$ git log
This displays the commit ID, author's name, and email ID used. You can also find the date and
commit message on the screen.
7. Add the copied URL, which is your remote repository, to which the local content of your repository will be pushed.
Now, let's push the notepad file to GitHub. Open your GitHub account and create a new repository. The name of the repository will be "Git_Demo".
Copy the repository URL from the GitHub account and paste it into Git Bash. The HTTPS URL copied from the GitHub account identifies the location of the remote repository.
$ git remote add origin 'your_url_name'
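Because pushing to GitHub needs a live account, the same push mechanics can be sketched against a local bare repository standing in for GitHub (all paths, names, and the identity below are invented for the demo):

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/remote.git"      # bare repo standing in for GitHub

cd "$(mktemp -d)"                          # local working repository
git init -q
git config user.email "[email protected]"  # hypothetical identity
git config user.name  "Demo User"

echo "Hello DevOps Lab" > abc.txt
git add abc.txt
git commit -q -m "abc"

git remote add origin "$work/remote.git"   # same shape as: git remote add origin <URL>
git push -q origin HEAD                    # upload the current branch to the remote
git ls-remote origin                       # the branch now exists on the remote
```

With a real GitHub repository, only the remote URL changes; the add-remote and push steps are identical.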
Git Pull Command
The git pull command is used to fetch and download content from a remote repository and immediately update the local repository to match that content. Merging remote upstream changes into your local repository is a common task in Git-based collaboration workflows. The git pull command is actually a combination of two other commands: git fetch followed by git merge. In the first stage of operation, git pull executes a git fetch scoped to the local branch that HEAD points at. Once the content is downloaded, git pull enters a merge workflow: a new merge commit is created and HEAD is updated to point at the new commit.
The pull command is used to bring the changes (commits) from a remote repository into the local repository. It updates the local branches with the remote-tracking branches. Remote-tracking branches are branches that have been set up to push to and pull from the remote repository. Generally, it is a combination of the fetch and merge commands: first it fetches the changes from the remote and then combines them with the local repository.
Syntax:
$ git pull <remote branch URL>
$ git pull <option> [<repository URL> <refspec>...]
In which:
<option>: Options are additional flags used with the command, such as -q (quiet), -v (verbose), or -e (edit).
<repository URL>: Repository URL is your remote repository's URL where you have stored
your original repositories like GitHub or any other git service. This URL looks like:
https://github.com/jneccse/Git_Demo.git
To access this URL, go to your account on GitHub and select the repository you want to clone.
After that, click on the clone or download option from the repository menu. A new pop up
window will open, select clone with https option from available options.
Copy the highlighted URL. This URL is used to Clone the repository.
<refspec>: A ref refers to a commit: for example, heads (branches), tags, and remote branches. You can check heads, tags, and remote branches in the .git/refs directory of your local repository. A refspec specifies and updates the refs.
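The fetch-plus-merge behaviour of git pull can be sketched with two clones of a local bare repository standing in for GitHub (all names and identities below are invented for the demo):

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/remote.git"        # stand-in for the remote on GitHub

git clone -q "$work/remote.git" "$work/dev1" # developer one's copy
cd "$work/dev1"
git config user.email "[email protected]"; git config user.name "Dev One"
echo "v1" > readme.txt
git add readme.txt
git commit -q -m "initial"
git push -q origin HEAD

git clone -q "$work/remote.git" "$work/dev2" # developer two clones at this point

echo "v2" >> readme.txt                      # developer one pushes another commit
git commit -q -am "update"
git push -q origin HEAD

cd "$work/dev2"                              # developer two pulls the new commit
git pull -q                                  # fetch + merge (a fast-forward here)
cat readme.txt                               # now contains v1 and v2
```

Developer two's clone was made before the second push, so its branch is behind; a single git pull fetches the new commit from the remote-tracking branch and merges it in.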
OUTCOME: Use push and pull commands with Git
CONCLUSIONS:
Learned the basics of the push and pull commands and followed a hands-on demo of git push and git pull using Git Bash. In the demo, we saw how files from the local repository could be pushed to the remote repository. The process makes it possible for the team to stay updated while different people perform different tasks in the same program.
Laboratory Task :
● Create account on GitHub
● Create repository on GitHub
● Create local repository
● Clone your last year's mini project from GitHub
● Create two branches for your mini project repository
● List all repositories
● Add new features to your project in both branches
● Merge the branches with master
● Push the local repository to GitHub
● Install GitLab, Bitbucket & Bazaar
● Use different features of above tools
Journal Writeup:
Introduction
GitHub
Remote Repository
Create Remote Repository
Push Local Repository to Remote
Push a branch to GitHub
Pull from Remote to Local Repository
Git pull vs git fetch
Git remote repository management commands
Conclusion
Aim: Creating a simple Maven project, performing unit tests, and resolving dependencies
Objective:
1. Understanding Maven Project
THEORY:
Maven is a powerful project management tool based on the POM (project object model). It is used for project builds, dependency management, and documentation.
It simplifies the build process like Ant, but it is much more advanced than Ant.
The current major version of Maven is 3.
What does it do?
Maven simplifies the above-mentioned problems. It mainly performs the following tasks:
mvn -version
Now it will display the version of Maven and the JDK, including the Maven home and Java home.
Set up the project
First you'll need to set up a Java project for Maven to build. To keep the focus on Maven, make the project as simple as possible for now. Create this structure in a project folder of your choosing.
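The layout Maven expects can be created from the shell; the folder name gs-maven below is arbitrary (chosen to match the artifactId used later in the pom.xml):

```shell
set -e
cd "$(mktemp -d)"                      # scratch location for the demo
mkdir -p gs-maven/src/main/java/hello  # production sources live here
mkdir -p gs-maven/src/test/java/hello  # test sources live here
find gs-maven -type d | sort           # show the created tree
```

Maven finds sources by this convention, so no build configuration is needed to tell it where the code is.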
Within the src/main/java/hello directory, you can create any Java classes you want. Create HelloWorld.java and Greeter.java.
src/main/java/hello/HelloWorld.java
package hello;
src/main/java/hello/Greeter.java
package hello;
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.springframework</groupId>
<artifactId>gs-maven</artifactId>
<packaging>jar</packaging>
<version>0.1.0</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>hello.HelloWorld</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
With the exception of the optional <packaging> element, this is the simplest possible pom.xml
file necessary to build a Java project.
To try out the build, issue the following at the command line:
mvn compile
This will run Maven, telling it to execute the compile goal. When it’s finished, you should find
the compiled .class files in the target/classes directory.
Since it’s unlikely that you’ll want to distribute or work with .class files directly, you’ll probably
want to run the package goal instead:
mvn package
The package goal will compile your Java code, run any tests, and finish by packaging the code
up in a JAR file within the target directory. The name of the JAR file will be based on the
project’s <artifactId> and <version>. For example, given the minimal pom.xml file from before,
the JAR file will be named gs-maven-0.1.0.jar.
mvn install
The install goal will compile, test, and package your project’s code and then copy it into the local
dependency repository, ready for another project to reference it as a dependency.
Declare Dependencies
The simple Hello World sample is completely self-contained and does not depend on any
additional libraries. Most applications, however, depend on external libraries to handle common
and complex functionality.
For example, suppose that in addition to saying "Hello World!", you want the application to print
the current date and time. While you could use the date and time facilities in the native Java
libraries, you can make things more interesting by using the Joda Time libraries.
src/main/java/hello/HelloWorld.java
package hello;
import org.joda.time.LocalTime;
If you were to run mvn compile to build the project now, the build would fail because you’ve not
declared Joda Time as a compile dependency in the build. You can fix that by adding the
following lines to pom.xml (within the <project> element):
<dependencies>
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.9.2</version>
</dependency>
</dependencies>
This block of XML declares a list of dependencies for the project. Specifically, it declares a single dependency for the Joda Time library. Within the <dependency> element, the dependency coordinates are defined by three sub-elements: <groupId>, <artifactId>, and <version>.
By default, all dependencies are scoped as compile dependencies. That is, they should be available at compile time (and, if you were building a WAR file, included in the /WEB-INF/lib folder of the WAR). Additionally, you may specify a <scope> element to specify one of the following scopes:
provided - Dependencies that are required for compiling the project code, but that will be provided at runtime by a container running the code (e.g., the Java Servlet API).
test - Dependencies that are used for compiling and running tests, but not required for building or
running the project’s runtime code.
Now if you run mvn compile or mvn package, Maven should resolve the Joda Time dependency from the Maven Central repository and the build will be successful.
Write a Test
First add JUnit as a dependency to your pom.xml, in the test scope:
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
Then create a test case like this:
src/test/java/hello/GreeterTest.java
package hello;

import static org.hamcrest.CoreMatchers.containsString;
import static org.junit.Assert.assertThat;
import org.junit.Test;

public class GreeterTest {
    private Greeter greeter = new Greeter();

    @Test
    public void greeterSaysHello() {
        assertThat(greeter.sayHello(), containsString("Hello"));
    }
}
Maven uses a plugin called "surefire" to run unit tests. The default configuration of this plugin
compiles and runs all classes in src/test/java with a name matching *Test. You can run the tests
on the command line like this:

mvn test
or just use mvn install step as we already showed above (there is a lifecycle definition where
"test" is included as a stage in "install").
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>org.springframework</groupId>
<artifactId>gs-maven</artifactId>
<packaging>jar</packaging>
<version>0.1.0</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<!-- tag::joda[] -->
<dependency>
<groupId>joda-time</groupId>
<artifactId>joda-time</artifactId>
<version>2.9.2</version>
</dependency>
<!-- end::joda[] -->
<!-- tag::junit[] -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
<!-- end::junit[] -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>hello.HelloWorld</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
CONCLUSIONS:
Students will be able to build a Maven project with unit tests and resolve its dependencies.
Objective:
1. Understand the role of Jenkins
2. Install and Configure Jenkins
THEORY:
Jenkins is an open source continuous integration/continuous delivery and deployment (CI/CD) automation tool written in the Java programming language. It is used to implement CI/CD workflows, called pipelines.
Pipelines automate testing and reporting on isolated changes in a larger code base in real time and facilitate the integration of disparate branches of the code into a main branch. They also rapidly detect defects in a code base, build the software, automate testing of builds, prepare the code base for deployment (delivery), and ultimately deploy code to containers and virtual machines, as well as bare-metal and cloud servers. There are several commercial versions of Jenkins; this definition only describes the upstream open source project.
History
Jenkins is a fork of a project called Hudson, which was trademarked by Oracle. Hudson was
eventually donated to the Eclipse Foundation and is no longer under development. Jenkins
development is now managed as an open source project under the governance of the CD
Foundation, an organization within the Linux Foundation.
Jenkins and CI/CD
Over time, continuous delivery and deployment features have been added to Jenkins. Continuous delivery is the process of automating the building and packaging of code for eventual deployment to test, production staging, and production environments. Continuous deployment automates the final step of deploying the code to its final destination.
In both cases, automation reduces the number of errors that occur because the correct steps and best practices are encoded into Jenkins. Jenkins describes a desired state and the automation server ensures that that state is achieved. In addition, the velocity of releases can be increased since deployments are no longer bounded by personnel limitations, such as operator availability. Finally, Jenkins reduces stress on the development and operations teams by removing the need for middle-of-the-night and weekend rollouts.
How Jenkins works
Jenkins runs as a server on a variety of platforms including Windows, macOS, Unix variants and, especially, Linux. It requires Java 8 or above and can be run on the Oracle JRE or OpenJDK. Usually, Jenkins runs as a Java servlet within a Jetty application server. It can also be run on other Java application servers such as Apache Tomcat. More recently, Jenkins has been adapted to run in a Docker container. There are read-only Jenkins images available in the Docker Hub online repository.
To operate Jenkins, pipelines are created. A pipeline is a series of steps the Jenkins server will take to perform the required tasks of the CI/CD process. These are stored in a plain-text Jenkinsfile. The Jenkinsfile uses a Groovy-based, curly-bracket syntax. Steps in the pipeline are declared as commands with parameters and encapsulated in curly brackets. The Jenkins server then reads the Jenkinsfile and executes its commands, pushing the code down the pipeline from committed source code to production runtime. A Jenkinsfile can be created through a GUI or by writing code directly.
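The structure described above can be sketched as a minimal declarative Jenkinsfile. This is an illustrative example, not taken from the manual; the stage names and the Maven shell commands are assumptions about the project being built:

```groovy
// Minimal declarative pipeline: steps are declared inside curly-bracket blocks.
pipeline {
    agent any                    // run on any available Jenkins agent
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B package'   // compile and package the project
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'         // run the unit tests
            }
        }
    }
}
```

Jenkins reads this file from the repository root and executes the stages in order, failing the build if any step returns a non-zero exit code.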
Plugins
A plugin is an enhancement to the Jenkins system. Plugins help extend Jenkins capabilities and integrate Jenkins with other software. Plugins can be downloaded from the online Jenkins Plugin repository and loaded using the Jenkins Web UI or CLI. Currently, the Jenkins community claims over 1,500 plugins available for a wide range of uses.
Plugins help to integrate other developer tools into the Jenkins environment, add new user interface elements to the Jenkins Web UI, help with administration of Jenkins, and enhance Jenkins for build and source code management. One of the more common uses of plugins is to provide integration points for CI/CD sources and destinations. These include version control systems such as Git and Atlassian Bitbucket, container runtime systems (especially Docker), virtual machine hypervisors such as VMware vSphere, public cloud instances including Google Cloud Platform and Amazon AWS, and private cloud systems such as OpenStack. There are also plugins that assist in communicating with operating systems over FTP, CIFS, and SSH.
Objective
1. Understand build job in Jenkins
THEORY:
Jenkins is an open-source server used for automating software development and deployment. It allows developers to create repeatable jobs that contain all the code necessary to build an application.
What Is a Build Job?
A Jenkins build job contains the configuration for automating a specific task or step in the application building process. These tasks include gathering dependencies; compiling, archiving, or transforming code; and testing and deploying code in different environments.
Jenkins supports several types of build jobs, such as freestyle projects, pipelines, multi-configuration projects, folders, multibranch pipelines, and organization folders.
What is a Jenkins Freestyle Project?
Jenkins freestyle projects allow users to automate simple jobs, such as running tests, creating and packaging applications, producing reports, or executing commands. Freestyle projects are repeatable and contain both build steps and post-build actions.
Even though freestyle jobs are highly flexible, they support a limited number of general build and post-build actions. Any specialized or non-typical action a user wants to add to a freestyle project requires additional plugins.
2. Enter the new project's name in the Enter an item name field and select the Freestyle project type. Click OK to continue.
3. Under the General tab, add a project description in the Description field.
Step 3: Build the Project
1. Click the Build Now link on the left-hand side of the new project page.
2. Click the link to the latest project build in the Build History section.
3. Click the Console Output link on the left-hand side to display the output for the commands you entered.
4. The console output indicates that Jenkins is successfully executing the commands, displaying the current version of Java and the Jenkins working directory.
CONCLUSION:
Students will be able to create and run a freestyle build job in Jenkins
Objective:
1. Understand the Docker Image and Dockerfile
THEORY:
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker's methodologies for shipping, testing, and deploying code, you can significantly reduce the delay between writing code and running it in production.
The Docker platform
Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security lets you run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you don't need to rely on what's installed on the host. You can share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of your containers:
Develop your application and its supporting components using containers.
The container becomes the unit for distributing and testing your application.
When you're ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.
What is Docker daemon?
The Docker daemon runs on the host operating system. It is responsible for running containers and managing Docker services. The Docker daemon can communicate with other daemons, and it manages Docker objects such as images, containers, networks, and storage.
Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers.
How to install Docker on Windows
We can install Docker on any operating system such as Windows, Linux, or Mac. Here, we are going to install the Docker engine on Windows using the Docker Toolbox. Note that the Docker Toolbox runs the Docker engine inside an Oracle VirtualBox virtual machine rather than natively; on current versions of Windows, Docker Desktop is the recommended alternative. To install Docker on Windows with this method, we need to download and install the Docker Toolbox.
Follow the below steps to install docker on windows -
Step 1: Click on the below link to download DockerToolbox.exe.
https://download.docker.com/win/stable/DockerToolbox.exe
Step 2: Once the DockerToolbox.exe file is downloaded, double-click it. When the setup window appears on the screen, click Next.
Step 3: Browse to the location where you want to install the Docker Toolbox and click Next.
Step 4: Select the components according to your requirements and click Next.
Step 5: Select any additional tasks and click Next.
Step 6: The Docker Toolbox is ready to install. Click Install.
Step 7: Once the installation is completed, the wizard's final screen appears; click Finish.
Step 8: After successful installation, three icons appear on the screen: Docker Quickstart Terminal, Kitematic (Alpha), and Oracle VM VirtualBox. Double-click the Docker Quickstart Terminal.
Step 9: A Docker Quickstart Terminal window appears on the screen.
To verify that Docker is successfully installed, type the below command and press the Enter key.

docker --version
Docker Container and Image
A Docker container is a running instance of an image. You can use Command Line Interface (CLI) commands to run, start, stop, move, or delete a container. You can also provide configuration for the network and environment variables. A Docker container is an isolated and secure application platform, but it can share and access resources running on a different host or container.
An image is a read-only template with instructions for creating a Docker container. A Docker image is described in a text file called a Dockerfile, which has a simple, well-defined syntax. An image does not have state and never changes. Docker Engine provides the core Docker technology that enables images and containers.
You can see the images and containers on your system with the docker images and docker ps commands.
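The Dockerfile syntax mentioned above can be sketched with a minimal example. The base image and file names here are illustrative assumptions, not taken from this manual:

```dockerfile
# Start from an official base image pulled from Docker Hub.
FROM nginx:alpine
# Copy a static page into the directory nginx serves from.
COPY index.html /usr/share/nginx/html/index.html
# Document the port the container listens on.
EXPOSE 80
```

Building this file with docker build -t my-web . produces an image, and docker run -d -p 8080:80 my-web starts a container (a running instance of that image).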
Objective:
1. Understand the Docker and containers
THEORY:
A container is something that packages your code along with any other dependencies so that it can be deployed across multiple platforms reliably.
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Containers are built to include everything needed to run an application: code, runtime, system tools, system libraries, and settings.
This means a container is fully isolated from another even if they are for the same application with similar dependencies.
These containers can be run locally on your Windows, Mac, or Linux machine, and major cloud systems like AWS or Azure support them out of the box. You can also use Docker on any hosting space where it can be installed and run.
Docker Compose provides a way to orchestrate multiple containers that work together. Examples include a service that processes requests and a front-end web site, or a service that uses a supporting function such as a Redis cache. If you are using the microservices model for your app development, you can use Docker Compose to factor the app code into several independently running services that communicate using web requests.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services and start them with a single command. The underlying network for your service containers is configured automatically by Docker Compose.
Installation
Docker Compose relies on Docker Engine, so before installing it make sure you have Docker Engine installed on your system.
On desktop systems like Docker Desktop for Mac and Windows, Docker Compose is included as part of those desktop installs. You don't need to install it manually.
On Linux systems, you'll need to:
1. Install Docker Engine
2. Run the command given in the official Docker Compose documentation to download the current stable release of Docker Compose.
Once Compose is installed, create a docker-compose.yml file that defines the services:

services:
  web:
    image: nginx
    ports:
      - 8080:80
  database:
    image: redis

Start the stack with:

docker-compose up

The above command will deploy both the web and database containers in attached mode, so you won't get your bash prompt returned. If you want to run them in detached mode, use the below command:
docker-compose up -d
Now, to see your application up and running, go to your browser and type the following URL: http://localhost:8080
To stop all the running containers, use the following command:
docker-compose down
CONCLUSION:
Successfully deployed a container stack using Docker Compose
THEORY:
Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
Deployment Model
Note: Follow our instructions to install WSL 2 on Windows. WSL 2 runs on top of Hyper-V, offering the best performance. It features superior memory management and deeply integrates with the rest of the Windows host.
3. Press the Close and log out button to complete the installation.
4. Log back into your user account, review the Service Agreement, check the I accept the terms box, and click Accept to complete the Docker installation.
After accepting the agreement, the Docker GUI tool starts.
Step 3: Install Kubernetes
Docker comes with a GUI tool that allows users to configure Docker settings and install and
enable Kubernetes.
There are several methods for installing Kubernetes. This article will cover installing Kubernetes via the Docker settings, Minikube, and Kind. Depending on your machine's specifications, choose the method that suits your system:
● Minikube requires at least 2GB of RAM and 2 CPUs.
● Kind requires 8GB of RAM to deliver good performance.
● Installing Kubernetes via Docker settings takes up to 8 GB of RAM.
Before installing Kubernetes, install kubectl, the Kubernetes CLI tool. This utility lets you run commands against Kubernetes clusters.
Follow these steps to install kubectl:
1. Navigate to the official kubectl download pageandlocate the Install kubectl binarysection:
2. Click the download link for the latest release. At the time of writing this article, the latest release was 1.24.0. Save the file to a directory such as C:\kubectl.
3. Press the Windows button and search for Environment variables. Select Edit the system environment variables.
4. In the System Properties window, click Environment Variables…
5. Under the System Variables section, click the Path environment variable and select Edit to add the kubectl system variable:
6. Click New and add the path to the downloaded kubectl binary file. Select OK in all windows to confirm the changes.
7. Check if everything is set up correctly by running kubectl in Windows PowerShell:
Via Docker GUI
The easiest way to install Kubernetes is by enabling it in Docker settings. Follow the steps below
to do so:
1. In the system tray, right-click the Docker icon. Select Settings from the menu.
Important: If the Docker icon is missing from the system tray, reboot your system. If the problem persists, check the official troubleshooting guide.
2. In Docker settings, select the Kubernetes tab. Check the Enable Kubernetes box and click Apply & Restart.
3. When prompted, click Install to proceed.
4. The tool downloads the necessary cluster components and creates another VM in the
background. When the installation finishes, both the Docker and Kubernetes icons are green,
which means they are up and running:
Via Minikube
Minikube is an open-source tool for running Kubernetes. It works with Linux, Mac, and
Windows by running a single-node cluster inside a virtual machine on the local machine.
Follow the steps below to install Kubernetes via Minikube:
Install Using winget:
1. If you are using winget, the Windows package manager,install Minikube by running:
winget install minikube
The output shows when the installation finishes.
docker --version
Step 3: After the successful execution of all the commands of the second step, we have to install the curl command. curl is used to send data using URL syntax.
Now, install curl by using the following command. During the installation, we have to type Y.
sudo apt-get install curl
Now, we have to download and add the package signing key for Kubernetes with the following command:
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
If you get an error from the above command, then it means your curl command is not
successfully installed, so first install the curl command, and again run the above command.
Now, we have to add the Kubernetes repositories by the following command:
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
After the successful execution of the above command, we have to check any updates by
executing the following command:
sudo apt-get update
Step 4: After the execution of the commands in the above steps, we have to install the components of Kubernetes by executing the following command:

sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
Step 5: Initialize the cluster control plane by running the kubeadm init command.
Step 6: After the above command is successfully executed, we have to run the following commands, which are printed at the end of the kubeadm init output. The following commands are used to start using the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 7: In this step, we have to deploy the pod network using the following command:
sudo kubectl apply -f
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Step 8: After the execution of the above command, we have to run the following command to verify the installation:
sudo kubectl get pods --all-namespaces
If all the pods are listed in the Running state, it means that Kubernetes is successfully installed on our system.
OUTCOME: Install Kubernetes
CONCLUSION:
Students will be able to install Kubernetes
Experiment 9: Installation of Kubernetes
Introduction
What is Kubernetes?
Orchestration & its benefits
Kubernetes Components
Diagram
Nodes
Pods
Control Plane
kube-apiserver
etcd
kube-scheduler
Installation of Kubernetes
What is kubectl?
kubectl commands
Conclusion
Objective:
1. Understand the basics of Kubernetes services
Pre-requisites: understanding of how Docker works and how Docker images are created
THEORY:
Kubernetes is a container management technology developed in the Google lab to manage containerized applications in different kinds of environments such as physical, virtual, and cloud infrastructure. It is an open source system which helps in creating and managing the containerization of applications. This tutorial provides an overview of the different kinds of features and functionalities of Kubernetes and teaches how to manage containerized infrastructure and application deployment.
What can Kubernetes do for you?
With modern web services, users expect applications to be available 24/7, and developers expect to deploy new versions of those applications several times a day. Containerization helps package software to serve these goals, enabling applications to be released and updated without downtime. Kubernetes helps you make sure those containerized applications run where and when you want, and helps them find the resources and tools they need to work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in container orchestration, combined with best-of-breed ideas from the community.
Features of Kubernetes
Following are some of the important features of Kubernetes.
Continuous development, integration, and deployment
Containerized infrastructure
Application-centric management
Auto-scalable infrastructure
Environment consistency across development, testing, and production
Loosely coupled infrastructure, where each component can act as a separate unit
Higher density of resource utilization
Predictable infrastructure which is going to be created
Types of Kubernetes services
There are four types of Kubernetes services:
ClusterIP: ClusterIP is the default service type that enables the communication of multiple pods within the cluster. By default, your service will be exposed on a ClusterIP if you don't manually define it. A ClusterIP can't be accessed from the outside world, but a Kubernetes proxy can be used to access your services. This service type is used for internal networking between your workloads, for debugging your services, for displaying internal dashboards, etc.
NodePort: A NodePort is the simplest networking type of all. It requires no configuration, and it simply routes traffic on a random port on the host to a random port on the container. This is suitable for most cases, but it does have some disadvantages:
You may need to use a reverse proxy (like Nginx) to ensure that web requests are routed correctly.
You can only expose one single service per port.
Container IPs will be different each time the pod starts, making DNS resolution impossible.
The container cannot access localhost from outside of the pod, as there is no IP configured.
Nevertheless, you can use NodePort during experimentation and for temporary use cases, such as demos, POCs, and internal training to show how traffic routing works. It is recommended not to use NodePort in production to expose services.
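As a sketch of how a NodePort Service is declared in a manifest (the service name, label selector, and port numbers below are illustrative assumptions, not taken from this manual):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort           # expose the Service on a port on every node
  selector:
    app: hello-world       # route traffic to Pods carrying this label
  ports:
    - port: 8080           # the Service's cluster-internal port
      targetPort: 8080     # the container port traffic is forwarded to
      nodePort: 31496      # port opened on each node (range 30000-32767)
```

Applying this with kubectl apply -f makes the workload reachable at any node's IP on port 31496.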
LoadBalancer: LoadBalancer is the most commonly used service type for Kubernetes networking. It is a standard load balancer service that runs on each pod and establishes a connection to the outside world, either to networks like the Internet, or within your datacenter. The LoadBalancer will keep connections open to pods that are up, and close connections to those that are down. This is similar to what you have on AWS with ELBs, or on Azure with Application Gateway. Upstreams provide Layer 4 routing for HTTP(S) traffic, whereas downstreams provide Layer 7 routing for HTTP(S) traffic.
You can route traffic on destination port number, protocol and hostname, or use application labels. You can send almost any kind of traffic to this service type, such as HTTP, TCP, UDP, gRPC, and more. Use this approach to expose your services directly.
Run a Hello World application in your cluster. Create the application Deployment using the example manifest:
kubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml
The preceding command creates a Deployment and an associated ReplicaSet. The ReplicaSet has two Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Create a Service object that exposes the Deployment:
kubectl expose deployment hello-world --type=NodePort --name=example-service
Display information about the Service (kubectl describe services example-service) and make a note of the NodePort value in its output. For example, the NodePort value might be 31496.
List the pods that are running the Hello World application:
kubectl get pods --selector="run=load-balancer-example" --output=wide
The output is similar to this:
NAME READY STATUS ... IP NODE
hello-world-2895499144-bsbk5 1/1 Running ... 10.200.1.4 worker1
hello-world-2895499144-m1pwt 1/1 Running ... 10.200.2.5 worker2
Get the public IP address of one of your nodes that is running a Hello World pod. How you get this address depends on how you set up your cluster. For example, if you are using Minikube, you can see the node address by running kubectl cluster-info. If you are using Google Compute Engine instances, you can use the gcloud compute instances list command to see the public addresses of your nodes.
On your chosen node, create a firewall rule that allows TCP traffic on your node port. For example, if your Service has a NodePort value of 31568, create a firewall rule that allows TCP traffic on port 31568. Different cloud providers offer different ways of configuring firewall rules.
Use the node address and node port to access the Hello World application:
curl http://<public-node-ip>:<node-port>
where <public-node-ip> is the public IP address of your node, and <node-port> is the NodePort value for your service. The response to a successful request is a hello message:
Hello Kubernetes!
Cleaning up
To delete the Service, enter this command:
kubectl delete services <service-name>
Objective:
1. Understand the installation and configuration process of Ansible
Pre-requisites:
THEORY:
Ansible is an open-source, cross-platform tool for resource provisioning automation that DevOps professionals popularly use for continuous delivery of software code by taking advantage of an "infrastructure as code" approach.
Ansible® is an open source IT automation engine that automates provisioning, configuration management, application deployment, orchestration, and many other IT processes.
Ansible can be used to install software, automate daily tasks, provision infrastructure and network components, improve security and compliance, patch systems, and orchestrate complex workflows.
How does Ansible work?
Modules
Ansible works by connecting to nodes (or hosts) and pushing out small programs, called modules, to these nodes. Nodes are the target endpoints (servers, network devices, or any computer) that you aim to manage with Ansible. Modules are used to accomplish automation tasks in Ansible. These programs are written to be resource models of the desired state of the system. Ansible then executes these modules and removes them when finished.
Without modules, you'd have to rely on ad-hoc commands and scripting to accomplish tasks. Ansible contains built-in modules that you can use to automate tasks, or you can write new ones on your own. Ansible modules can be written in any language that can return JSON, such as Ruby, Python, or bash. Windows automation modules can even be written in PowerShell.
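The JSON contract mentioned above can be sketched in a few lines of Python. This is a minimal illustration only: the message is hypothetical, and a real Ansible module also reads its arguments from a file Ansible supplies, which is omitted here.

```python
#!/usr/bin/env python3
# Minimal sketch of the JSON contract an Ansible module must honor:
# the module prints a single JSON object describing the task's result.
import json


def run_module(name):
    # Hypothetical task: report a greeting without changing the host,
    # so "changed" is False.
    return {"changed": False, "msg": "Hello, %s" % name}


if __name__ == "__main__":
    print(json.dumps(run_module("world")))
```

Ansible parses this JSON output to decide whether the task changed anything and what to report back to the user.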
Agentless automation
Ansible is agentless, which means the nodes it manages do not require any software to be installed on them. Ansible reads information about which machines you want to manage from your inventory. Ansible has a default inventory file, but you can create your own and define which servers you want to be managed.
Ansible uses the SSH protocol to connect to servers and run tasks. By default, Ansible uses SSH keys with ssh-agent and connects to remote machines using your current username. Root logins are not required. You can log in as any user, and then use su or sudo commands as any user.
Once it has connected, Ansible transfers the modules required by your command or Ansible Playbook to the remote machine(s) for execution. Ansible uses human-readable YAML templates so users can program repetitive tasks to happen automatically without having to learn an advanced programming language.
Using Ansible for ad-hoc commands
You can also use Ansible to run ad-hoc commands, which automate a single task on one or more managed nodes. To do this, you will need to run a command or call a module directly from the command line. No playbook is used, and ad-hoc commands are not reusable. This is fine for a one-time task, but anything more frequent or complex will require the use of an Ansible Playbook.
Ansible Playbooks
Ansible Playbooks are used to orchestrate IT processes. A playbook is a YAML file, using a .yml or .yaml extension, containing one or more plays, and is used to define the desired state of a system. This differs from an Ansible module, which is a standalone script that can be used inside an Ansible Playbook.
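A playbook describing a desired state can be sketched as follows. This is a minimal illustration: the inventory group webservers and the choice of nginx are hypothetical, not taken from this manual:

```yaml
# site.yml - ensure nginx is installed and running on the webservers group
- name: Configure web servers
  hosts: webservers
  become: true                 # escalate privileges with sudo on the hosts
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present         # desired state: the package is installed
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started         # desired state: the service is up
```

It would be run with ansible-playbook -i inventory site.yml; because the tasks describe state rather than steps, re-running the playbook makes no changes on hosts already in the desired state.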
Install Ansible on Windows
There are three ways to run Ansible on Windows 10:
● Cygwin
● Linux Virtual Machine
● Enabling Ubuntu on Windows 10
We shall explain all three methods of installing Ansible on Windows.
Method 1: Using Cygwin
Cygwin is a POSIX-compatible environment that lets you run tools and code designed for
Unix-like operating systems on Microsoft Windows.
Even though the default Cygwin installation contains hundreds of tools for Unix-based systems,
Ansible is not one of them. You must manually add Ansible during the installation process.
To install Ansible on Windows using Cygwin, follow these steps:
1. Download the Cygwin installation file. This file is compatible with both the 32-bit and 64-bit versions of Windows 10. It automatically installs the right version for your system.
2. Run the Cygwin installation file. On the starting screen of the installation wizard, click Next to continue.
3. Select Install from Internet as the download source and click Next.
4. In the Root Directory field, specify where you want the application installed, then click Next.
5. In the Local Package Directory field, select where you want to install your Cygwin packages, then click Next.
6. If you are using a proxy, select Use System Proxy Settings or enter the proxy settings manually with the Use HTTP/FTP Proxy option. Click Next to continue.
7. Choose one of the available mirrors to download the installation files, then click Next.
10. The install wizard will download and install all the selected packages, including Ansible.
11. Once the installation is complete, select whether you want to add a Cygwin desktop and Start Menu icon, then click on Finish to close the wizard.
OUTCOME: Install Ansible
CONCLUSION:
Students will be able to install Ansible
Objective:
1. Understand the use of Nagios for continuous monitoring
Pre-requisites:
THEORY:
What is Continuous Monitoring
Continuous monitoring starts once deployment to the production servers is done. From then
on, this stage is responsible for monitoring everything that happens. This stage is very crucial
for business productivity.
There are several benefits of using continuous monitoring −
● It detects all server and network problems.
● It finds the root cause of a failure.
● It helps in reducing the maintenance cost.
● It helps in troubleshooting performance issues.
● It helps in updating the infrastructure before it gets outdated.
● It can fix problems automatically when they are detected.
● It makes sure the servers, services, applications, and network are always up and running.
● It monitors the complete infrastructure every second.
What is Nagios
Nagios is an open-source app for monitoring systems, networks, and IT infrastructure. The tool
allows users to track the state and performance of:
● Hardware (routers, switches, firewalls, dedicatedservers, workstations, printers, etc.).
● Networks.
● Apps.
● Services.
● Business processes.
● Operating systems (Windows, Linux, Unix, and OS X).
Nagios runs periodic checks on critical thresholds and metrics to monitor for system changes and
potential problems. If the software runs into an issue, the tool notifies admins and can also run
automatic scripts to contain and remedy the situation.
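The check-and-notify cycle described above rests on a simple convention: a Nagios plugin prints one status line and signals its state through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). The following is a minimal sketch of that idea, not an actual shipped plugin; the function name and the threshold values are illustrative:

```shell
# Sketch of a Nagios-style threshold check (illustrative, not a real plugin).
# Exit codes follow the Nagios plugin convention: 0=OK, 1=WARNING, 2=CRITICAL.
check_threshold() {
  value=$1; warn=$2; crit=$3
  if [ "$value" -ge "$crit" ]; then
    echo "CRITICAL - value is $value"; return 2
  elif [ "$value" -ge "$warn" ]; then
    echo "WARNING - value is $value"; return 1
  else
    echo "OK - value is $value"; return 0
  fi
}

# Example: a reading of 75 against warn=80 and crit=90 thresholds.
check_threshold 75 80 90
```

Because the state travels in the exit code, the Nagios scheduler can treat any executable that follows this convention as a check, regardless of what language it is written in.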
You can use Nagios to monitor:
● Memory and disk usage.
● CPU loads.
● The number of running processes.
● Log files.
● System availability.
● Response times.
● URL and content monitoring metrics.
● Services and network protocols (SMTP, POP3, HTTP, etc.).
The tool is available in two main variants:
● Nagios Core: The free version of the software that allows users to track all essential
metrics.
● Nagios XI: A paid, extended version of Core that provides advanced components and
tools for monitoring.
This software is a common tool of choice in DevOps circles due to the solution's scalability,
efficiency, and flexibility.
Why Nagios
Nagios offers the following features, making it usable by a large user community −
● It can monitor database servers such as SQL Server, Oracle, MySQL, and Postgres.
● It gives application-level information (Apache, Postfix, LDAP, Citrix, etc.).
● It is under active development.
● It has excellent support from a huge, active community.
● Nagios runs on any operating system.
● It can ping a host to see if it is reachable.
Benefits of Nagios
Nagios offers the following benefits for the users −
● It helps in getting rid of periodic manual testing.
● It detects split-second failures while a fault is still in the intermittent stage.
● It reduces maintenance cost without sacrificing performance.
● It provides timely notifications to management about outages and breakdowns.
Nagios Architecture
The following points are worth noting about the Nagios architecture −
● Nagios has a server-agent architecture.
● Nagios server is installed on the host and plugins are installed on the remote hosts/servers
which are to be monitored.
● Nagios sends a signal through a process scheduler to run the plugins on the local/remote
hosts/servers.
● Plugins collect the data (CPU usage, memory usage, etc.) and send it back to the
scheduler.
● The process scheduler then sends notifications to the admin(s) and updates the Nagios GUI.
The following figure shows Nagios Server Agent Architecture in detail −
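In practice, the Nagios server learns which hosts and services to schedule from object definitions in its configuration files. The fragment below is a sketch in standard Nagios object syntax; the host name, address, and the template names (linux-server, generic-service) are illustrative placeholders, not values taken from this manual:

```cfg
define host {
    use        linux-server      ; inherit defaults from a host template
    host_name  webserver01       ; illustrative host name
    address    192.168.1.10      ; illustrative IP address
}

define service {
    use                  generic-service
    host_name            webserver01
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}
```

Each `define service` block ties a check command (here, a ping check with warning and critical thresholds) to a host, which the scheduler then runs on its configured interval.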
6. As you can see, some additional instructions appear on the screen. Run the following
command to install the init script in the /lib/systemd/system path:
$ sudo make install-init
7. Next, install and configure permissions on the directory:
$ sudo make install-commandmode
8. Finally, install sample config files in /usr/local/nagios/etc/:
$ sudo make install-config
Step 5: Set up Apache and Nagios UI
1. You need to enable the Apache module required for the Nagios web interface, so run
the following command:
$ sudo make install-webconf
$ sudo a2enmod rewrite cgi
$ sudo systemctl restart apache2
2. Type in the following command for the classic Nagios monitoring theme:
$ sudo make install-classicui
Step 6: Create the First Nagios User
We now need to create a user that can log in to Nagios. The following command creates
a user called nagadmin:
$ sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagadmin
You need to provide a password for the user and confirm it (by default, passwords are
stored in /usr/local/nagios/etc/htpasswd.users).
Step 7: Install Nagios Plugins
Look at the latest available plugins at the official repository (at the time of writing this
article, the newest released version is 2.3.3).
1. To download plugins, type the following command:
$ VER="2.3.3"
$ curl -SL https://github.com/nagios-plugins/nagios-plugins/releases/download/release-$VER/nagios-plugins-$VER.tar.gz | tar -xzf -
2. This command creates a new directory (nagios-plugins-2.3.3) in your current working
directory. To install plugins, you first need to navigate to the new directory:
$ cd nagios-plugins-2.3.3
3. Now compile the plugins from source:
$ ./configure --with-nagios-user=nagios --with-nagios-group=nagios
$ sudo make install
4. To make sure all configurations are in order, run the following command:
$ sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
Step 8: Start the Nagios Daemon
1. The last step is to start the Nagios service, which we achieve with the following
command:
$ sudo systemctl enable --now nagios
2. To make sure the tool is running, use the following command:
$ sudo systemctl status nagios
3. You can now access the tool by opening your browser and navigating to the
http://server-IP/nagios URL.
4. Once prompted, type in the credentials defined in Step 6 to sign in, and you are ready to start
monitoring.