Linux Unlocked
About the Author
References
This book is built on the collective knowledge and contributions of the global Linux
community, as well as various technical resources that have provided invaluable insights
into the world of Linux. Below is a list of references that informed the content of this book:
1. Linux Documentation Project
Official documentation for various Linux distributions and tools.
https://www.tldp.org
2. GNU Project
Resources on the GNU operating system, which forms the foundation of Linux
distributions.
https://www.gnu.org
3. Linux Kernel Archives
Comprehensive resources and updates about the Linux kernel.
https://www.kernel.org
4. Linux Man Pages
A detailed reference for Linux commands, configuration, and programming interfaces.
https://man7.org/linux/man-pages/
5. Red Hat Documentation
Enterprise-level insights into Linux administration and usage.
https://access.redhat.com/documentation
6. Ubuntu Documentation
A user-friendly resource for learning Ubuntu and related tools.
https://help.ubuntu.com
7. Arch Wiki
Advanced insights and guides for the Arch Linux distribution and Linux in general.
https://wiki.archlinux.org
8. Stack Overflow
A platform for developers and Linux enthusiasts to ask questions and share solutions.
https://stackoverflow.com
9. Security Resources
O'Reilly's Linux Security Cookbook
Online resources such as https://linuxsecurity.com
10. Books and Publications
Michael Kerrisk, The Linux Programming Interface
Evi Nemeth et al., UNIX and Linux System Administration Handbook
Table of Contents
20. Preparing for Certification and Career in Linux
21. Useful Commands Cheat Sheet
Preface
only use Linux but to contribute to its thriving ecosystem in your own
unique way.
I would like to extend my gratitude to the incredible Linux community,
whose contributions have made this book possible. To the readers, I
welcome you to the world of Linux, a journey of endless learning and
discovery.
Welcome to Linux. Let’s get started.
S. Rathore
Author
Chapter 1: Introduction to Linux
Linux). It's also widely used in the IT industry for everything
from system administration to software development.
2010s: The rise of Android made Linux the dominant OS in
mobile computing, further expanding Linux's presence
worldwide.
CentOS: A Red Hat Enterprise Linux (RHEL)-based
distribution, CentOS is favored by enterprise environments
due to its stability and long-term support. It’s widely used for
servers.
forensics. Some of the most well-known tools include
Metasploit, Nmap, Wireshark, and Aircrack-ng.
Customizable for Security Audits: Kali allows deep
customization for conducting security audits and
vulnerability assessments on different networks and
systems.
Live and Persistent Modes: Kali can be run as a Live USB,
meaning you don’t have to install it to use it. You can even
create a persistent storage on the USB for saving
configurations and data between sessions.
Ideal for Cybersecurity: If you’re interested in becoming a
penetration tester or ethical hacker, Kali Linux is the tool you
need. Its wide variety of security tools makes it the most
comprehensive platform for ethical hacking.
Linux offers a minimal installation that allows users to build
their system from the ground up.
1. Linux:
Open-source, customizable, and security-focused.
Used in everything from personal desktops to enterprise
servers and cloud computing.
Available in multiple distributions (Ubuntu, CentOS, Kali
Linux, etc.).
2. Windows:
Closed-source and proprietary.
Extremely popular for personal desktops and gaming.
Supports a vast range of third-party applications but is
more vulnerable to viruses and malware.
3. macOS:
Unix-based and closed-source.
Known for its sleek interface and integration with Apple
hardware.
Favored by creative professionals, such as designers and
video editors.
4. Kali Linux:
Specialized distribution focused on penetration testing
and cybersecurity.
Comes pre-installed with a variety of security tools.
Used by ethical hackers to test vulnerabilities in
systems and networks.
Kali is not a general-purpose OS but a tool for security
professionals.
Conclusion
We’ve highlighted distributions like Ubuntu, CentOS, and Fedora,
but one of the most important and specialized distributions to
mention is Kali Linux. Kali is the go-to choice for anyone interested
in penetration testing, cybersecurity, and ethical hacking. If you
are interested in pursuing a career in ethical hacking or
cybersecurity, Kali Linux will be your primary tool. This chapter
has laid the foundation for understanding how Linux works, its
distributions, and the differences between them. The next
chapters will delve into more advanced topics, including setting up
your environment, mastering Linux commands, and
understanding system administration, networking, and security.
**************************************************************************************
Chapter 2: Getting Started with Linux
A Dual Boot setup allows you to install both Linux and your
existing operating system (like Windows) on the same machine. By
setting up a Dual Boot, you can choose which OS to use when you
start your computer. This method provides a flexible way to use
Linux alongside another OS without sacrificing your previous
setup.
Before you begin installing Linux, it's always best practice to back
up your important files. Changing system settings and
partitioning the hard drive can be risky, so ensuring that your data
is safe will give you peace of mind.
How to Back Up? You can back up your files by copying them
to an external hard drive, cloud storage (like Google Drive or
Dropbox), or another partition on the same hard drive.
Next, you need to create free space on your disk for the Linux
installation. This will involve shrinking an existing partition (most
commonly the Windows partition) to make room for
Linux.
Resizing Partitions (Windows): If you're using Windows
and want to dual-boot with Linux, you’ll need to shrink your
Windows partition to free up space for Linux. Follow these steps:
After shrinking, you will see Unallocated Space on
your disk. This space will be used for the Linux
installation.
iii. Make sure the partition scheme is set to GPT if your
system is UEFI-based, or MBR if your system is legacy
BIOS-based.
iv. Click Start to create the bootable USB.
Now that you have a bootable USB, it’s time to boot your computer
from the USB drive to begin the installation of Linux.
Insert the USB Drive: Plug the bootable USB into your
computer and restart it.
Access the BIOS/UEFI Settings:
o To boot from the USB, you need to access your system’s
BIOS or UEFI settings.
o This is usually done by pressing a specific key during
startup, such as F2, F10, ESC, or DEL. The key varies
depending on the manufacturer of your system. You can
check your system's documentation or look for a
message that tells you which key to press for Boot
Options.
Change Boot Order:
o Once inside the BIOS/UEFI settings, navigate to the Boot
menu.
o Set the USB device as the first boot option. Save and exit
the BIOS settings. Your system will restart and boot from
the USB stick.
5. Install Linux
Once your system boots into the Linux installer, you can begin the
installation process. The installer will guide you step-by-step
through the process.
o Swap Partition: Typically sized to match the size of your RAM (e.g., 4GB of RAM means a 4GB swap).
o Home Partition (optional): If you want to keep your
personal files separate from the operating system files,
you can create a separate /home partition.
Example partitioning:
After successfully installing Linux on your system, there are a few
important post-installation steps you should take to ensure
everything works smoothly. These include installing drivers for
hardware such as graphics cards, Wi-Fi adapters, printers, and
other peripherals. Here's a step-by-step guide to help you with
these tasks:
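A typical first step is to look at the CPU with the lscpu command:
lscpu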
This will show you the CPU architecture, model, cores, and
other relevant information.
lspci | grep -i vga
3. Install Graphics Drivers
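On Ubuntu-based systems, one common approach (assuming a GPU supported by the ubuntu-drivers helper) is to let it pick the recommended driver:
sudo ubuntu-drivers autoinstall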
sudo reboot
For Broadcom Wi-Fi adapters:
sudo apt install bcmwl-kernel-source
sudo reboot
If you are using a printer, you can install CUPS (Common UNIX
Printing System) for managing printers:
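sudo apt install cups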
After installing CUPS, you can add printers via the CUPS web
interface (http://localhost:631) or using terminal commands.
sudo apt install alsa-base alsa-utils
sudo reboot
Once all drivers have been installed, restart your system to make
sure all changes are properly applied:
sudo reboot
Conclusion
more about open-source software, this dual boot setup will allow
you to explore and use Linux without giving up your existing OS.
But this is only the start of your journey. Linux is more than just an
operating system—it's a powerful tool for learning, managing
systems, securing networks, and automating tasks. In the
upcoming chapters, we will dive deeper into the Linux file system,
terminal commands, and system administration skills. We will
guide you step-by-step through the advanced features that make
Linux a favourite for professionals and enthusiasts worldwide.
**************************************************************************************
Chapter 3: Linux File System Structure
/usr: Contains read-only user data, including programs and
libraries.
/bin: Contains essential binary files (programs) that are
needed for the system to boot and run.
/var: Contains files that are expected to change frequently,
such as log files, caches, and spools.
/tmp: A temporary directory used by applications to store
files that don’t need to persist.
Example:
/home/arjun
├── Documents
│ └── Report.txt
├── Downloads
└── Pictures
3.2 Linux is Case-Sensitive
1. File names
2. Commands
3. Directories
For example, you can have two different files with the same name
but different capitalization:
File.txt
file.txt
These are considered two distinct files by Linux because "F" and "f"
are different.
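You can see this for yourself by creating both names and listing them:
touch File.txt file.txt
ls
Result:
File.txt file.txt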
Let’s consider the example of a user named Arjun and two files:
1. /home/arjun/Documents/Report.txt
2. /home/arjun/Documents/report.txt
These are two different files, and Linux treats them as such because
the names differ by case. If you try to open Report.txt but type
report.txt in a command, Linux will not recognize it, since
"Report.txt" and "report.txt" are considered entirely separate files.
1. Mistyping Commands
Correct Command:
ls /home/arjun/Documents/Report.txt
Incorrect Command (case mismatch):
ls /home/arjun/Documents/report.txt
2. Capitalization in Directories
/home/arjun/Documents/Important/
/home/arjun/Documents/important/
Example:
If you're unsure of the exact case of a file name, you can list the directory contents and compare the names; adding the -i option to ls also displays each file's inode (unique identifier), which confirms that similarly named files really are distinct entries:
ls -i /home/arjun/Documents/
This will show the files along with their inode numbers, which can
help you identify which file you're looking for.
Once you understand the structure of the Linux file system, you
can begin to work with files and directories. Below are some basic
commands that will help you manipulate files and directories.
Navigating Directories
List files and directories (ls): Use ls to list the contents of a
directory.
ls
Absolute Path: This is the full path starting from the root
directory (/). For example:
/home/arjun/Documents/Report.txt
Relative Path: This path is relative to the current directory
you're in. For example, if you're already in
/home/arjun/Documents, you can refer to Report.txt just as:
Report.txt
Example:
Absolute path:
/home/arjun/Documents/Report.txt
Relative path (if you're already inside
/home/arjun/Documents):
Report.txt
Summary
**************************************************************************************
Chapter 4: Working with Files and Directories
Creating Files
To create a file in Linux, you can use commands like touch or echo.
1. Using touch:
o The touch command is the simplest way to create an empty file.
touch myfile.txt
Example Output: No output is shown on the screen when
using touch, but you can check if the file was created by listing
the files with ls:
ls
Result:
myfile.txt
2. Using echo:
o You can also create files and write some text to them using the echo command.
echo "Hello, Linux!" > myfile.txt
cat myfile.txt
Result:
Hello, Linux!
Creating Directories
mkdir myfolder
ls
Result:
myfolder
mkdir -p parent/child/grandchild
Example Output: After running the ls command, you’ll see the full
directory structure:
ls -R
Result:
parent:
child
parent/child:
grandchild
4.2: Navigating Directories
Example Output: After using ls in the new directory, you'll see the
contents of myfolder:
ls
Result:
myfile.txt
Example Output: This brings you back to the directory you were in
before. Running ls will show the parent directory:
ls
Result:
myfolder
Copying Files
cp myfile.txt /home/arjun/Documents/
Example Output: When you check /home/arjun/Documents,
you’ll see myfile.txt there:
ls /home/arjun/Documents
Result:
myfile.txt
If you want to copy an entire directory and its contents, use the -r
(recursive) flag:
cp -r myfolder /home/arjun/Documents/
Moving Files
To move a file:
mv myfile.txt /home/arjun/Documents/
ls
Result:
ls /home/arjun/Documents
Result:
myfile.txt
To rename a file:
mv myfile.txt newfile.txt
Example Output: Running ls will show the new file name:
ls
Result:
newfile.txt
Deleting Files and Directories
rm myfile.txt
Example Output: After running ls, myfile.txt will no longer be
present:
ls
Result:
rm -r myfolder
Example Output: Running ls will show that myfolder has been
deleted:
ls
Result:
ls -l myfile.txt
Example Output:
The first character (-) indicates it's a regular file (if it were a
directory, it would be d).
The next three characters (rw-) represent the owner’s
permissions (read and write).
The next three characters (r--) represent the group’s
permissions (read-only).
The final three characters (r--) represent others’ permissions
(read-only).
You can change file permissions using the chmod (change mode)
command.
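For example, to give the owner execute permission on a script (the file name here is just an illustration):
chmod u+x myscript.sh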
sudo gives you superuser privileges, which are required to
install software.
apt is the package management tool.
install tells apt you want to install something.
leafpad is the name of the package you want to install.
Conclusion
**************************************************************************************
Chapter 5: User and Group Management
In Linux, users are the individual accounts that access the system,
and groups are collections of users who are given certain
permissions on files and resources. Understanding users and
groups is vital because it helps in controlling access to system files
and resources.
o The root user is the superuser with full privileges,
capable of executing all commands and accessing all
files.
Groups: Groups are used to organize users and grant them the
same permissions. Rather than granting permissions to each
user individually, you can grant permissions to a group,
making it easier to manage multiple users at once. Each user
can belong to one or more groups.
Linux uses the concept of a Primary Group (the group the user
is initially assigned to) and Secondary Groups (additional
groups the user can be a part of).
The useradd command is used to create a new user. Here's how you
can create a user named arjun:
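sudo useradd -m arjun
The -m option creates the user's home directory, /home/arjun.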
Assigning a Password to the New User
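Use the passwd command:
sudo passwd arjun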
When you run this command, the system will prompt you to enter
a new password for arjun:
arjun:x:1001:1001:Arjun User:/home/arjun:/bin/bash
arjun: Username
x: Password placeholder (the actual password is stored
elsewhere in an encrypted format)
1001: User ID (UID)
1001: Group ID (GID)
Arjun User: Full name (optional)
/home/arjun: Home directory
/bin/bash: Default shell
Modifying a User
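For example, to change arjun's login shell with usermod (the target shell here is only an illustration):
sudo usermod -s /bin/zsh arjun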
Here, -s specifies the shell to be used by arjun. You can also change
the user’s home directory, username, or other settings using
usermod.
Deleting a User
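To delete a user together with their home directory:
sudo userdel -r arjun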
5.3: Creating and Managing Groups
Creating a Group
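Use the groupadd command:
sudo groupadd developers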
This creates a new group called developers. You can confirm that
the group was created by checking the /etc/group file:
developers:x:1001:arjun
Adding a User to a Group
To add a user to a group, use the usermod command with the -aG
option. For example, to add arjun to the developers group:
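sudo usermod -aG developers arjun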
To verify that the user has been added to the group, you can use the
groups command:
groups arjun
Output:
Deleting a Group
sudo groupdel developers
To grant a user sudo privileges, you must add the user to the sudo
group. For example, to add arjun to the sudo group:
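sudo usermod -aG sudo arjun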
The sudoers File
The sudoers file controls who can use sudo and what commands
they can run. To edit the sudoers file safely, use the visudo
command:
sudo visudo
This opens the sudoers file for editing in a safe environment. You
can add or modify user permissions here. For example, to allow
arjun to run all commands as root without entering a password,
you would add the following line:
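arjun ALL=(ALL) NOPASSWD: ALL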
This grants arjun full administrative access without the need for a
password when using sudo.
ls -l myfile.txt
Output:
Explanation:
Changing Ownership
To change the owner and group of a file, use the chown command.
For example, to change the ownership of myfile.txt to arjun and
group to developers:
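sudo chown arjun:developers myfile.txt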
This command changes both the file owner and the group.
Conclusion
**************************************************************************************
Chapter 6: File Permissions and Ownership
1. Read (r): Allows the user to open and read the file.
2. Write (w): Allows the user to modify or delete the file.
3. Execute (x): Allows the user to run the file as a program or
script.
Viewing File Permissions
You can view file permissions using the ls -l command. This shows
the details of files, including permissions, owner, group, and other
information.
ls -l myfile.txt
Output:
Breakdown of Permissions:
-rw-r--r--
| | | |
| | | +--> Permissions for Others (r--): Read-only for others
| | +--> Permissions for Group (r--): Read-only for the group
| +--> Permissions for Owner (rw-): Read and write for the owner
+--> File Type (-): Regular file
Owner (arjun): The file’s owner has read (r) and write (w)
permissions, but not execute (-).
Group (developers): The group has only read (r) permission.
Others: Others also have read (r) permission.
6.2: Changing File Permissions
You can change the permissions of files using the chmod (change
mode) command. There are two modes to change permissions:
Symbolic and Numeric.
r for read
w for write
x for execute
u for user (owner)
g for group
o for others
a for all users
chmod a+x myfile.txt
Read = 4
Write = 2
Execute = 1
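For example, 7 for the owner (4+2+1) and 5 for the group and others (4+1):
chmod 755 myfile.txt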
Output:
This gives read, write, and execute permissions to the owner and
read and execute permissions to the group and others.
6.3: Changing File Ownership
You can change the owner and group of a file using the chown
command. The syntax is:
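chown new_owner:new_group filename
For example, to make arjun the owner and developers the group of myfile.txt:
sudo chown arjun:developers myfile.txt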
The setuid permission allows a program to run with the
permissions of the file owner, not the user executing it.
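It is set with the u+s mode; for example (the program path is only an illustration):
sudo chmod u+s /usr/local/bin/myprogram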
The output will show the setuid bit as an s in the owner's execute
position:
The output will show the setgid bit as an s in the group’s execute
position:
Sticky Bit
The sticky bit is used on directories to ensure that only the owner
of a file within the directory can delete or rename it, even if others
have write access.
To set the sticky bit on a directory:
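For example (the directory name is only an illustration):
sudo chmod +t /srv/shared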
The output will show the sticky bit as t at the end of the directory's
permissions:
ls -ld /home/arjun
Output:
d: Directory type.
rwx: Owner has read, write, and execute permissions.
r-x: Group has read and execute permissions.
r-x: Others have read and execute permissions.
This gives the owner and group full permissions (read, write,
execute) while others have only read and execute permissions.
Conclusion
How to manage file ownership with chown.
Special permissions: setuid, setgid, and the sticky bit.
Directory permissions and how to manage them.
**************************************************************************************
Chapter 7: Package Management
Handles Dependencies: Many programs rely on other
software to work correctly. Package managers automatically
install the required dependencies when you install a program.
Step 1: Update Your Package List
Before installing any new software, it’s important to update the list
of available packages and their versions. This ensures you’re
getting the latest updates.
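sudo apt update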
Explanation:
Explanation:
Example: Installing Metasploit in Kali Linux
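On Kali, the package is typically named metasploit-framework:
sudo apt install metasploit-framework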
Sometimes, you may not know the exact name of the software you
want to install. You can search for it using APT.
Explanation:
Example: Searching for Nmap (Security Tool)
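For example:
apt search nmap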
This will list all available versions of Nmap, along with details of
the package.
Explanation:
7.7: Upgrading Software
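On APT-based systems, upgrading typically combines a refresh of the package list with the upgrade itself:
sudo apt update && sudo apt upgrade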
Explanation:
To check if Leafpad is installed:
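One way is to query the package database (shown here for APT/dpkg-based systems):
dpkg -l | grep leafpad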
Explanation:
If Leafpad is installed, you will see it listed with details about the
version.
Snap and Flatpak are package managers that work across multiple
Linux distributions, including Kali Linux, Ubuntu, Fedora, and
others. They allow you to install software that is packaged as a self-
contained bundle, making it easy to run the software regardless of
your distribution.
1. Install snapd:
sudo apt install snapd # For Ubuntu and Debian-based systems
sudo dnf install snapd # For Fedora and Red Hat-based systems
2. Then, you can install software. For example, to install Spotify, use:
sudo snap install spotify
1. Install Flatpak:
sudo apt install flatpak # For Ubuntu/Debian-based systems
sudo dnf install flatpak # For Fedora-based systems
2. Install software with Flatpak. For example, to install Steam:
flatpak install flathub com.valvesoftware.Steam
Conclusion
Key takeaways:
You can search for, install, remove, and upgrade packages
using simple commands.
By mastering these tools, you can easily manage your Linux system
and keep it up to date, whether you are using Kali Linux for
security purposes, Ubuntu for general use, or any other Linux
distribution.
**************************************************************************************
Chapter 8: System Administration
Users are individuals who can access a Linux system, and the
system administrator controls their access. In Linux, a user can
have various roles and permissions that govern what they can and
cannot do.
1. Creating a New User
When you create a new user, the system creates a home directory
for them, assigns a default shell (usually Bash), and sets up basic
configurations.
Command:
After creating the user, you’ll want to set a password for them:
groupadd: Command to create a new group.
developers: The name of the group being created.
Delete a User:
sudo userdel -r arjun
Delete a Group:
Command:
ls -l myfile.txt
Example Output:
Explanation:
chmod +x myfile.txt
Example 3: Set Permissions with Numeric Values
Read (r) = 4
Write (w) = 2
Execute (x) = 1
Note: Process Management is briefly discussed in Chapter 4: Basic
Linux Commands, but here we will cover more administrative-
related process management tasks.
ps aux
Example output:
USER    PID %CPU %MEM    VSZ  RSS TTY STAT START  TIME COMMAND
arjun  1523  2.0  1.5 102344 8764 ?   S    12:05  0:00 gnome-shell
2. Killing Processes
kill 1234
kill -9 1234
Note: Monitoring tools like top, df, and free have been discussed in
Chapter 6: Managing Resources in Linux. In this chapter, we’ll
focus more on how they relate to system administration tasks.
1. Memory Usage
free -h
Example output:
2. CPU Usage
top
The output will display processes, CPU usage, memory usage, and
more. Press q to quit.
Example output:
top - 12:10:45 up 1 day, 2:34, 3 users, load average: 0.25, 0.18, 0.16
Tasks: 184 total, 1 running, 183 sleeping, 0 stopped, 0 zombie
%Cpu(s): 4.0 us, 1.0 sy, 0.0 ni, 94.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
3. Disk Usage
df -h
The output will show the file systems, their sizes, usage, and
available space.
Example:
du -sh /home/arjun
Conclusion
**************************************************************************************
Chapter 9: Networking in Linux
1. IP Addressing
Example:
inet 192.168.1.5/24
192.168.1.5/24: The IP address 192.168.1.5 with subnet mask
/24 (255.255.255.0).
2. Subnet Mask
A subnet mask tells your computer which part of the IP address is
used for the network and which part can be assigned to hosts. The
subnet mask helps in splitting IP ranges into sub-networks.
For example:
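With a /24 mask (255.255.255.0), the first 24 bits identify the network and the remaining 8 bits identify hosts, so the network 192.168.1.0/24 leaves the addresses 192.168.1.1 through 192.168.1.254 available for devices.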
3. Gateway
Example of Gateway:
Gateway: 192.168.1.1
DNS is used to resolve domain names (like google.com) into IP
addresses (like 8.8.8.8). It allows you to use human-readable
addresses instead of remembering numeric IP addresses.
nameserver 8.8.8.8
nameserver 8.8.4.4
ip a
Example Output:
inet 192.168.1.5/24: This shows your system's IP address
192.168.1.5 with a subnet mask of /24.
enp0s3: This is the network interface name. It could be
different depending on your system (eth0, wlan0, etc.).
ifconfig
Example Output:
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.5  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::a00:27ff:fe9a:cbb6  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:9a:cb:b6  txqueuelen 1000  (Ethernet)
ip route
Example Output:
cat /etc/resolv.conf
Example Output:
nameserver 8.8.8.8
nameserver 8.8.4.4
Example Configuration:
auto enp0s3
iface enp0s3 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.1
9.4: Networking Tools and Commands
1. Ping Command
ping google.com
Example Output:
2. Traceroute Command
traceroute google.com
Example Output:
traceroute to google.com (172.217.3.110), 30 hops max, 60 byte packets
 1  192.168.1.1 (192.168.1.1)  1.036 ms  0.734 ms  0.598 ms
 2  10.10.10.1 (10.10.10.1)  10.379 ms  10.218 ms  10.014 ms
 3  172.217.3.110 (172.217.3.110)  20.456 ms  19.678 ms  19.508 ms
Each hop shows the path the packet took from your computer
to Google's server.
3. Netstat Command
netstat -tuln
Example Output:
4. Ifconfig Command
The ifconfig command is deprecated but still useful on many
systems for network interface configuration and monitoring.
ifconfig
Example Output:
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.5  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::a00:27ff:fe9a:cbb6  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:9a:cb:b6  txqueuelen 1000  (Ethernet)
When you face network issues, you can use several tools to
diagnose the problem:
1. Check your IP address and interfaces:
ip a
2. Check the routing table:
ip route
3. Test connectivity to an external host:
ping google.com
4. Check your DNS settings:
cat /etc/resolv.conf
Conclusion
*****************************************************************************************
Chapter 10: Securing Your Linux System
Explanation:
sudo: This command is run with superuser privileges because
creating a new user requires administrative rights.
useradd: This command adds a new user to the system.
-m: This option tells the system to create a home directory for
the new user (in this case /home/arjun).
arjun: The username of the new user you're creating.
You should set a password for this user by using the passwd
command:
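sudo passwd arjun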
usermod: This command modifies an existing user's
properties.
-aG: The -a flag appends the user to the specified group, and G
specifies the group.
sudo: The group to which the user will be added.
arjun: The name of the user to whom you are granting access.
By adding the user arjun to the sudo group, they can execute
commands with administrative privileges.
Output:
o The first three characters are for the owner, the next
three are for the group, and the final three are for others.
In this example:
Symbolic Notation:
Numeric Notation:
r = 4, w = 2, x = 1
For example, if you want to set the permissions to rwx------ (read,
write, and execute for the owner only), you would use:
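chmod 700 myfile.txt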
Explanation of 700:
To update your system, you can use apt (for Ubuntu, Kali Linux,
and Debian). Run the following commands to ensure your system
is up to date:
sudo apt update
update: This refreshes the list of available packages from the
repository.
sudo apt upgrade
sudo ufw enable
status: Displays the current state of the firewall and the active
rules.
SELinux uses policies to enforce security. On Red Hat, CentOS, and
Fedora-based systems, SELinux is enabled by default.
sestatus
sudo setenforce 1
sudo aa-status
Use the following command to monitor authentication logs:
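sudo tail -f /var/log/auth.log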
tail -f: This command continuously shows the last few lines
of the file.
/var/log/auth.log: This is the log file that stores
authentication events, such as login attempts.
To find failed login attempts in your logs, use the grep command:
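sudo grep "Failed password" /var/log/auth.log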
This filters out all the failed login attempts, helping you spot
potential attacks.
7.2 Incorrect Permissions
Conclusion
**************************************************************************************
Chapter 11: Managing Processes and Services
What is a Process?
Types of Process States:
Managing Processes
Example output:
 2332 pts/0    00:00:00 bash
 2655 pts/0    00:00:00 ps
2. ps aux:
This command shows all processes running on the system, regardless of the terminal session.
ps aux
Example output:
USER    PID %CPU %MEM   VSZ  RSS TTY   STAT START  TIME COMMAND
root      1  0.0  0.2 16408  896 ?     Ss   06:34  0:01 /sbin/init
user1  2675  0.0  0.1 13408  512 pts/0 S+   07:05  0:00 ps aux
To install it:
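sudo apt install htop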
Then run:
htop
Managing Process Lifecycle:
Stopping a Process:
kill 1234
kill -9 1234
killall firefox
1. Starting a Service:
2. Stopping a Service:
3. Restarting a Service:
Sometimes you need to restart a service to apply changes,
such as after modifying configuration files. Use:
sudo systemctl restart apache2
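Starting and stopping follow the same pattern; for example, for the same apache2 service:
sudo systemctl start apache2
sudo systemctl stop apache2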
You can configure whether a service should start automatically
when the system boots using systemctl enable and systemctl
disable:
Enabling a Service:
Disabling a Service:
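For example, again using apache2 as the service:
sudo systemctl enable apache2
sudo systemctl disable apache2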
Linux uses systemd's journal to store logs for system services and
other important events. You can view these logs with journalctl.
sudo journalctl -u apache2
You can filter logs by time. For example, to view logs from
today:
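sudo journalctl -u apache2 --since today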
crontab -l
crontab -e
* * * * * /path/to/command
Each * represents a specific time field (minute, hour, day,
month, and day of the week). For example, to run a script
every day at 2 AM:
0 2 * * * /path/to/backup.sh
1. Suspending a Process:
jobs
bg %1
System Boot Targets and Dependencies
Conclusion
**************************************************************************************
Chapter 12: Kernel and Module Management
1. Central Control Over Hardware: The kernel manages
communication with hardware components, including your
CPU, memory, storage devices, and network adapters. All
hardware devices interact with the kernel, and it makes sure
that these components can function together. For example,
when you plug in a USB device, the kernel ensures that the
system can recognize and interact with the hardware without
any issues.
kernel is managing processes and permissions. By
understanding kernel security mechanisms, you can better
secure your system from threats like malware or
unauthorized access.
Without a properly updated kernel, your system may not
recognize or interact with newer hardware. Knowing how to
check, update, and install kernel modules allows you to keep
your system compatible with new devices and peripherals.
drivers, and troubleshooting hardware issues—require knowledge
of the kernel. Additionally, you’ll have the confidence to interact
with and modify the system at a lower level, which is essential for
systems administration or any role that requires configuring
Linux servers or desktop environments.
o Example: When you open an application like a web
browser, the kernel creates a process for that application.
It then schedules this process to run on the CPU while
managing how it shares resources with other running
processes like your text editor or file manager.
2. Memory Management: The kernel is responsible for
managing your system’s memory. It allocates memory to
running processes and ensures that each process has enough
memory to function. It also manages virtual memory,
swapping data between RAM and disk when necessary to
prevent memory overloads.
o Example: When you open multiple programs, the kernel
allocates a portion of your RAM to each program. If you
run out of RAM, the kernel uses swap space on your hard
disk as temporary memory.
3. Device Management: The kernel is responsible for interacting
with all hardware devices, including hard drives, network
cards, and input devices like keyboards and mice. When you
plug in a new device (like a USB drive), the kernel loads the
necessary drivers to allow your system to interact with the
device.
o Example: When you plug a printer into your computer,
the kernel loads the printer driver to make the printer
functional. It also handles how the printer receives data
from applications, ensuring that documents are printed
properly.
4. Security and Permissions: The kernel enforces system
security by managing permissions. It controls access to files,
directories, and system resources based on user roles and file
permissions. It prevents unauthorized users from accessing
or modifying sensitive data.
o Example: When you try to open a file, the kernel checks
your user permissions to ensure you have the right to
read, write, or execute the file. If you don’t have the right
permissions, the kernel denies access.
Kernel Space: The region where the kernel runs with full
access to system resources.
User Space: The region where user applications and programs
run. These applications must request the kernel for system
resources like memory or CPU time.
Why Kernel Updates are Important
uname -r
1. Update your package list:
sudo apt update
2. Upgrade installed packages, including the kernel:
sudo apt upgrade
3. Reboot your system to apply the update:
sudo reboot
sudo modprobe -r vfat
This command removes the module and frees up system
resources.
12.5: Troubleshooting Kernel-Related Issues
Kernel issues often manifest as system instability or hardware
malfunctions. Use tools like dmesg to view kernel logs, which
contain detailed error messages and diagnostic information.
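For example, to show only warning and error messages from the kernel ring buffer (option support can vary slightly between util-linux versions):
sudo dmesg --level=err,warn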
Conclusion
In this chapter, you’ve learned about the vital role the kernel plays
in your Linux system. Understanding the kernel is essential for
maintaining a stable, secure, and efficient system. By learning how
to check and update your kernel, load/unload kernel modules, and
troubleshoot kernel-related issues, you're equipping yourself with
the skills to handle system problems, optimize your environment,
and improve performance. Whether you're using Ubuntu, Kali
Linux, or another distribution, mastering kernel management is
key to becoming a proficient Linux user or administrator.
**************************************************************************************
Chapter 13: Shell Scripting and Automation
ls: Lists the files in the current directory.
Example:
ls
cd: Changes the directory.
Example:
cd /home/arjun/Documents
rm: Removes files.
Example:
rm file.txt
Using Variables:
You can also get input from the user using the read command:
13.3: Control Structures in Bash
Conditionals:
Example of an If Statement:
if [ "$1" -eq 1 ]; then
echo "Arjun, you entered 1"
else
echo "Arjun, you didn’t enter 1"
fi
Loops:
Loops are used when you need to repeat actions. There are different
types of loops in bash.
For Loop: Used to repeat an action a specific number of times.
Example:
for i in {1..5}; do
echo "Arjun, this is loop number $i"
done
While Loop: Repeats an action as long as a certain condition is
true.
Example:
count=1
while [ $count -le 5 ]; do
echo "Arjun, count is $count"
((count++))
done
Functions:
Simple Function:
greet() {
echo "Hello, $1"
}
greet "Arjun"
greet "Alice"
Now that you know the basics of shell scripting, let's look at how to
automate tasks. One way to do this is by using cron jobs, which are
used to schedule tasks to run automatically at specific times.
How to Create a Cron Job: To add a cron job, use the crontab
command:
crontab -e
This will open the cron editor, where you can add your
scheduled tasks.
o First *: Minute (0-59)
o Second *: Hour (0-23)
o Third *: Day of the month (1-31)
o Fourth *: Month (1-12)
o Fifth *: Day of the week (0-7, where 0 or 7 means Sunday)
0 0 * * * /home/arjun/backup.sh
1. Create the Script: Open the terminal and create a new file for the script:
nano backup.sh
2. Write the Script:
#!/bin/bash
# This script creates a backup of Arjun's Documents folder

# Define the source and destination paths
SOURCE_DIR="/home/arjun/Documents"
BACKUP_DIR="/home/arjun/backups"
# Get the current date to append to the backup folder
DATE=$(date +%Y-%m-%d)
# Create a new backup directory with the current date
mkdir -p "$BACKUP_DIR/backup-$DATE"
# Copy the contents of the Documents folder to the backup folder
cp -r "$SOURCE_DIR"/* "$BACKUP_DIR/backup-$DATE/"
# Print a message indicating the backup is complete
echo "Arjun, backup completed successfully for $DATE!"
3. Save and Exit: Save the file and exit the editor (in nano, press Ctrl + X, then Y to confirm, and Enter to save).
4. Make the Script Executable:
chmod +x backup.sh
5. Test the Script: To test the script manually, run:
./backup.sh
Conclusion
In this chapter, you learned how to create and use shell scripts to
automate tasks in Linux. You also discovered basic shell
commands, how to use variables, control structures like if
statements and loops, and how to write functions.
By using cron jobs, you can schedule your shell scripts to run
automatically at specific times, like daily backups, making your
system management tasks much easier. The example backup script
you created will save time and effort by automating a repetitive
task.
As you get more comfortable with shell scripting, you can create
more complex scripts to automate various tasks, improve system
administration, and optimize your workflow. Practice with the
examples in this chapter, and start writing your own shell scripts
to automate your daily tasks.
**************************************************************************************
Chapter 14: Security and Hardening
Permissions: Each file or directory has three types of
permissions:
o Read (r): Grants the ability to open and read a file.
o Write (w): Grants the ability to modify a file.
o Execute (x): Grants the ability to run a file as a program
or access a directory.
Example:
A file may have permissions like -rw-r--r--, where:
Modifying Permissions:
Root User: The root account has complete access to the
system. It's essential to avoid using the root account for daily
activities.
Regular Users: Regular users should only be granted the
permissions they absolutely need for their work.
For example, a user working with text files should not have
administrative (root) access to the system.
SSH (Secure Shell) is the most common and secure method for
accessing a remote Linux system. It provides encryption for both
the communication channel and authentication.
To enable SSH access on a Linux system, you first need to install the
OpenSSH server package. On Debian-based systems like Ubuntu:
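sudo apt install openssh-server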
3. Copy Public Key to Server: To copy the public key to the server and allow passwordless login, use:
ssh-copy-id user@hostname
4. Disable Password Authentication: After setting up SSH keys, it's recommended to disable password-based authentication for better security. In the /etc/ssh/sshd_config file, change:
PasswordAuthentication no
5. Restart SSH Service:
sudo systemctl restart ssh
Firewalls are vital in controlling which data can enter or leave the
system. Linux offers several firewall management tools, including
iptables, ufw, and firewalld.
Enable UFW:
sudo ufw enable
Allow SSH Traffic:
sudo ufw allow ssh
Check the Status of UFW:
sudo ufw status
Start firewalld:
sudo systemctl start firewalld
Allow SSH:
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload
SELinux uses contexts for files, processes, and ports. Each object is
labeled with a security context that defines what actions are
allowed.
14.4.2: AppArmor
AppArmor uses profiles to restrict the capabilities of programs.
When a program is launched, AppArmor checks its profile and
ensures that the program only performs the actions allowed by its
profile.
Set Password Expiration: To enforce users to change
passwords periodically, you can set the password expiration
policy:
sudo chage -M 30 username
Minimum Length and Complexity:
You can configure password complexity by editing
/etc/login.defs or /etc/pam.d/common-password files.
Update Packages:
sudo apt update && sudo apt upgrade
Install Fail2ban:
sudo apt install fail2ban
Start Fail2ban:
sudo systemctl start fail2ban
14.5.5: Use Two-Factor Authentication (2FA)
To enable 2FA for SSH, you can use tools like Google
Authenticator.
To implement Two-Factor Authentication (2FA) for SSH in Linux,
you will need to install and configure a 2FA tool like Google
Authenticator. This tool adds an extra layer of security by
requiring not only the user’s password but also a one-time code
generated by an app on your phone (e.g., Google Authenticator or
Authy).
4. Install the libpam-google-authenticator Package: Google
Authenticator works with PAM (Pluggable Authentication
Modules), so you need to install the libpam-google-
authenticator package.
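sudo apt install libpam-google-authenticator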
This will install the necessary PAM module to enable 2FA for
SSH.
Each user who wishes to use 2FA for SSH must configure Google
Authenticator individually.
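To start the per-user setup, run the google-authenticator command as that user:
google-authenticator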
3. Follow the Setup Prompts: During the setup, you will be asked
several questions. Here’s a typical sequence of prompts and
their explanations:
Do you want me to update your
"~/.google_authenticator" file?
Type y (Yes) to allow the setup to store the
configuration in a file.
Do you want to disallow multiple uses of the same
authentication token?
Type y (Yes). This makes sure that each code is only
used once.
Do you want to enable time-based tokens?
Type y (Yes). This is the default for Google
Authenticator, and it will generate time-sensitive
codes.
Do you want me to suggest a "disposable" one-time
password (OTP) length?
You can type y or n. It’s typically safe to accept the
default length of 6 digits.
4. Secret Key and QR Code: After answering the prompts, the
tool will display a secret key and a QR code. The QR code can
be scanned using a 2FA app on your mobile device, such as
Google Authenticator or Authy.
Scan the QR code: Open Google Authenticator on your
phone (or any other 2FA app) and scan the QR code
shown on the terminal. This will set up your account on
the app.
Alternatively, you can manually enter the secret key into
the app if scanning the QR code is not possible.
5. Backup Codes: Google Authenticator will also provide a list of
backup codes. These are one-time-use codes that you can use
to log in if you lose access to your 2FA device (phone). Save
them in a secure place.
1. Edit the SSH Configuration File: You need to modify the SSH
configuration file to require both your password and the
Google Authenticator code for login.
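In /etc/ssh/sshd_config, the relevant directive is typically:
ChallengeResponseAuthentication yes
(On newer OpenSSH releases the equivalent option is KbdInteractiveAuthentication yes.)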
This enables the challenge-response authentication method,
which is required for 2FA.
After modifying the SSH and PAM configurations, restart the SSH
service to apply the changes:
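sudo systemctl restart ssh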
Now, SSH will require both your regular password (the first factor)
and a time-based one-time password (TOTP) generated by Google
Authenticator (the second factor).
1. SSH Login Test: Now, when you attempt to log in via SSH, you
will be prompted for both your password and the 6-digit code
from your Google Authenticator app.
Example:
ssh username@hostname
o If you lose access to your 2FA device, use the backup
codes you generated during setup.
Additional Tips
Conclusion
**************************************************************************************
Chapter 15: Backup and Recovery
Backup and recovery processes are critical for ensuring that your
data is protected and recoverable in case of any unexpected
disasters. In this chapter, we’ll explore various backup strategies,
tools, and methods, and understand how to effectively recover
data.
Types of Backups:
1. Full Backup:
o Description: A full backup involves copying all data
(files, directories, etc.) from the source to the backup
location.
o Pros: The most comprehensive form of backup because it
captures everything.
o Cons: Time-consuming and requires a lot of storage
space.
o Use Case: Use full backups for critical data or during the
initial setup of a backup routine.
o Example: If you back up your home directory
(/home/user/), a full backup will copy everything in that
directory to the backup location.
2. Incremental Backup:
o Description: An incremental backup only backs up data
that has changed since the last backup (full or
incremental). This saves time and storage.
o Pros: Faster, more efficient, and requires less storage.
o Cons: Restoring data from incremental backups can be
time-consuming, as you may need to restore the full
backup and then all incremental backups.
o Use Case: Use incremental backups to back up only
newly created or modified files after a full backup.
o Example: If you did a full backup yesterday, today’s
incremental backup will only back up files that were
modified or added since the full backup.
3. Differential Backup:
o Description: A differential backup backs up all the data
that has changed since the last full backup. Unlike
incremental backups, differential backups don’t depend
on other backups and are usually larger than
incremental ones but smaller than full backups.
o Pros: Faster to restore compared to incremental backups.
o Cons: Takes more storage space than incremental
backups.
o Use Case: Use differential backups when you need a
quicker recovery process, as you can restore the full
backup and only the latest differential backup.
o Example: If you did a full backup last week, the
differential backup will capture everything modified or
added since that full backup.
4. Snapshot Backup:
o Description: A snapshot is a copy of the filesystem at a
specific point in time. It captures the state of the system,
allowing you to restore the system to that exact moment.
o Pros: Fast and efficient for large-scale systems, and it
doesn't require copying data.
o Cons: Requires a filesystem that supports snapshots (e.g.,
LVM or ZFS).
o Use Case: Used for environments with high-availability
needs, where minimal downtime is essential.
o Example: On a system with LVM, you can create a
snapshot of the current filesystem, and the snapshot will
contain a consistent state of the system.
1. rsync:
o Description: rsync is one of the most popular tools for
incremental backups. It copies files and directories while
ensuring that only the changes (differences) are
transferred, which makes it efficient for repeated
backups.
o Command:
o rsync -av /source/directory /backup/directory
o Options:
-a: Archive mode, preserves file permissions,
symbolic links, etc.
-v: Verbose mode, shows what’s being copied.
o Example:
Backing up a directory using rsync will only copy
the files that have changed or are new since the last
backup.
2. tar:
o Description: The tar (tape archive) command is used to create compressed archive files from directories or files. It's commonly used for creating full backups.
o Command:
o tar -czvf /backup/backup.tar.gz /source/directory
o Options:
-c: Create a new archive.
-z: Compress the archive using gzip.
-v: Verbose mode, shows the files being archived.
-f: Specify the file name.
o Example:
tar is often used to create full backups of directories.
For example, the /home/user/ directory can be
archived and compressed into a tarball.
3. dd:
o Description: dd is a powerful tool that can be used for
low-level backups, such as creating an exact copy (or
clone) of a disk or partition.
o Command:
o dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
o Options:
if: Input file (the source disk or partition).
of: Output file (the destination disk or file).
bs: Block size, the amount of data dd reads and
writes at a time.
conv=noerror,sync: Ensure no errors interrupt the
process and synchronize the data.
o Example:
Create a byte-by-byte copy of one disk to another.
This is often used for creating exact system clones
or backups of an entire disk.
4. Backup Software:
o Description: There are other more advanced backup
solutions, such as Bacula, Amanda, and Clonezilla,
which are designed for enterprise-level backups and can
automate backups across multiple systems.
o Use Case: These tools are ideal for larger-scale
environments where you need centralized backup
management, scheduling, and monitoring across
multiple machines.
1. Edit the Crontab: To schedule a backup, you must edit your crontab file. This is where you define the schedule and commands for automated tasks.
crontab -e
2. Cron Syntax: The cron syntax consists of five fields followed by the command to run:
* * * * * command
| | | | |
| | | | +---- Day of the week (0 - 7) (Sunday=0 or 7)
| | | +------ Month (1 - 12)
| | +-------- Day of the month (1 - 31)
| +---------- Hour (0 - 23)
+------------ Minute (0 - 59)
3. Example: Automating Daily Backups: To schedule a backup every day at 2 AM using rsync, add this line to your crontab:
0 2 * * * rsync -av /source/directory /backup/directory
4. Verifying Cron Jobs: To view the current cron jobs for your user, run:
crontab -l
This will copy the backup files back to the original location.
A disaster recovery plan is crucial in case your system crashes, a
disk fails, or data is corrupted. A good DRP ensures you have all the
necessary steps to quickly restore systems and minimize
downtime.
recovery.
1. RAID 0 (Striping):
o Description: Distributes data across multiple disks for
faster performance but offers no data redundancy. If one
disk fails, all data is lost.
o Use Case: Performance-critical applications with low
data risk.
2. RAID 1 (Mirroring):
o Description: Mirrors data across two disks. If one disk
fails, the data remains available on the other disk.
o Use Case: Systems requiring data redundancy (e.g., file
servers).
3. RAID 5 (Striping with Parity):
o Description: Distributes data and parity (error
correction) across multiple disks. It offers a balance
between performance and redundancy.
o Use Case: Common in environments that require high
availability with good performance.
4. RAID 10 (1+0):
o Description: Combines RAID 1 and RAID 0 for both
redundancy and performance. It requires at least four
disks.
o Use Case: High-performance and high-availability
systems.
15.5: Conclusion
**************************************************************************************
Chapter 16: Virtualization and Containers
What is Virtualization?
Each VM has its own full operating system (OS), which behaves like
a complete physical computer. Virtualization can be used to:
Isolate applications to ensure that they don’t interfere with
each other.
Provide resource allocation and management for large-scale
deployments.
Types of Virtualization
1. KVM (Kernel-based Virtual Machine):
o KVM is an open-source, Linux-based hypervisor that
allows you to run virtual machines on Linux hosts. KVM
is built into the Linux kernel, and it provides full
hardware virtualization.
o KVM can support both Linux and Windows guests and is
highly performant because it leverages hardware-
assisted virtualization.
2. Xen:
o Xen is another open-source virtualization platform that
provides both full and para-virtualization. Xen is
commonly used for large-scale server environments,
such as cloud hosting.
o It is known for its scalability and high-performance
capabilities.
3. VirtualBox:
o VirtualBox is a popular virtualization tool for desktops.
It's ideal for testing, development, and creating isolated
environments on personal machines.
o It's cross-platform and supports Linux, Windows, and
macOS guests.
Why Containers?
Portability: Since containers bundle an application with all its
dependencies (libraries, binaries, etc.), they can be run
anywhere — whether on a developer’s laptop, a test
environment, or a cloud-based server.
Scalability: Containers are designed to scale easily. Multiple
instances of a container can be started or stopped quickly
based on demand.
o docker rm <container_id>: Remove a container
Why Kubernetes?
Conclusion
Key Takeaways:
Additional Resources:
KVM Documentation: https://www.linux-kvm.org/
Minikube Documentation:
https://minikube.sigs.k8s.io/docs/
VirtualBox Documentation:
https://www.virtualbox.org/manual/
**************************************************************************************
Chapter 17: High Availability and Clustering
competitors. With HA, you ensure constant availability,
building trust and satisfaction.
Before jumping into the tools and technologies used for HA, it’s
important to understand the basic concepts that make HA work.
1. Failover Clustering
2. Load Balancing
Why use it? Load balancing makes sure that your systems can
handle a larger volume of requests, improving the user
experience by speeding up the response time. It also ensures
that if one server becomes unavailable, the traffic will be
rerouted to the remaining servers without causing any
disruption.
3. Heartbeat Mechanism
Now let’s look at the actual tools and technologies that you can use
to create a High Availability setup.
1. Pacemaker
2. Corosync
1. Start the Services: On both nodes, you'll start the services for Pacemaker and Corosync:
sudo systemctl start pacemaker
sudo systemctl start corosync
2. Set Up the Cluster: You'll create a cluster by telling Pacemaker about the nodes:
sudo pcs cluster setup --name mycluster node1 node2
sudo pcs cluster start --all
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
}
Once you have your cluster set up, it's essential to test and monitor
it regularly.
Testing Failover
Test the failover process by shutting down one of the nodes and
ensuring the services are transferred seamlessly to the other node.
sudo pcs status
This command shows the current status of all the nodes and
services in the cluster, helping you identify any issues.
Conclusion
In summary, learning about High Availability and Clustering is a
crucial skill for anyone working with Linux systems, especially for
businesses that need to ensure their applications and services are
available around the clock.
Additional Resources:
Pacemaker Documentation:
https://clusterlabs.org/pacemaker/
Corosync Documentation:
https://corosync.github.io/corosync/
https://wiki.linuxfoundation.org/highavailability/start
**************************************************************************************
Chapter 18: Networking and Advanced Troubleshooting
Remote Access: Provides employees or remote workers
secure access to internal systems.
Privacy: Hides your IP address and encrypts your traffic.
port 1194
proto udp
dev tun
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/server.crt
key /etc/openvpn/keys/server.key
dh /etc/openvpn/keys/dh.pem
server 10.8.0.0 255.255.255.0
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"
keepalive 10 120
Advantages of OpenVPN:
2. DNS (Domain Name System)
Bind9 is one of the most widely used DNS server software on Linux.
Here's how you can set it up:
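On Debian/Ubuntu the server package is called bind9:
sudo apt install bind9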
Advantages of Bind9:
Widely Used: It is one of the most common DNS servers,
making it easy to find support and tutorials.
Highly Configurable: You can configure it for both
authoritative DNS and recursive DNS purposes.
Scalable: Suitable for large-scale DNS management.
When network problems occur, you need the right tools to identify
and resolve the issue. Below are the key tools that will help you
troubleshoot network issues in Linux.
1. Ping
ping google.com
The output shows the round-trip time for each packet sent to google.com. If packets time out or are lost, there may be a network issue, or the target may be down or blocking ICMP.
2. Traceroute
traceroute google.com
This will show the intermediate routers between your system and
google.com, along with the time it takes to reach each hop.
3. Netstat
Why Use Netstat?
netstat -tuln
Here -t and -u select TCP and UDP sockets, -l limits the output to listening sockets, and -n prints numeric addresses and ports instead of resolving names. On modern systems, ss -tuln from the iproute2 suite gives the same view and is the recommended replacement for netstat.
4. tcpdump
Filtering: You can filter specific traffic (e.g., only HTTP or DNS
traffic).
To capture traffic for a specific port (e.g., HTTP traffic on port 80):
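For example, assuming the interface is eth0:

sudo tcpdump -i eth0 port 80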
Advantages of tcpdump:
Highly Configurable: You can configure Apache for complex
requirements, such as hosting multiple websites on the same
server.
Install Apache:
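The package name differs by distribution; as a quick sketch:

sudo apt install apache2    # Debian/Ubuntu
sudo dnf install httpd      # Fedora/RHEL/CentOS Stream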
Configuring Apache:
<VirtualHost *:80>
DocumentRoot /var/www/html
ServerName www.example.com
</VirtualHost>
Troubleshooting Apache:
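As a starting point (service and log names vary by distribution), the following checks cover the most common problems:

# Validate the configuration syntax before reloading
sudo apachectl configtest

# Check whether the service is running and why it may have failed
sudo systemctl status apache2    # httpd on RHEL-family systems

# Watch the error log for details
sudo tail -f /var/log/apache2/error.log    # /var/log/httpd/error_log on RHEL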
Conclusion
Additional Resources:
Bind9 Documentation: https://bind9.readthedocs.io/en/latest/
Apache Documentation: https://httpd.apache.org/docs/
tcpdump Documentation:
https://www.tcpdump.org/manpages/tcpdump.1.html
**************************************************************************************
Chapter 19: Linux for Cloud Services
need for local installations or maintenance. Examples include
Google Workspace, Microsoft 365, and Salesforce.
To get started:
o Configure networking, security groups (open port 22 for
SSH access).
3. Create a Key Pair:
o AWS requires an SSH key to securely access your EC2
instance. Create a key pair and download the private key
file (.pem).
4. Connect to the Instance:
o Once the instance is running, you can connect to it using
SSH:
o ssh -i /path/to/your-key.pem ec2-user@<public-ip>
o Select the VM size (e.g., B1s for free-tier).
o Configure networking, select a VNet, and set up firewall
rules.
3. SSH Key Pair:
o Create a new SSH key pair for connecting to the VM, or
use an existing one.
4. Connect via SSH:
o Once the VM is deployed, you can connect via SSH:
o ssh azureuser@<public-ip>
o Choose the instance type (e.g., e2-micro for lightweight
workloads).
o Configure the networking settings and create necessary
firewall rules.
3. SSH Key Pair:
o Google Cloud provides an easy option to create SSH keys
directly in the console.
4. Connect via SSH:
o Once the VM is ready, click the SSH button in the Google
Cloud Console to connect.
19.3.1: Networking Configurations
19.3.2: Storage Configurations
19.4: Scaling and Automation Using Cloud Tools
19.4.1: Auto-Scaling
o Learn more from the Google Cloud Autoscaler Documentation: https://cloud.google.com/compute/docs/autoscaler
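As a sketch of what auto-scaling looks like in practice (assuming the gcloud CLI is installed and an instance template named web-template already exists), a managed instance group on Google Cloud can be created and given an autoscaling policy like this:

# Create a managed instance group from an existing template
gcloud compute instance-groups managed create web-group \
    --template=web-template --size=2 --zone=us-central1-a

# Scale between 2 and 10 instances based on CPU utilization
gcloud compute instance-groups managed set-autoscaling web-group \
    --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6 --zone=us-central1-a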
o Learn more from the Azure Resource Manager Templates
Documentation.
Conclusion
By leveraging the cloud, you can deploy, manage, and scale Linux-
based virtual machines across various cloud platforms. Each cloud
provider offers its own set of tools and documentation, which you
can explore using the links provided. Whether you're using AWS,
Azure, or Google Cloud, you can take full advantage of their
services to build and manage your infrastructure efficiently.
*************************************************************************************
Chapter 20: Preparing for Certification and Career in Linux
CompTIA Linux+
Red Hat Certified System Administrator (RHCSA)
Linux Professional Institute Certification (LPIC-1)
Certified Kubernetes Administrator (CKA)
AWS Certified Solutions Architect – Associate
Google Cloud Professional Cloud Architect
For more details on these certifications, refer to the links provided
earlier in the chapter.
Role Description:
2. DevOps Engineer
Role Description:
3. Cloud Engineer
Role Description:
knowledge of Linux is crucial for managing cloud instances,
scaling applications, and handling cloud automation.
Role Description:
5. Linux Consultant
Salary in USD: $90,000 - $130,000 per year
Salary in INR: ₹67,50,000 - ₹97,50,000 per year
Role Description:
Role Description:
How to Get Started:
After you have completed the book, you will be well-prepared for a
variety of entry-level, mid-level, and even advanced roles in the
Linux ecosystem. Here's a step-by-step guide on how to progress in
your career:
To start, you will have the fundamental skills required for entry-level positions such as:
o Where to Apply: IT support firms, large organizations
requiring dedicated Linux support, or remote support
positions.
If you’ve had 1-3 years of hands-on experience, you can aim for
intermediate roles such as:
DevOps Engineer
o Responsibilities include automating deployments,
managing CI/CD pipelines, and monitoring system
performance. Knowledge of tools like Docker,
Kubernetes, and Jenkins is essential.
o Where to Apply: Tech companies, cloud providers, or
software development companies.
Cloud Engineer (Linux Focus)
o Responsibilities include managing cloud infrastructure
and services, particularly in environments like AWS,
Azure, or Google Cloud.
o Where to Apply: Cloud service providers, tech firms that
use cloud technologies, or consulting agencies.
Linux Security Engineer
o Responsibilities include configuring firewalls,
preventing unauthorized access, and securing systems
from threats.
o Where to Apply: Large enterprises, government
organizations, security consulting firms.
Linux Consultant
o Responsibilities include advising organizations on best
practices for Linux environments, implementing
solutions, and optimizing performance.
o Where to Apply: Consulting firms, independent
freelance work, or businesses with large-scale Linux
infrastructures.
networking opportunities with industry professionals. You
can learn from experts and discover new job openings.
Join Forums and Communities: Stay active in Linux-related
forums and mailing lists like Stack Exchange, Reddit’s
r/linux, and LinuxQuestions. This will not only help you
solve problems but also connect you with potential employers
and colleagues.
Conclusion
**************************************************************************************
Useful Commands Cheat Sheet
1. File Management
ls -l - List files with detailed information, including permissions.
cp -r <source> <destination> - Copy a directory recursively.
df -h - Display disk space usage in a human-readable format.
du -sh <directory> - Show disk usage for a directory.
stat <file> - Show detailed status of a file, including permissions, size, and last modified time.
mount -o loop <iso_file> /mnt - Mount an ISO image file.

2. File Permissions and Ownership
chmod <permissions> <file> - Change file permissions (e.g., chmod 755 filename sets rwx for the owner and rx for the group and others).
chmod +x <file> - Add execute permission to a file.
chmod -x <file> - Remove execute permission from a file.
chown <user>:<group> <file> - Change the owner and group of a file (e.g., chown john:admin file.txt).
chgrp <group> <file> - Change the group ownership of a file (e.g., chgrp admin file.txt).
umask <value> - Set the default permissions for newly created files.

3. Process Management
top - View running processes in real time.
htop - Interactive process viewer (an improved version of top).
ps aux - View a snapshot of all running processes.
ps aux --sort=-%mem - Display processes sorted by memory usage.
ps aux --sort=-%cpu - Display processes sorted by CPU usage.
kill <pid> - Terminate a process by its process ID (PID).
killall <process_name> - Terminate all processes with a specific name.
nice -n <priority> <command> - Run a command with a specific priority.
renice <priority> <pid> - Change the priority of a running process.
uptime - Display how long the system has been running.
free -h - Show memory usage (RAM and swap) in human-readable format.

4. Archiving, Backup, and Disk Cloning
tar -czvf <archive.tar.gz> <directory> - Create a compressed .tar.gz archive of a directory.
tar -xzvf <archive.tar.gz> - Extract files from a .tar.gz archive.
rsync -av <source> <destination> - Sync directories across systems with speed and reliability.
rsync -av --delete <source> <destination> - Sync and delete files that are no longer present in the source.
dd if=<source> of=<destination> - Create a disk image or clone a disk.
dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync - Clone a hard drive from one disk to another.

5. Networking and Remote Transfer
scp <source> <user>@<host>:<destination> - Secure copy a file over SSH.

6. System Administration, Scheduling, and Logs
systemctl start <service> - Start a system service.
shutdown -h now - Shut down the system immediately.
sudo crontab -e - Edit the cron jobs for the root user.
@reboot <command> - Crontab entry that runs a command once at system reboot.
grep <search_term> /var/log/syslog - Search for a specific term in syslog.
grep <pattern> <logfile> - Search a log file for a specific pattern.

7. Package Management
apt-cache search <package> - Search for a package in the repository (Debian/Ubuntu).
yum install <package> - Install a package (e.g., yum install vim).
yum remove <package> - Remove a package from the system.
yum search <package> - Search for a package in the repository.

8. Virtualization
virsh start <vm_name> - Start a virtual machine.
virsh shutdown <vm_name> - Shut down a virtual machine gracefully.
virsh suspend <vm_name> - Suspend a running virtual machine.
virsh resume <vm_name> - Resume a suspended virtual machine.
Glossary: Key Terms and Concepts Explained
Kernel:
The core component of an operating system that manages
hardware, system resources, and communication between
software and hardware. In Linux, the kernel is open-source
and highly configurable.
Shell:
A command-line interface used to interact with the operating
system. Popular shells in Linux include Bash (Bourne Again
Shell), Zsh (Z Shell), and Fish (Friendly Interactive Shell).
Filesystem:
The method and structure used to store and organize files on
a disk. Linux supports multiple filesystem types like ext4,
XFS, and Btrfs.
Package Manager:
A tool that automates installing, upgrading, and removing software and its dependencies. Common examples include apt (Debian/Ubuntu), dnf/yum (Fedora/RHEL), and pacman (Arch).
Permissions:
Rules that define who can read, write, and execute a file or
directory. In Linux, permissions are managed using rwx
(read, write, execute) for the file owner, group, and others.
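For example (script.sh is a hypothetical file):
chmod 754 script.sh    # owner: rwx (7), group: r-x (5), others: r-- (4)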
Process:
An instance of a program running on the system. Each process
has a unique process ID (PID), and they can be managed with
commands like ps, top, and kill.
Daemon:
A background process that runs without interaction from the
user. Examples include cron (for scheduling tasks) and sshd
(the SSH server).
Root User:
The superuser account with unrestricted access to every file and command on the system. Because mistakes made as root affect the whole system, it is safer to work as a regular user and elevate privileges only when needed.
Sudo:
A command that allows users to execute administrative
commands with superuser privileges. It’s safer than logging
in as the root user directly.
SSH (Secure Shell):
A protocol for securely accessing and managing remote
systems over a network. It’s commonly used to log into Linux
servers remotely.
Cron:
A time-based job scheduler in Unix-like operating systems. It
allows users to schedule tasks (such as running backups or
updates) at specified times.
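For example (backup.sh is a hypothetical script), this crontab entry runs it every day at 02:00:
0 2 * * * /usr/local/bin/backup.sh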
Virtualization:
The process of running multiple operating systems on a single
physical machine. KVM (Kernel-based Virtual Machine) is a
popular virtualization technology in Linux.
Containerization:
A lightweight form of virtualization that involves running
applications in isolated environments called containers.
Docker is one of the most widely used container platforms.
Systemd:
A system and service manager for Linux, responsible for
initializing system services during boot and managing them
while the system is running.
Dedication
To Linus Torvalds,
Thank You!
I would like to take a moment to extend my heartfelt thanks to you
for purchasing and reading "Linux Unlocked: From Novice to
Expert." It's been an incredible journey
putting this book together, and I’m honoured that you’ve chosen it
as a resource in your quest to master Linux.
Whether you’re just starting out or are already a seasoned pro, your
commitment to learning and growing is inspiring. I hope the
knowledge and tools shared in this book will help you unlock the
full potential of Linux and empower you to take on new challenges
with confidence.
Your support means the world to me, and I deeply appreciate you
investing your time and trust in this work. As you continue your
journey through the world of Linux, I encourage you to explore,
experiment, and always keep learning. The world of open-source
software is vast, and there’s always something new to discover!
Thank you again for your support, and I wish you the very best on
your Linux adventure!
Sincerely,
S. Rathore
Author of Linux Unlocked: From Novice to Expert