
Inside Linux

 Kernel

o The core of the UNIX system. Loaded at system start up (boot). Memory-resident control
program.
o Manages the entire resources of the system, presenting them to you and every other user
as a coherent system. Provides service to user applications such as device management,
process scheduling, etc.
o Example functions performed by the kernel are:
 Managing the machine's memory and allocating it to each process.
 Scheduling the work done by the CPU so that the work of each user is carried out
as efficiently as is possible.
 Accomplishing the transfer of data from one part of the machine to another
 Interpreting and executing instructions from the shell
 Enforcing file access permissions

o You do not need to know anything about the kernel in order to use a UNIX system. These
details are provided for your information only.

 Shell

o Whenever you login to a Unix system you are placed in a shell program. The shell's
prompt is usually visible at the cursor's position on your screen. To get your work done,
you enter commands at this prompt.
o The shell is a command interpreter; it takes each command and passes it to the operating
system kernel to be acted upon. It then displays the results of this operation on your
screen.
o Several shells are usually available on any UNIX system, each with its own strengths and
weaknesses.
o Different users may use different shells. Initially, your system administrator will supply a
default shell, which can be overridden or changed. The most commonly available shells
are:
 Bourne shell (sh)
 C shell (csh)
 Korn shell (ksh)
 TC Shell (tcsh)
 Bourne Again Shell (bash)
o Each shell also includes its own programming language. Command files, called "shell
scripts" are used to accomplish a series of tasks.

 Utilities

o UNIX provides several hundred utility programs, often referred to as commands.


o Accomplish universal functions
 editing
 file maintenance
 printing
 sorting
 programming support
 online info etc.
o Modular: single functions can be grouped to perform more complex tasks
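
Because the utilities are modular, a short pipeline can combine several of them to do something none of them does alone. A minimal sketch (names.txt is just an example file name):

$ sort names.txt | uniq -c | sort -rn | head -5

Here sort orders the lines, uniq -c counts the duplicates, sort -rn puts the highest counts first, and head -5 keeps only the top five.
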
Operating system

An operating system, or OS, is a software program that enables the computer hardware to communicate
and work with the computer software. Without an operating system, a computer and its software
programs would be useless.
An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into
the computer by a boot program, manages all the other programs in a computer. The other programs are
called applications or application programs. The application programs make use of the operating system
by making requests for services through a defined application program interface (API). In addition, users
can interact directly with the operating system through a user interface such as a command language or a
graphical user interface (GUI).

An operating system performs these services for applications:


 In a multitasking operating system where multiple programs can be running at the same time,
the operating system determines which applications should run in what order and how much
time should be allowed for each application before giving another application a turn.
 It manages the sharing of internal memory among multiple applications.
 It handles input and output to and from attached hardware devices, such as hard disks, printers,
and dial-up ports.
 It sends messages to each application or interactive user (or to a system operator) about the status
of operation and any errors that may have occurred.
 It can offload the management of what are called batch jobs (for example, printing) so that the
initiating application is freed from this work.
 On computers that can provide parallel processing, an operating system can manage how to
divide the program so that it runs on more than one processor at a time.

Examples of computer operating systems


 Redhat – Very popular Linux operating system from Redhat
 Microsoft Windows - PC and IBM compatible operating system. Microsoft Windows is the most
commonly found and used operating system in PCs
 Apple macOS - The operating system for Apple's Macintosh computers.
 Ubuntu Linux - A popular variant of Linux used with PC and IBM compatible computers.
 Google Android - operating system used with Android compatible phones.
 iOS - Operating system used with the Apple iPhone.
Various Parts of an Operating System

UNIX and 'UNIX-like' operating systems (such as Linux) consist of a kernel and some system programs.
There are also some application programs for doing work. The kernel is the heart of the operating system.
In fact, it is often mistakenly considered to be the operating system itself, but it is not. An operating
system provides many more services than a plain kernel.

It keeps track of files on the disk, starts programs and runs them concurrently, assigns memory and other
resources to various processes, receives packets from and sends packets to the network, and so on. The
kernel does very little by itself, but it provides tools with which all services can be built. It also prevents
anyone from accessing the hardware directly, forcing everyone to use the tools it provides. This way the
kernel provides some protection for users from each other. The tools provided by the kernel are used via
system calls.

The system programs use the tools provided by the kernel to implement the various services required
from an operating system. System programs, and all other programs, run `on top of the kernel', in what is
called the user mode. The difference between system and application programs is one of intent:
applications are intended for getting useful things done (or for playing, if it happens to be a game),
whereas system programs are needed to get the system working. A word processor is an application;
mount is a system program. The difference is often somewhat blurry, however, and is important only to
compulsive categorizers.

An operating system can also contain compilers and their corresponding libraries (GCC and the C library
in particular under Linux), although not all programming languages need be part of the operating
system. Documentation, and sometimes even games, can also be part of it.

Important parts of the kernel

The Linux kernel consists of several important parts:

 Process management
 Memory management
 Hardware device drivers
 Filesystem drivers
 Network management
 Various other bits and pieces
The following figure shows some of the more important parts of the Linux kernel

Probably the most important parts of the kernel (nothing else works without them) are memory
management and process management. Memory management takes care of assigning memory areas and
swap space areas to processes, to parts of the kernel, and to the buffer cache. Process management creates
processes, and implements multitasking by switching the active process on the processor.

At the lowest level, the kernel contains a hardware device driver for each kind of hardware it supports.
Since the world is full of different kinds of hardware, the number of hardware device drivers is large.
There are often many otherwise similar pieces of hardware that differ in how they are controlled by
software. The similarities make it possible to have general classes of drivers that support similar
operations; each member of the class has the same interface to the rest of the kernel but differs in what it
needs to do to implement them. For example, all disk drivers look alike to the rest of the kernel, i.e., they
all have operations like `initialize the drive', `read sector N', and `write sector N'.
What is virtual memory?

Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective size of
usable memory grows correspondingly. The kernel will write the contents of a currently unused block of
memory to the hard disk so that the memory can be used for another purpose. When the original contents
are needed again, they are read back into memory. This is all made completely transparent to the user;
programs running under Linux only see the larger amount of memory available and don't notice that
parts of them reside on the disk from time to time. Of course, reading and writing the hard disk is slower
(on the order of a thousand times slower) than using real memory, so the programs don't run as fast. The
part of the hard disk that is used as virtual memory is called the swap space.

Linux can use either a normal file in the filesystem or a separate partition for swap space. A swap
partition is faster, but it is easier to change the size of a swap file (there's no need to repartition the whole
hard disk, and possibly install everything from scratch). When you know how much swap space you
need, you should go for a swap partition, but if you are uncertain, you can use a swap file first, use the
system for a while so that you can get a feel for how much swap you need, and then make a swap
partition when you're confident about its size.

You should also know that Linux allows one to use several swap partitions and/or swap files at the same
time. This means that if you only occasionally need an unusual amount of swap space, you can set up an
extra swap file at such times, instead of keeping the whole amount allocated all the time.
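
As a rough sketch of how such an extra swap file can be set up (the path /swapfile1 and the 512 MB size are only examples, and the commands must be run as root):

# dd if=/dev/zero of=/swapfile1 bs=1M count=512
# chmod 600 /swapfile1
# mkswap /swapfile1
# swapon /swapfile1
# swapon -s
# swapoff /swapfile1

dd creates the file, mkswap writes the swap bookkeeping data into it, swapon starts using it, swapon -s (or free) confirms that the extra space is active, and swapoff removes it again when it is no longer needed.
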

A note on operating system terminology: computer science usually distinguishes between swapping
(writing the whole process out to swap space) and paging (writing only fixed size parts, usually a few
kilobytes, at a time). Paging is usually more efficient, and that's what Linux does, but traditional Linux
terminology talks about swapping anyway.
Linux Structure

Linux is a layered operating system. The innermost layer is the hardware that provides the services for
the OS. The operating system, referred to in Linux as the kernel, interacts directly with the hardware and
provides the services to the user programs. These user programs don’t need to know anything about the
hardware. They just need to know how to interact with the kernel and it’s up to the kernel to provide the
desired service. One of the big appeals of Linux to programmers has been that most well written user
programs are independent of the underlying hardware, making them readily portable to new systems.

User programs interact with the kernel through a set of standard system calls. These system calls request
services to be provided by the kernel. Such services would include accessing a file: open, close, read,
write, link, or execute a file; starting or updating accounting records; changing ownership of a file or
directory; changing to a new directory; creating, suspending, or killing a process; enabling access to
hardware devices; and setting limits on system resources.
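
If the strace utility is installed, you can watch these system calls being made by an ordinary command; a sketch:

$ strace -c ls /tmp

The -c option prints a summary of the system calls the ls process made (openat, read, write, close, and so on) instead of the normal trace output.
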

Linux is a multi-user, multi-tasking operating system. You can have many users logged into a system
simultaneously, each running many programs. It's the kernel's job to keep each process and user separate
and to regulate access to system hardware, including the CPU, memory, disk, and other I/O devices.
Linux vs. Windows

Linux and Windows each have their own set of unique features, advantages, and disadvantages. While it is
difficult to say which one is the better choice in general, it is not as difficult to decide which is the better
choice given your needs.

Note: The operating system that you use on your desktop computer (the vast majority of people use some flavor of
Windows) has absolutely nothing to do with the one that your host needs to serve your web site. Most personal sites
are created with MS FrontPage and even though that is a Microsoft product, it can be hosted perfectly on a
LINUX web server with FrontPage Extensions installed.

Stability:
LINUX systems (strictly speaking, UNIX-like systems; we actually use Linux, but for comparison purposes
they are identical) are hands-down the winner in this category. There are many factors here, but to name
just a couple of big ones: in our experience LINUX handles high server loads better than Windows, and
LINUX machines seldom require reboots while Windows constantly needs them. Servers running on
LINUX enjoy extremely high uptime and high availability/reliability.

Performance:
While there is some debate about which operating system performs better, in our experience both
perform comparably under low-stress conditions; however, LINUX servers under high load (which is
what is important) are superior to Windows.

Scalability:
Web sites usually change over time. They start off small and grow as the needs of the person or
organization running them grow. While both platforms can often adapt to your growing needs, Windows
hosting is more easily made compatible with LINUX-based programming features like PHP and MySQL.
LINUX-based web software is not always 100% compatible with Microsoft technologies like .NET and VB
development. Therefore if you wish to use these, you should choose Windows web hosting.

Compatibility:
Web sites designed and programmed to be served under a LINUX-based web server can easily be hosted
on a Windows server, whereas the reverse is not always true. This makes programming for LINUX the
better choice.

Price:
Servers hosting your web site require operating systems and licenses just like everyone else. Windows
2003 and other related applications like SQL Server each cost a significant amount of money; on the other
hand, Linux is a free operating system to download, install, and operate. This makes Windows hosting
the more expensive platform.
Conclusion:
To sum it up, LINUX-based hosting is more stable, performs faster, and is more compatible than Windows-
based hosting. You only need Windows hosting if you are going to develop in .NET or Visual Basic, or use
some other application that limits your choices.
Logging On To System

 Before you can begin to use the system you will need to have a valid username and a password.
Assignment of usernames and initial passwords is typically handled by the System
Administrator
 Your username, also called a userid, should be unique and should not change. Initial passwords
can be anything and should be changed after your first login.

To login to your account

 Type your username at the login prompt; it is typically the initial of your first name followed by your
last name (e.g. iafzal). LINUX is case sensitive - if your username is kellyk, do not type KellyK. Press the
RETURN or ENTER key after typing your username.
 When the password prompt appears, type in your password. Your password is never displayed
on the screen as a security measure. It also is case sensitive. Press the RETURN or ENTER key
after entering your password.
 What happens after you successfully log in depends upon your system; many LINUX systems
will display a login banner or "message of the day". Make a habit of reading this since it may
contain important information about the system.
 Other LINUX systems will automatically configure your environment and open one or more
windows for you to do work in.
 You should see a prompt - usually a percent sign (%) or dollar sign ($). This is called the "shell
prompt" (the shell is discussed in detail later). It indicates that the system is ready to accept
commands from you.

If your login attempt was unsuccessful, there are several possible reasons:

 You made a typing error while entering your username or password


 The CAPS LOCK key is on and everything is being sent to the system in uppercase letters.
 You have an expired or invalid username or password, or the system security has changed
 There are system problems

Example of user login

login: kellyk
kellyk's Password:
************************************************************
* Welcome to the Linux Systems Training Class
************************************************************
*
* Hello! (Greetings)
*
* System maintenance is scheduled today from 2:00
* until 4:00 pm EST
*
* (Thank you very much)
*
************************************************************

Your Home Directory


 Each user has a unique "home" directory. Your home directory is that part of the file system
reserved for your files.
 After login, you are "put" into your home directory automatically. This is where you start your
work.
 You are in control of your home directory and the files which reside there. You are also in control
of the file access permissions (discussed later) to the files in your home directory. Generally, you
alone should be able to create/delete/modify files in your home directory. Others may have
permission to read or execute your files as you determine.
 In most LINUX systems, you can "move around" or navigate to other parts of the file system
outside of your home directory. This depends upon how the file permissions have been set by
others and/or the System Administrator
Linux File System

A file system is a logical collection of files on a partition or disk. A partition is a container for information
and can span an entire hard drive if desired.

Your hard drive can have several partitions, each of which usually contains only one file system, such as
one partition housing the / file system and another containing the /home file system.

One file system per partition allows for the logical maintenance and management of differing file
systems.
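
You can see which partition holds which file system, and the type of each, with the df command; for example:

$ df -hT

The -T option adds the filesystem type (ext4, xfs, etc.) and -h prints the sizes in human-readable units.
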

Everything in Linux is considered to be a file, including physical devices such as DVD-ROMs, USB
devices, floppy drives, and so forth.

Directory Structure:
Linux uses a hierarchical file system structure, much like an upside-down tree, with root (/) at the base of
the file system and all other directories spreading from there.

A LINUX filesystem is a collection of files and directories that has the following properties:

It has a root directory (/) that contains other files and directories.

Each file or directory is uniquely identified by its name, the directory in which it resides, and a unique
identifier, typically called an inode.

By convention, the root directory has an inode number of 2 and the lost+found directory has an inode
number of 3. Inode numbers 0 and 1 are not used. File inode numbers can be seen by specifying the -i
option to the ls command.

It is self contained. There are no dependencies between one filesystem and any other.
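
A quick way to look at inode numbers on your own system (the conventional inode 2 for the root directory holds on common Linux filesystems such as ext4, but it can differ on other filesystem types):

$ ls -id /
$ ls -i

The first command prints the inode number of the root directory itself; the second prints the inode number of every file in the current directory.
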
File System:

What are filesystems?


A filesystem is the methods and data structures that an operating system uses to keep track of files on a
disk or partition; that is, the way the files are organized on the disk. The word is also used to refer to a
partition or disk that is used to store the files, or to the type of the filesystem. Thus, one might say "I have
two filesystems", meaning one has two partitions on which one stores files, or that one is using the
"extended filesystem", meaning the type of the filesystem.

The difference between a disk or partition and the filesystem it contains is important. A few programs
(including, reasonably enough, programs that create filesystems) operate directly on the raw sectors of a
disk or partition; if there is an existing file system there it will be destroyed or seriously corrupted. Most
programs operate on a filesystem, and therefore won't work on a partition that doesn't contain one (or
that contains one of the wrong types).

Before a partition or disk can be used as a filesystem, it needs to be initialized, and the bookkeeping data
structures need to be written to the disk. This process is called making a filesystem.
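
On Linux, making a filesystem is done with the mkfs family of commands. A hedged sketch (the device name /dev/sdb1 is only an example, and mkfs destroys whatever data the partition already holds):

# mkfs -t ext4 /dev/sdb1
# mount /dev/sdb1 /mnt

The first command writes the bookkeeping structures for an ext4 filesystem onto the partition; the second mounts it so the new filesystem can be used.
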

Most LINUX filesystem types have a similar general structure, although the exact details vary quite a bit.
The central concepts are superblock, inode, data block, directory block, and indirection block. The
superblock contains information about the filesystem as a whole, such as its size (the exact information
here depends on the filesystem). An inode contains all information about a file, except its name. The
name is stored in the directory, together with the number of the inode. A directory entry consists of a
filename and the number of the inode which represents the file. The inode contains the numbers of
several data blocks, which are used to store the data in the file. There is space only for a few data block
numbers in the inode, however, and if more are needed, more space for pointers to the data blocks is
allocated dynamically. These dynamically allocated blocks are indirect blocks; the name indicates that in
order to find the data block, one has to find its number in the indirect block first.

LINUX filesystems usually allow one to create a hole in a file (this is done with the lseek() system call;
check the manual page), which means that the filesystem just pretends that at a particular place in the file
there are just zero bytes, but no actual disk sectors are reserved for that place in the file (this means that the
file will use a bit less disk space). This happens especially often for small binaries, Linux shared libraries,
some databases, and a few other special cases. (Holes are implemented by storing a special value as the
address of the data block in the indirect block or inode. This special address means that no data block is
allocated for that part of the file, ergo, there is a hole in the file.)
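
You can observe a hole by creating a sparse file and comparing its apparent size with the disk space it actually uses; a small sketch (sparsefile is just an example name):

$ truncate -s 100M sparsefile
$ ls -lh sparsefile
$ du -h sparsefile

ls reports the apparent size (100M), while du reports the blocks actually allocated, which is close to zero because the file is one large hole.
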

Comparing Filesystem Features

FS Name     Year Introduced   Original OS   Max File Size    Max FS Size     Journaling
FAT16       1983              MSDOS V2      4GB              16MB to 8GB     N
FAT32       1997              Windows 95    4GB              8GB to 2TB      N
HPFS        1988              OS/2          4GB              2TB             N
NTFS        1993              Windows NT    16EB             16EB            Y
HFS+        1998              Mac OS        8EB              ?               N
UFS2        2002              FreeBSD       512GB to 32PB    1YB             N
ext2        1993              Linux         16GB to 2TB      2TB to 32TB     N
ext3        1999              Linux         16GB to 2TB      2TB to 32TB     Y
ReiserFS3   2001              Linux         8TB              16TB            Y
ReiserFS4   2005              Linux         ?                ?               Y
XFS         1994              IRIX          9EB              9EB             Y
JFS         ?                 AIX           8EB              512TB to 4PB    Y
VxFS        1991              SVR4.0        16EB             ?               Y
ZFS         2004              Solaris 10    1YB              16EB            N

This topic is loosely based on the Filesystem Hierarchy Standard (FHS), which attempts to set a standard
for how the directory tree in a Linux system should be organized. Such a standard has the advantage
that it will be easier to write or port software for Linux, and to administer Linux machines, since
everything should be in standardized places. There is no authority behind the standard that forces
anyone to comply with it, but it has gained the support of many Linux distributions. It is not a good idea
to break with the FHS without very compelling reasons. The FHS attempts to follow Linux tradition and
current trends, making Linux systems familiar to those with experience with other UNIX-like systems, and
vice versa.

The full directory tree is intended to be breakable into smaller parts, each capable of being on its own disk
or partition, to accommodate disk size limits and to ease backup and other system administration
tasks. The major parts are the root (/ ), /usr , /var , and /home filesystems (see the following figure). Each
part has a different purpose. The directory tree has been designed so that it works well in a network of
Linux machines which may share some parts of the filesystems over a read-only device (e.g., a CD-ROM),
or over the network with NFS.

Parts of a Linux directory tree. Dashed lines indicate partition limits

The roles of the different parts of the directory tree are described below
 The root filesystem is specific for each machine (it is generally stored on a local disk, although it could be
a ramdisk or network drive as well) and contains the files that are necessary for booting the system up,
and to bring it up to such a state that the other filesystems may be mounted. The contents of the root
filesystem will therefore be sufficient for the single user state. It will also contain tools for fixing a broken
system, and for recovering lost files from backups.

 The /usr filesystem contains all commands, libraries, manual pages, and other unchanging files needed
during normal operation. No files in /usr should be specific for any given machine, nor should they be
modified during normal use. This allows the files to be shared over the network, which can be cost-
effective since it saves disk space (there can easily be hundreds of megabytes, increasingly multiple
gigabytes in /usr). It can make administration easier (only the master /usr needs to be changed when
updating an application, not each machine separately) to have /usr network mounted. Even if the
filesystem is on a local disk, it could be mounted read-only, to lessen the chance of filesystem corruption
during a crash.

 The /var filesystem contains files that change, such as spool directories (for mail, news, printers, etc), log
files, formatted manual pages, and temporary files. Traditionally everything in /var has been somewhere
below /usr , but that made it impossible to mount /usr read-only.

 The /home filesystem contains the users' home directories, i.e., all the real data on the system. Separating
home directories into their own directory tree or filesystem makes backups easier; the other parts often do
not have to be backed up, or at least not as often, since they seldom change. A big /home might have to be
broken across several filesystems, which requires adding an extra naming level below /home, for example
/home/students and /home/staff.

Although the different parts have been called filesystems above, there is no requirement that they
actually be on separate filesystems. They could easily be kept in a single one if the system is a small
single-user system and the user wants to keep things simple. The directory tree might also be divided
into filesystems differently, depending on how large the disks are, and how space is allocated for various
purposes. The important part, though, is that all the standard names work; even if, say, /var and /usr are
actually on the same partition, the names /usr/lib/libc.a and /var/log/messages must work, for example by
moving files below /var into /usr/var, and making /var a symlink to /usr/var.

The Linux filesystem structure groups files according to purpose, i.e., all commands are in one place, all
data files in another, documentation in a third, and so on. An alternative would be to group files
according to the program they belong to, i.e., all Emacs files would be in one directory, all TeX in another,
and so on. The problem with the latter approach is that it makes it difficult to share files (the program
directory often contains both static and sharable and changing and non-sharable files), and sometimes to
even find the files (e.g., manual pages in a huge number of places, and making the manual page
programs find all of them is a maintenance nightmare).

The root filesystem should generally be small, since it contains very critical files and a small, infrequently
modified filesystem has a better chance of not getting corrupted. A corrupted root filesystem will
generally mean that the system becomes unbootable except with special measures (e.g., from a floppy), so
you don't want to risk it.
The root directory generally doesn't contain any files, except perhaps on older systems where the
standard boot image for the system, usually called /vmlinuz, was kept there. (Most distributions have
moved those files to the /boot directory.)
1. / – Root

 Every single file and directory starts from the root directory.
 Only root user has write privilege under this directory.
 Please note that /root is root user’s home directory, which is not same as /.

2. /bin – User Binaries

 Contains binary executables.


 Common Linux commands you need to use in single-user mode are located under this
directory.
 Commands used by all the users of the system are located here.
 For example: ps, ls, ping, grep, cp.

3. /sbin – System Binaries

 Just like /bin, /sbin also contains binary executables.


 But the Linux commands located under this directory are typically used by the system
administrator, for system maintenance purposes.
 For example: iptables, reboot, fdisk, ifconfig, swapon

4. /etc – Configuration Files

 Contains configuration files required by all programs.


 This also contains startup and shutdown shell scripts used to start/stop individual
programs.
 For example: /etc/resolv.conf, /etc/logrotate.conf

5. /dev – Device Files

 Contains device files.


 These include terminal devices, USB devices, or any other device attached to the system.
 For example: /dev/tty1, /dev/usbmon0

6. /proc – Process Information

 Contains information about system processes.


 This is a pseudo filesystem that contains information about running processes. For example, the
/proc/{pid} directory contains information about the process with that particular pid.
 This is a virtual filesystem with text information about system resources. For example:
/proc/uptime

7. /var – Variable Files

 var stands for variable files.


 Content of the files that are expected to grow can be found under this directory.
 This includes — system log files (/var/log); packages and database files (/var/lib); emails
(/var/mail); print queues (/var/spool); lock files (/var/lock); temp files needed across
reboots (/var/tmp);

8. /tmp – Temporary Files

 Directory that contains temporary files created by system and users.


 Files under this directory are deleted when system is rebooted.

9. /usr – User Programs

 Contains binaries, libraries, documentation, and source-code for second level programs.
 /usr/bin contains binary files for user programs. If you can’t find a user binary under /bin,
look under /usr/bin. For example: at, awk, cc, less, scp
 /usr/sbin contains binary files for system administrators. If you can’t find a system binary
under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
 /usr/lib contains libraries for /usr/bin and /usr/sbin
 /usr/local contains user programs that you install from source. For example, when you
install Apache from source, it goes under /usr/local/apache2

10. /home – Home Directories

 Home directories for all users to store their personal files.


 For example: /home/john, /home/nikita

11. /boot – Boot Loader Files

 Contains boot loader related files.


 The kernel initrd, vmlinuz, and GRUB files are located under /boot
 For example: initrd.img-2.6.32-24-generic, vmlinuz-2.6.32-24-generic

12. /lib – System Libraries

 Contains library files that support the binaries located under /bin and /sbin
 Library filenames are either ld* or lib*.so.*
 For example: ld-2.11.1.so, libncurses.so.5.7

13. /opt – Optional add-on Applications

 opt stands for optional.


 Contains add-on applications from individual vendors.
 Add-on applications should be installed under /opt/ or an /opt/ sub-directory.

14. /mnt – Mount Directory


 Temporary mount directory where sysadmins can mount filesystems.

15. /media – Removable Media Devices

 Temporary mount directory for removable devices.


 For examples, /media/cdrom for CD-ROM; /media/floppy for floppy drives;
/media/cdrecorder for CD writer

16. /srv – Service Data

 srv stands for service.


 Contains data related to server-specific services.
 For example, /srv/cvs contains CVS related data
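
To see most of these directories on your own system, simply list the root directory:

$ ls -l /

The output should show /bin, /boot, /dev, /etc, /home, /lib, /opt, /proc, /tmp, /usr, /var, and so on (the exact set varies between distributions).
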
File Names

 LINUX permits file names to use most characters, but avoid spaces, tabs and characters that have
a special meaning to the shell, such as:

& ; ( ) | ? \ ' " ` [ ] { } < > $ - ! /

 Case Sensitivity: uppercase and lowercase are not the same! These are three different files:

NOVEMBER November november

 Length: can be up to 255 characters

 Extensions: may be used to identify types of files


libc.a - archive, library file
program.c - C language source file
alpha2.f - Fortran source file
xwd2ps.o - Object/executable code
mygames.Z - Compressed file

 Hidden Files: have names that begin with a dot (.) For example:
.cshrc .login .mailrc .mwmrc

 Uniqueness: as with children in a family, no two files with the same parent directory can have the
same name. Files located in separate directories can have identical names.

 Reserved Filenames:
/ - the root directory (slash)
. - current directory (period)
.. - parent directory (double period)
~ - your home directory (tilde)
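
These reserved names come up constantly when navigating the filesystem; for example:

$ cd ~
$ cd ..
$ ls .
$ ls /

The first command returns to your home directory, the second moves up to the parent directory, the third lists the current directory, and the fourth lists the root directory.
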
Password Standards

When your account is issued, you will be given an initial password. It is important for system and
personal security that the password for your account be changed to something of your choosing. The
command for changing a password is "passwd". You will be asked both for your old password and to
type your new selected password twice. If you mistype your old password or do not type your new
password the same way twice, the system will indicate that the password has not been changed.
Some system administrators have installed programs that check for appropriateness of password (is it
cryptic enough for reasonable system security). A password change may be rejected by this program.
When choosing a password, it is important that it be something that could not be guessed -- either by
somebody unknown to you trying to break in, or by an acquaintance who knows you. Suggestions for
choosing and using a password follow:

Don't
use a word (or words) in any language
use a proper name
use information that can be found in your wallet
use information commonly known about you (car license, pet name, etc)
use control characters. Some systems can't handle them
write your password anywhere
ever give your password to *anybody*

Do
use a mixture of character types (alphabetic, numeric, special)
use a mixture of upper case and lower case
use at least 6 characters
choose a password you can remember
change your password often
make sure nobody is looking over your shoulder when you are entering your password
Change Password in LINUX

How do I change the password in LINUX?

To modify a user's password or your own password in LINUX, use the passwd command. Open the
terminal and then type the passwd command; the characters entered for the new password are not
displayed on the screen, in order to avoid the password being seen by a passer-by. The passwd command
prompts for the new password twice in order to detect any typing errors. The encrypted password is
stored in the /etc/shadow file.

Change Any User's Password


Login as the root user and type the command:
# passwd userName
# passwd vivek
# passwd tom

Sample outputs:
Enter new LINUX password:
Retype new LINUX password:
passwd: password updated successfully

Change Your Own Password


Simply type the passwd command:
$ passwd

Sample outputs:
(current) LINUX password:
Enter new LINUX password:
Retype new LINUX password:
passwd: password updated successfully
Difference between locate and find command in Linux

Two popular commands for locating files on Linux are find and locate. Depending on the size of your
file system and the depth of your search, the find command can sometimes take a long time to scan all of
the data. For example, if you search your entire filesystem for files named data.txt:

# find / -name data.txt

More likely than not, this will take on the order of minutes, if not longer, to return. A quicker method is
to use the locate command:

# locate data.txt

However, this efficiency comes at a cost: the data reported in the output of locate isn't as fresh as the
data reported by the find command. By default, the system will run updatedb, which takes a snapshot of
the system files once a day; locate uses this snapshot to quickly report what files are where. However,
recent file additions or removals (within 24 hours) are not recorded in the snapshot and are unknown
to locate.

The find command has a number of options and is very configurable. There are many ways to reduce
the depth and breadth of your search and make it more efficient.
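
For example, narrowing the starting directory, the search depth, and the file type already makes a search much cheaper; a sketch (the path is only an example):

# find /home -maxdepth 2 -type f -name data.txt

This only descends two directory levels below /home and only considers regular files, instead of walking the entire filesystem.
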

locate uses a previously built database; if the database has not been updated, the locate command will not
show recent files in its output. To sync the database, you must execute the updatedb command:

# updatedb
How to Use Wildcards

A wildcard is a character that can be used as a substitute for any of a class of characters in a search,
thereby greatly increasing the flexibility and efficiency of searches.

Wildcards are commonly used in shell commands in Linux and other Unix-like operating systems. A
shell is a program that provides a text-only user interface and whose main function is to execute
commands typed in by users and display their results.

Wildcards are also used in regular expressions and programming languages. Regular expressions are a
pattern matching system that uses strings (i.e., sequences of characters) constructed according to pre-
defined syntax rules to find desired strings in text.

The term wildcard or wild card was originally used in card games to describe a card that can be assigned
any value that its holder desires. However, its usage has spread so that it is now used to describe an
unknown or unpredictable factor in a variety of fields.

Star Wildcard

Three types of wildcards are used with Linux commands. The most frequently employed and usually the
most useful is the star wildcard, which is the same as an asterisk (*). The star wildcard has the broadest
meaning of any of the wildcards, as it can represent zero characters, all single characters or any string.

As an example, the file command provides information about any filesystem object (i.e., file, directory or
link) that is provided to it as an argument (i.e., input). Because the star wildcard represents every string,
it can be used as the argument for file to return information about every object in the specified directory.
Thus, the following would display information about every object in the current directory (i.e., the
directory in which the user is currently working):

file *

If there are no matches, an error message is returned, such as *: can't stat `*' (No such file or directory). In
the case of this example, the only way that there would be no matches is if the directory were empty.

Wildcards can be combined with other characters to represent parts of strings. For example, to represent
any filesystem object that has a .jpg filename extension, *.jpg would be used. Likewise, a* would
represent all objects that begin with a lower case (i.e., small) letter a.

As another example, the following would tell the ls command (which is used to list files) to provide the
names of all files in the current directory that have an .html or a .txt extension:

ls *.html *.txt

Likewise, the following would tell the rm command (which is used to remove files and directories) to
delete all files in the current directory that have the string xxx in their name:

rm *xxx*
Question Mark Wildcard

The question mark (?) is used as a wildcard character in shell commands to represent exactly one
character, which can be any single character. Thus, two question marks in succession would represent
any two characters in succession, and three question marks in succession would represent any string
consisting of three characters.

Thus, for example, the following would return data on all objects in the current directory whose names,
inclusive of any extensions, are exactly three characters in length:

file ???

And the following would provide data on all objects whose names are one, two or three characters in
length:

file ? ?? ???

As is the case with the star wildcard, the question mark wildcard can be used in combination with other
characters. For example, the following would provide information about all objects in the current
directory that begin with the letter a and are five characters in length:

file a????

The question mark wildcard can also be used in combination with other wildcards when separated by
some other character. For example, the following would return a list of all files in the current directory
that have a three-character filename extension:

ls *.???

Square Brackets Wildcard

The third type of wildcard in shell commands is a pair of square brackets, which can represent any of the
characters enclosed in the brackets. Thus, for example, the following would provide information about all
objects in the current directory that have an x, y and/or z in them:

file *[xyz]*

And the following would list all files that had an extension that begins with x, y or z:

ls *.[xyz]*

The same results can be achieved by merely using the star and question mark wildcards. However, it is
clearly more efficient to use the bracket wildcard.

When a hyphen is used between two characters in the square brackets wildcard, it indicates a range
inclusive of those two characters. For example, the following would provide information about all of the
objects in the current directory that begin with any letter from a through f:
file [a-f]*

And the following would provide information about every object in the current directory whose name
includes at least one numeral:

file *[0-9]*

The use of the square brackets to indicate a range can be combined with its use to indicate a list. Thus, for
example, the following would provide information about all filesystem objects whose names begin with
any letter from a through c or begin with s or t:

file [a-cst]*

Likewise, multiple sets of ranges can be specified. Thus, for instance, the following would return
information about all objects whose names begin with the first three or the final three lower case letters of
the alphabet:

file [a-cx-z]*

Sometimes it can be useful to have a succession of square bracket wildcards. For example, the following
would display all filenames in the current directory that consist of jones followed by a three-digit
number:

ls jones[0-9][0-9][0-9]

Other Wild Cards

\ (backslash) = is used as an "escape" character, i.e. to protect a subsequent special character. Thus, "\\"
searches for a backslash. Note you may need to use quotation marks and backslash(es).

^ (caret) = means "the beginning of the line". So "^a" means find a line starting with an "a".

$ (dollar sign) = means "the end of the line". So "a$" means find a line ending with an "a".

For example, this command searches the file myfile for lines starting with an "s" and ending with an "n",
and prints them to the standard output (screen):

cat myfile | grep '^s.*n$'


Soft Link and Hard Links

Example:
Create two files:
$ touch blah1
$ touch blah2

Enter some data into them:

$ echo "Cat" > blah1


$ echo "Dog" > blah2

And as expected:

$ cat blah1; cat blah2


Cat
Dog

Let's create hard and soft links:

$ ln blah1 blah1-hard
$ ln -s blah2 blah2-soft

Let's see what just happened:

$ ls -l

blah1
blah1-hard
blah2
blah2-soft -> blah2

Changing the name of blah1 does not matter:

$ mv blah1 blah1-new
$ cat blah1-hard
Cat

blah1-hard points to the inode, the contents, of the file - that wasn't changed.

$ mv blah2 blah2-new
$ ls blah2-soft
blah2-soft
$ cat blah2-soft
cat: blah2-soft: No such file or directory

The contents of the file could not be found because the soft link points to the name, which was changed,
and not to the contents.
Similarly, if blah1 is deleted, blah1-hard still holds the contents; if blah2 is deleted, blah2-soft is just a link
to a non-existing file.
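
You can confirm this behaviour by looking at the inode numbers, continuing the example above:

$ ls -li blah1-new blah1-hard
$ ls -li blah2-soft

The first command shows the same inode number on both lines, because a hard link is simply another name for the same inode; the second shows a different inode, because a soft link is a small separate file that merely stores the path it points to.
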
List folders and files in a directory
Written By: Alexandros Mavridis

The command:
ls - list directory contents

Information Commands:
ls --version
ls --help
info ls
man ls

Contents:
Listing Folders: Non Hidden Folders, Hidden Folders, Non Hidden And Hidden Folders
Listing Files: Non Hidden Files, Hidden Files, Non Hidden And Hidden Files
Listing Folders and Files: Non Hidden Folders and Files, Hidden Folders And Files, Non Hidden And Hidden Folders And Files
Sources

Options Used In This Document:
-r, --reverse                  reverse order while sorting
-l                             use a long listing format
-t                             sort by modification time, newest first
-i, --inode                    print the index number of each file
-a, --all                      do not ignore entries starting with .
-d, --directory                list directories themselves, not their contents
-p, --indicator-style=slash    append / indicator to directories
--group-directories-first      group directories before files

All the commands in this document can be piped to wc -l to print the number of folders or files instead of
the folders and files themselves. For example:

ls -d */ | wc -l
A. Listing Folders

Non hidden folders

ls -d */
    Prints all non hidden folders in the current working directory in alphabetical order.

ls -dr */
    Prints all non hidden folders in the current working directory in reverse alphabetical order.

ls -dl */
ls -l | grep ^d
ls -l | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in alphabetical order.

ls -dlr */
ls -lr | grep ^d
ls -lr | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in reverse alphabetical order.

ls -dt */
    Prints all non hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dtr */
    Prints all non hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt */
ls -lt | grep ^d
ls -lt | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dltr */
ls -ltr | grep ^d
ls -ltr | awk '{if ($1 ~ /d/) print $0}'
    Prints in detail all non hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -di */
    Prints all non hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -dri */
    Prints all non hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dli */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -drli */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti */
    Prints all non hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri */
    Prints all non hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri */
    Prints in detail all non hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.
Hidden folders

ls -d .*/
    Prints all hidden folders in the current working directory in alphabetical order.

ls -dr .*/
    Prints all hidden folders in the current working directory in reverse alphabetical order.

ls -dl .*/
    Prints in detail all hidden folders in the current working directory in alphabetical order.

ls -dlr .*/
    Prints in detail all hidden folders in the current working directory in reverse alphabetical order.

ls -dt .*/
    Prints all hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dtr .*/
    Prints all hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt .*/
    Prints in detail all hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dltr .*/
    Prints in detail all hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -di .*/
    Prints all hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -dri .*/
    Prints all hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dli .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -drli .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti .*/
    Prints all hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri .*/
    Prints all hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri .*/
    Prints in detail all hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

Non hidden and hidden folders

ls -d */ .*/
    Prints all non hidden and hidden folders in the current working directory in alphabetical order.

ls -dr */ .*/
    Prints all non hidden and hidden folders in the current working directory in reverse alphabetical order.

ls -dl */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in alphabetical order.

ls -dlr */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in reverse alphabetical order.

ls -dt */ .*/
    Prints all non hidden and hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dtr */ .*/
    Prints all non hidden and hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in chronological order, going from newest to oldest.

ls -dltr */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory in reverse chronological order, going from oldest to newest.

ls -di */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -dri */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dli */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in alphabetical order.

ls -drli */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri */ .*/
    Prints all non hidden and hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri */ .*/
    Prints in detail all non hidden and hidden folders in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

B. Listing Files

Non hidden files

ls -p | grep -v /
    Prints all non hidden files in the current working directory in alphabetical order.

ls -pr | grep -v /
    Prints all non hidden files in the current working directory in reverse alphabetical order.

ls -pl | grep -v /
ls -l | grep -v ^d
ls -l | grep '^\-'
    Prints in detail all non hidden files in the current working directory in alphabetical order.

ls -plr | grep -v /
ls -lr | grep -v ^d
ls -lr | grep '^\-'
    Prints in detail all non hidden files in the current working directory in reverse alphabetical order.

ls -pt | grep -v /
    Prints all non hidden files in the current working directory in chronological order, going from newest to oldest.

ls -ptr | grep -v /
    Prints all non hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -plt | grep -v /
ls -lt | grep -v ^d
ls -lt | grep '^\-'
    Prints in detail all non hidden files in the current working directory in chronological order, going from newest to oldest.

ls -pltr | grep -v /
ls -ltr | grep -v ^d
ls -ltr | grep '^\-'
    Prints in detail all non hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -pi | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -pri | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -pli | grep -v /
ls -li | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -plri | grep -v /
ls -lri | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -pti | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -ptri | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -plti | grep -v /
ls -lti | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -pltri | grep -v /
ls -ltri | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.
Hidden Files

ls -d .?*
ls -a | grep '^\.'
    Prints all hidden files in the current working directory in alphabetical order.

ls -dr .?*
ls -ar | grep '^\.'
    Prints all hidden files in the current working directory in reverse alphabetical order.

ls -ld .?*
    Prints in detail all hidden files in the current working directory in alphabetical order.

ls -ldr .?*
    Prints in detail all hidden files in the current working directory in reverse alphabetical order.

ls -dt .?*
ls -at | grep '^\.'
    Prints all hidden files in the current working directory in chronological order, going from newest to oldest.

ls -dtr .?*
ls -atr | grep '^\.'
    Prints all hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -dlt .?*
    Prints in detail all hidden files in the current working directory in chronological order, going from newest to oldest.

ls -dltr .?*
    Prints in detail all hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -di .?*
    Prints all hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -dir .?*
    Prints all hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -ldi .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -ldri .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -dti .?*
    Prints all hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dtri .?*
    Prints all hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -dlti .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -dltri .?*
    Prints in detail all hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

Non hidden and hidden files

ls -pa | grep -v /
    Prints all non hidden and hidden files in the current working directory in alphabetical order.

ls -pra | grep -v /
    Prints all non hidden and hidden files in the current working directory in reverse alphabetical order.

ls -pla | grep -v /
ls -la | grep -v ^d
ls -la | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in alphabetical order.

ls -prla | grep -v /
ls -rla | grep -v ^d
ls -lra | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in reverse alphabetical order.

ls -pta | grep -v /
    Prints all non hidden and hidden files in the current working directory in chronological order, going from newest to oldest.

ls -ptra | grep -v /
    Prints all non hidden and hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -plta | grep -v /
ls -lta | grep -v ^d
ls -lta | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in chronological order, going from newest to oldest.

ls -pltra | grep -v /
ls -ltra | grep -v ^d
ls -ltra | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -pai | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -prai | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -plai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -prlai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -ptia | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -ptrai | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -pltai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -pltrai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

C. Listing Folders And Files


Non hidden folders and files

Command Output

ls                         Prints all non hidden folders and files in the current working directory in alphabetical order.
ls --group-directories-first Prints all non hidden folders in the current
working directory in alphabetical order,
followed by all non hidden files in the current
working directory in alphabetical order.
ls -r Prints all non hidden folders and files in the current
working directory in reverse alphabetical order.
ls -r --group-directories-first Prints all non hidden folders in the current
working directory in reverse alphabetical order,
followed by all non hidden files in the current
working directory in reverse alphabetical order.
ls -l Prints in detail all non hidden folders and files in
the current working directory in alphabetical
order.
ls -l --group-directories-first Prints in detail all non hidden folders in the
current working directory in alphabetical order,
followed by all non hidden files in the current
working directory in alphabetical order.
ls -lr Prints in detail all non hidden folders and files in
the current working directory in reverse
alphabetical order.
ls -lr --group-directories-first Prints in detail all non hidden folders in the
current working directory in reverse alphabetical
order, followed by all non hidden files in the
current working directory in reverse alphabetical
order.
ls -t Prints all non hidden folders and files in the
current working directory in chronological order,
going from newest to oldest.
ls -t --group-directories-first Prints all non hidden folders in the current
working directory in chronological order, going
from newest to oldest, followed by all non
hidden files in the current working directory in
chronological order, going from newest to
oldest.
ls -tr Prints all non hidden folders and files in the
current working directory in reverse
chronological order, going from oldest to
newest.
ls -tr --group-directories-first Prints all non hidden folders in the current
working directory in reverse chronological
order, going from oldest to newest, followed by
all non hidden files in the current working
directory in reverse chronological order, going
from oldest to newest.
ls -lt Prints in detail all non hidden folders and files in
the current working directory in chronological
order, going from newest to oldest.
ls -lt --group-directories-first Prints in detail all non hidden folders in the
current working directory in chronological order,
going from newest to oldest, followed by all non
hidden files in the current working directory in
chronological order, going from newest to
oldest.
ls -ltr Prints in detail all non hidden folders and files in
the current working directory in reverse
chronological order, going from oldest to newest.
ls -ltr --group-directories-first Prints in detail all non hidden folders in the
current working directory in reverse
chronological order, going from oldest to newest,
followed by all non hidden files in the current
working directory in reverse chronological order,
going from oldest to newest.
ls -i Prints all non hidden folders and files in the
current working directory, including inode
numbers, in alphabetical order.
ls -i --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
alphabetical order, followed by all non hidden
files in the current working directory, including
inode numbers, in alphabetical order.
ls -ri Prints all non hidden folders and files in the
current working directory, including inode
numbers, in reverse alphabetical order.
ls -ri --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
reverse alphabetical order, followed by all non
hidden files in the current working directory,
including inode numbers, in reverse alphabetical
order.
ls -li Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in alphabetical order.
ls -li --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in alphabetical order, followed by all
non hidden files in the current working
directory, including inode numbers, in
alphabetical order.
ls -lri Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in reverse alphabetical order.
ls -lri --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in reverse alphabetical order, followed
by all non hidden files in the current working
directory, including inode numbers, in reverse
alphabetical order.
ls -ti Prints all non hidden folders and files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -ti --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all non hidden files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -tri Prints all non hidden folders and files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -tri --group-directories-first Prints all non hidden folders in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest, followed by all non hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -lti Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -lti --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest, followed by all non hidden
files in the current working directory, including
inode numbers, in chronological order, going
from newest to oldest.
ls -ltri Prints in detail all non hidden folders and files in
the current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -ltri --group-directories-first Prints in detail all non hidden folders in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest, followed by all non
hidden files in the current working directory,
including inode numbers, in reverse
chronological order, going from oldest to
newest.
Hidden folders and files

Command Output

ls -d .*                   Prints all hidden folders and files in the current working directory in alphabetical order.
ls -d .[^.]* Prints all hidden folders and files in the current
working directory in alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -d .* --group-directories-first Prints all hidden folders in the current working
directory in alphabetical order, followed by all
hidden files in the current working directory in
alphabetical order.
ls -d .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in alphabetical order, followed by all
hidden files in the current working directory in
alphabetical order. Returns an error if at least
one hidden folder or at least one hidden file
does not exist.
ls -dr .* Prints all hidden folders and files in the current
working directory in reverse alphabetical order.
ls -dr .[^.]* Prints all hidden folders and files in the current
working directory in reverse alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dr .* --group-directories-first Prints all hidden folders in the current working
directory in reverse alphabetical order, followed
by all hidden files in the current working
directory in reverse alphabetical order.
ls -dr .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in reverse alphabetical order, followed
by all hidden files in the current working
directory in reverse alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dl .* Prints in detail all hidden folders and files in the
current working directory in alphabetical order.
ls -dl .[^.]* Prints in detail all hidden folders and files in the
current working directory in alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dl .* --group-directories-first Prints in detail all hidden folders in the current
working directory in alphabetical order,
followed by all hidden files in the current
working directory in alphabetical order.
ls -dl .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in alphabetical order,
followed by all hidden files in the current
working directory in alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dlr .* Prints in detail all hidden folders and files in the
current working directory in reverse alphabetical
order.
ls -dlr .[^.]* Prints in detail all hidden folders and files in the
current working directory in reverse alphabetical
order. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dlr .* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse alphabetical order,
followed by all hidden files in the current
working directory in reverse alphabetical order.
ls -dlr .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse alphabetical order,
followed by all hidden files in the current
working directory in reverse alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dt .* Prints all hidden folders and files in the current
working directory in chronological order, going
from newest to oldest.
ls -dt .[^.]* Prints all hidden folders and files in the current
working directory in chronological order, going
from newest to oldest. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
ls -dt .* --group-directories-first Prints all hidden folders in the current working
directory in chronological order, going from
newest to oldest, followed by all hidden files in
the current working directory in chronological
order, going from newest to oldest.
ls -dt .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in chronological order, going from
newest to oldest, followed by all hidden files in
the current working directory in chronological
order, going from newest to oldest. Returns an
error if at least one hidden folder or at least one
hidden file does not exist.
ls -dtr .* Prints all hidden folders and files in the current
working directory in reverse chronological
order, going from oldest to newest.
ls -dtr .[^.]* Prints all hidden folders and files in the current
working directory in reverse chronological
order, going from oldest to newest. Returns an
error if at least one hidden folder or at least one
hidden file does not exist.
ls -dtr .* --group-directories-first Prints all hidden folders in the current working
directory in reverse chronological order, going
from oldest to newest, followed by all hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -dtr .[^.]* --group-directories-first Prints all hidden folders in the current working
directory in reverse chronological order, going
from oldest to newest, followed by all hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtl .* Prints in detail all hidden folders and files in the
current working directory in chronological order,
going from newest to oldest.
ls -dtl .[^.]* Prints in detail all hidden folders and files in the
current working directory in chronological order,
going from newest to oldest. Returns an error if
at least one hidden folder or at least one hidden
file does not exist.
ls -dtl .* --group-directories-first Prints in detail all hidden folders in the current
working directory in chronological order, going
from newest to oldest, followed by all hidden
files in the current working directory in
chronological order, going from newest to
oldest.
ls -dtl .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in chronological order, going
from newest to oldest, followed by all hidden
files in the current working directory in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtrl .* Prints in detail all hidden folders and files in the
current working directory in reverse
chronological order, going from oldest to newest.
ls -dtrl .[^.]* Prints in detail all hidden folders and files in the
current working directory in reverse
chronological order, going from oldest to newest.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dtrl .* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse chronological
order, going from oldest to newest, followed by
all hidden files in the current working directory
in reverse chronological order, going from oldest
to newest.
ls -dtrl .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory in reverse chronological
order, going from oldest to newest, followed by
all hidden files in the current working directory
in reverse chronological order, going from oldest
to newest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -di .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
alphabetical order.
ls -di .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
alphabetical order. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
ls -di .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order.
ls -di .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order. Returns an error
if at least one hidden folder or at least one
hidden file does not exist.
ls -dri .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse alphabetical order.
ls -dri .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse alphabetical order. Returns an error if at
least one hidden folder or at least one hidden file
does not exist.
ls -dri .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in reverse alphabetical order.
ls -dri .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in reverse alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dli .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in alphabetical order.
ls -dli .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in alphabetical order. Returns an error
if at least one hidden folder or at least one
hidden file does not exist.
ls -dli .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order.
ls -dli .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
alphabetical order, followed by all hidden files
in the current working directory, including inode
numbers, in alphabetical order. Returns an error
if at least one hidden folder or at least one
hidden file does not exist.
ls -dlri .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse alphabetical order.
ls -dlri .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse alphabetical order. Returns
an error if at least one hidden folder or at least
one hidden file does not exist.
ls -dlri .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse alphabetical order, followed by all hidden
files in the current working directory, including
inode numbers, in reverse alphabetical order.
ls -dlri .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse alphabetical order, followed by all hidden
files in the current working directory, including
inode numbers, in reverse alphabetical order.
Returns an error if at least one hidden folder or
at least one hidden file does not exist.
ls -dti .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest.
ls -dti .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dti .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest.
ls -dti .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtri .* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest.
ls -dtri .[^.]* Prints all hidden folders and files in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtri .* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest,
followed by all hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest.
ls -dtri .[^.]* --group-directories-first Prints all hidden folders in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest,
followed by all hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest.
Returns an error if at least one hidden folder or at
least one hidden file does not exist.
ls -dtli .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -dtli .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in chronological order, going from
newest to oldest. Returns an error if at least one
hidden folder or at least one hidden file does not
exist.
ls -dtli .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest.
ls -dtli .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest, followed by all hidden files in the current
working directory, including inode numbers, in
chronological order, going from newest to
oldest. Returns an error if at least one hidden
folder or at least one hidden file does not exist.
ls -dtrli .* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -dtrli .[^.]* Prints in detail all hidden folders and files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.
ls -dtrli .* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest, followed by all hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest.
ls -dtrli .[^.]* --group-directories-first Prints in detail all hidden folders in the current
working directory, including inode numbers, in
reverse chronological order, going from oldest to
newest, followed by all hidden files in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest. Returns an error if at least
one hidden folder or at least one hidden file does
not exist.

Non hidden and hidden folders and files

Command Output

ls -a                      Prints all non hidden and hidden folders and files in the current working directory in alphabetical order.
ls -a --group-directories-first Prints all non hidden and hidden folders in the
current working directory in alphabetical order,
followed by all non hidden and hidden files in
the current working directory in alphabetical
order.
ls -ar Prints all non hidden and hidden folders and
files in the current working directory in reverse
alphabetical order.
ls -ar --group-directories-first Prints all non hidden and hidden folders in the
current working directory in reverse alphabetical
order, followed by all non hidden and hidden
files in the current working directory in reverse
alphabetical order.
ls -la Prints in detail all non hidden and hidden folders
and files in the current working directory in
alphabetical order.
ls -la --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory in alphabetical
order, followed by all non hidden and hidden
files in the current working directory in
alphabetical order.
ls -lra Prints in detail all non hidden and hidden folders
and files in the current working directory in
reverse alphabetical order.
ls -lra --group-directories-first Prints in detail all non hidden and hidden folders in
the current working directory in reverse alphabetical
order, followed by all non hidden and hidden files in
the current working directory in reverse alphabetical
order.
ls -ta Prints all non hidden and hidden folders and
files in the current working directory in
chronological order, going from newest to
oldest.
ls -ta --group-directories-first Prints all non hidden and hidden folders in the
current working directory in chronological order,
going from newest to oldest, followed by all non
hidden and hidden files in the current working
directory in chronological order, going from
newest to oldest.
ls -tra Prints all non hidden and hidden folders and
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -tra --group-directories-first Prints all non hidden and hidden folders in the
current working directory in reverse
chronological order, going from oldest to
newest, followed by all non hidden and hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -tla Prints in detail all non hidden and hidden folders
and files in the current working directory in
chronological order, going from newest to
oldest.
ls -tla --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory in chronological
order, going from newest to oldest, followed by
all non hidden and hidden files in the current
working directory in chronological order, going
from newest to oldest.
ls -trla Prints in detail all non hidden and hidden folders
and files in the current working directory in
reverse chronological order, going from oldest to
newest.
ls -trla --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory in reverse
chronological order, going from oldest to
newest, followed by all non hidden and hidden
files in the current working directory in reverse
chronological order, going from oldest to
newest.
ls -ia Prints all non hidden and hidden folders and
files in the current working directory, including
inode numbers, in alphabetical order.
ls -ia --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode
numbers, in alphabetical order, followed by all
non hidden and hidden files in the current
working directory, including inode numbers, in
alphabetical order.
ls -iar Prints all non hidden and hidden folders and
files in the current working directory, including
inode numbers, in reverse alphabetical order.
ls -iar --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode
numbers, in reverse alphabetical order, followed
by all non hidden and hidden files in the current
working directory, including inode numbers, in
reverse alphabetical order.
ls -ial Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in alphabetical order.
ls -ial --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in alphabetical order, followed by all
non hidden and hidden files in the current
working directory, including inode numbers, in
alphabetical order.
ls -lrai Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in reverse alphabetical
order.
ls -lrai --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in reverse alphabetical order, followed
by all non hidden and hidden files in the current
working directory, including inode numbers, in
reverse alphabetical order.
ls -tia Prints all non hidden and hidden folders and files
in the current working directory, including inode
numbers, in chronological order, going from
newest to oldest.
ls -tia --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode numbers,
in chronological order, going from newest to oldest,
followed by all non hidden and hidden files in the
current working directory, including inode numbers,
in chronological order, going from newest to oldest.
ls -tiar Prints all non hidden and hidden folders and files in
the current working directory, including inode
numbers, in reverse chronological order, going from
oldest to newest.
ls -tiar --group-directories-first Prints all non hidden and hidden folders in the
current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest, followed by all non
hidden and hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to
newest.
ls -tlia Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in chronological order,
going from newest to oldest.
ls -tlia --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in chronological order, going from
newest to oldest, followed by all non hidden and
hidden files in the current working directory,
including inode numbers, in chronological order,
going from newest to oldest.
ls -tlari Prints in detail all non hidden and hidden folders
and files in the current working directory,
including inode numbers, in reverse
chronological order, going from oldest to
newest.
ls -tlari --group-directories-first Prints in detail all non hidden and hidden folders
in the current working directory, including inode
numbers, in reverse chronological order, going
from oldest to newest, followed by all non
hidden and hidden files in the current working
directory, including inode numbers, in reverse
chronological order, going from oldest to newest.

_____________________

Sources:
https://stackoverflow.com/questions/14352290/listing-only-directories-using-ls-in-bash-an-examination

https://serverfault.com/questions/368370/how-do-i-exclude-directories-when-listing-files

https://www.cyberciti.biz/faq/bash-shell-display-only-hidden-dot-files/

https://askubuntu.com/questions/468901/how-to-show-only-hidden-files-in-terminal

Imran Afzal for the command: ls -a | grep '^\.'


________________________________________________________________________________
Linux Command Line Structure

A command is a program that tells the Linux system to do something. It has the form:
command [options] [arguments]

where an argument indicates on what the command is to perform its action, usually a file or series of
files. An option modifies the command, changing the way it performs. Commands are case sensitive.
command and Commands are not the same.

Options are generally preceded by a hyphen (-), and for most commands, more than one option can be
strung together, in the form:
command -[option][option][option]
e.g.:
ls -alR = will perform a long list on all files in the current directory and recursively
perform the list through all sub-directories.

For most commands you can separate the options, preceding each with a hyphen, e.g.:
command -option1 -option2 -option3
as in: ls -a -l -R

Some commands have options that require parameters. Options requiring parameters are usually
specified separately, e.g.:
lpr -Pprinter3 -# 2 file
will send 2 copies of file to printer3.

These are the standard conventions for commands. However, not all Linux commands will follow the
standard. Some don’t require the hyphen before options and some won’t let you group options
together, i.e. they may require that each option be preceded by a hyphen and separated by whitespace
from other options and arguments.

Options and syntax for a command are listed in the man page for the command.
File Permissions

• UNIX is a multi-user system. Every file and directory in your account can be protected from or
made accessible to other users by changing its access permissions. Every user has responsibility
for controlling access to their files.

• Permissions for a file or directory may be restricted by type

• There are 3 types of permissions


• r - read
• w - write
• x - execute = running a program

• Each permission (rwx) can be controlled at three levels:


• u - user = yourself
• g - group = can be people in the same project
• o - other = everyone on the system

• File or directory permissions can be displayed by running the ls -l command


• -rwxrwxrwx

• Command to change permission


• chmod

Example:

Type User Group Everyone else

rwx rwx rwx

- = First dash or bit identifies the file type


--- = 2nd 3 bits defines the permission for user (file or dir owner)
--- = 3rd 3 bits defines the permission for group
--- = 4th 3 bits defines the permission for everyone else
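
For instance, a hypothetical listing (the file name, owner, and group below are made up for illustration) reads as follows:

$ ls -l myscript.sh
-rwxr-x--- 1 alice project 512 Jan 10 09:30 myscript.sh

Here the leading - marks a regular file, rwx means the owner (alice) can read, write, and execute it, r-x means members of the group (project) can read and execute it, and --- means everyone else has no access.
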
Permissions can also be changed through the numerical method. Each of the permission types is represented
by either a numeric equivalent:
read=4, write=2, execute=1
or a single letter:
read=r, write=w, execute=x

A permission of 4 or r would specify read permissions. If the permissions desired are read and write,
the 4 (representing read) and the 2 (representing write) are added together to make a permission of 6.
Therefore, a permission setting of 6 would allow read and write permissions.
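
For example, read and write for the user (4+2=6), read only for the group (4), and no access for everyone else (0) gives the mode 640; a quick sketch (file1 is just a placeholder name):

chmod 640 file1

After this, ls -l would show the permissions as -rw-r-----.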

Common Options
-f force (no error message is generated if the change is unsuccessful)
-R recursively descend through the directory structure and change the modes

Examples
If the permission desired for file1 is user: read, write, execute, group: read, execute, other: read,
execute, the command to use would be

chmod 755 file1 or chmod u=rwx,go=rx file1

Reminder: When giving permissions to group and other to use a file, it is necessary to allow at least
execute permission to the directories for the path in which the file is located. The easiest way to do
this is to be in the directory for which permissions need to be granted:

chmod 711 . or chmod u=rw,+x . or chmod u=rwx,go=x .

where the dot (.) indicates this directory.

File Ownership

chown - change ownership


Ownership of a file can be changed with the chown command. On most versions of Unix this can
only be done by the super-user, i.e. a normal user can’t give away ownership of their files. chown is
used as below, where # represents the shell prompt for the super-user:

Syntax
chown [options] user[:group] file (SVR4)
chown [options] user[.group] file (BSD)

Common Options
-R recursively descend through the directory structure
-f force, and don’t report any errors

Examples
# chown new_owner file

chgrp - change group


Anyone can change the group of files they own, to another group they belong to, with the chgrp
command.

Syntax
chgrp [options] group file

Common Options
-R recursively descend through the directory structure
-f force, and don’t report any errors

Examples
% chgrp new_group file1
Getting Help

 The "man" command

o The "man" command man gives you access to an on-line manual which potentially contains a
complete description of every command available on the system. In practice, the manual
usually contains a subset of all commands.
o man can also provide you with one line descriptions of commands which match a specified
keyword
o The online manual is divided into sections:

Section Description
------- -----------
1 User Commands
2 System Commands
3 Subroutines
4 Devices
5 File Formats
6 Games
7 Miscellaneous
8 System Administration
l Local Commands
n New Commands

o Examples of using the man command:

To display the manual page for the cp (copy files) command:


man cp
--More--23% at the bottom left of the screen means that only 23% of the man page is
displayed. Press the space bar to display more of it or type q to quit.

By default, the man page in section 1 is displayed if multiple sections exist. You can access a
different section by specifying the section. For example:
man 8 telnetd

Keyword searching: use the -k option followed by the keyword. Two examples appear below.
man -k mail
man -k 'copy files'

To view a one line description of what a command does:


whatis more
will display what the "more" command does:
more, page (1) - browse or page through a text file

 who - shows who is on the system


who
who am i

 finger - displays information about users, by name or login name


finger doe
finger userid
Prompting completion

The following example shows how command-line completion works in Bash. Other command line shells
may perform slightly differently.

First we type the first three letters of our command:

fir

Then we press Tab ↹ and because the only command in our system that starts with "fir" is "firefox", it
will be completed to:

firefox

Then we start typing the file name:

firefox i

But this time introduction-to-command-line-completion.html is not the only file in the current directory
that starts with "i". The directory also contains files introduction-to-bash.html and introduction-to-
firefox.html. The system can't decide which of these filenames we wanted to type, but it does know that
the file must begin with "introduction-to-", so the command will be completed to:

firefox introduction-to-

Now we type "c":

firefox introduction-to-c

After pressing Tab ↹ it will be completed to the whole filename:

firefox introduction-to-command-line-completion.html

In short we typed:

fir Tab ↹ i Tab ↹ c Tab ↹

This is just eight keystrokes, which is considerably less than 52 keystrokes we would have needed to type
without using command-line completion.
Rotating completion

The following example shows how command-line completion works with rotating completion, such as
Windows's CMD uses.

We follow the same procedure as for prompting completion until we have:

firefox i

We press Tab ↹ once, with the result:

firefox introduction-to-bash.html

We press Tab ↹ again, getting:

firefox introduction-to-command-line-completion.html

In short we typed:

fir Tab ↹ i Tab ↹ Tab ↹


Adding Text to Files:

echo command
 echo "Your text goes here" > filename (To add text and create a new file)
 echo "Additional text" >> filename (To append to an existing file)
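
Putting the two forms together, a short session might look like this (notes.txt is a placeholder file name):

echo "first line" > notes.txt
echo "second line" >> notes.txt
cat notes.txt
first line
second line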

cp command
 cp existing-file new-filename (To copy an existing file to new file)
 cat existing-file > new-filename (cat the content of an existing file and add to new file. This
command does the same as above)

vi command
 vi filename (Create a new file and enter text using vi insert mode)
Pipes

 A pipe is used by the shell to connect the stdout of one command directly to the stdin of another
command.
 The symbol for a pipe is the vertical bar ( | ). The command syntax is:

command1 [arguments] | command2 [arguments]

 Pipes accomplish with one command what otherwise would take intermediate files and multiple
commands. For example, operation 1 and operation 2 are equivalent:

Operation 1
who > temp
sort temp

Operation 2
who | sort

 Pipes do not affect the contents of the input files.


 Two very common uses of a pipe are with the "more" and "grep" utilities. Some examples:

ls -al | more
who | more
ps ug | grep myuserid
who | grep kelly
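
Pipelines are not limited to two commands; each stage reads the output of the previous one. A sketch using commands covered elsewhere in these notes:

ls -al | grep '^d' | more

This produces a long listing of the current directory, keeps only the lines beginning with d (the directories), and pages the result one screenful at a time.
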
File Maintenance Commands

cp
Copies files. Will overwrite unless otherwise specified. Must also have write permission in the
destination directory.

Example:
cp sample.f sample2.f - copies sample.f to sample2.f
cp -R dir1 dir2 - copies contents of directory dir1 to dir2
cp -i file.1 file.new - prompts if file.new will be overwritten
cp *.txt chapt1 - copies all files with .txt suffix to directory
chapt1
cp /usr/doc/README ~ - copies file to your home directory
cp ~betty/index . - copies the file "index" from user betty's
home directory to current directory

rm
Deletes/removes files or directories if file permissions permit

Example:
rm sample.f - deletes sample.f
rm chap?.txt - deletes all files whose names begin with chap,
followed by exactly one character, and end
with .txt
rm -i * - deletes all files in current directory but asks
first for each file
rm -r /olddir - recursively removes all files in the directory
olddir, including the directory itself

mv
Moves files. It will overwrite unless otherwise specified. Must also have write permission in the
destination directory.

Example:
mv sample.f sample2.f - moves sample.f to sample2.f
mv dir1 newdir/dir2 - moves contents of directory dir1 to
newdir/dir2
mv -i file.1 file.new - prompts if file.new will be overwritten
mv *.txt chapt1 - moves all files with .txt suffix to
directory chapt1
mkdir
Make directory. Will create the new directory in your working directory by default.

Example:
mkdir /u/training/data
mkdir data2

rmdir
Remove directory. Directories must be empty before you remove them.
rmdir project1

To recursively remove nested directories, use the rm command with the -r option:
rm -r directory_name

chgrp
Changes the group ownership of a file or directory.

Syntax
chgrp [ -f ] [ -h ] [-R ] Group { File ... | Directory ... }
chgrp -R [ -f ] [ -H | -L | -P ] Group { File... | Directory... }

Description
The chgrp command changes the group of the file or directory specified by the File or Directory parameter
to the group specified by the Group parameter. The value of the Group parameter can be a group name
from the group database or a numeric group ID. When a symbolic link is encountered and you have not
specified the -h or -P flags, the chgrp command changes the group ownership of the file or directory
pointed to by the link and not the group ownership of the link itself.
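
A minimal sketch, assuming you belong to a group named projects and that notes.txt and dir1 exist and are owned by you:

chgrp projects notes.txt
chgrp -R projects dir1

The second form changes the group of dir1 and of everything beneath it.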

chown
The chown command is used to change the owner and group of files, directories and links. By default,
the owner of a filesystem object is the user that created it. The group is a set of users that share the same
access permissions (i.e., read, write and execute) for that object. The basic syntax for using chown to
change owners is

chown [options] new_owner object(s)

new_owner is the user name or the numeric user ID (UID) of the new owner, and object is the name of
the target file, directory or link. The ownership of any number of objects can be changed simultaneously.

For example, the following would transfer the ownership of a file named file1 and a directory named dir1
to a new owner named alice:
chown alice file1 dir1

In order to perform the above command, most systems are configured by default to require access to the
root (i.e., system administrator) account, which can be obtained on a personal computer by using the su
(i.e., substitute user) command. An error message will be returned in the event that the user does not
have the proper permissions or that the specified new owner or target(s) does not exist (or is spelled
incorrectly).

The ownership and group of a filesystem object can be confirmed by using the ls command with its -l (i.e.,
long) option. The owner is shown in the third column and the group in the fourth. Thus, for example, the
owner and group of file1 can be seen by using the following:

ls -l file1

The basic syntax for using chown to change groups is

chown [options] :new_group object(s)

or

chown [options] .new_group object(s)

The only difference between the two versions is that the name or numeric ID of the new group is
preceded directly by a colon in the former and by a dot in the latter; there is no functional difference. In
this case, chown performs the same function as the chgrp (i.e., change group) command.

The owner and group can be changed simultaneously by combining the syntaxes for changing owner and
group. That is, the name or UID of the new owner is followed directly (i.e., with no intervening spaces)
by a period or colon, which is followed directly by the name or numeric ID of the new group, which, in
turn, is followed by a space and then by the names of the target files, directories and/or links.

Thus, for example, the following would change the owner of a file named file2 to the user with the user
name bob and change its group to group2:

chown bob:group2 file2

If a user name or UID is followed directly by a colon or dot but no group name is provided, then the
group is changed to that user's login group. Thus, for example, the following would change the
ownership of file3 to cathy and would also change that file's group to the login group of the new owner
(which by default is usually the same as the new owner):
chown cathy: file3

Among chown's few options is -R, which operates on filesystem objects recursively. That is, when used
on a directory, it can change the ownership and/or group of all objects within the directory tree beginning
with that directory rather than just the ownership of the directory itself.
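
For instance, reusing the placeholder names from the examples above, the entire tree under dir1 could be handed over in one command (run with the appropriate privileges):

chown -R alice:group2 dir1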

The -v (verbose) option provides information about every object processed. The -c (changes) option is similar,
but reports only when a change is actually made. The --help option displays a usage summary (fuller
documentation is in the man page), and the --version option outputs version information.

chmod
Change access permissions, change mode.

Syntax
chmod [Options]... Mode [,Mode]... file...
chmod [Options]... Numeric_Mode file...
chmod [Options]... --reference=RFile file...

Options
-f, --silent, --quiet suppress most error messages
-v, --verbose output a diagnostic for every file processed
-c, --changes like verbose but report only when a change is made
--reference=RFile use RFile's mode instead of MODE values
-R, --recursive change files and directories recursively
--help display help and exit
--version output version information and exit

chmod changes the permissions of each given file according to mode, where mode describes the
permissions to modify. Mode can be specified with octal numbers or with letters. Using letters is easier to
understand for most people.
Permissions:

Owner Group Other


Read
Write
Execute

Numeric mode:
From one to four octal digits
Any omitted digits are assumed to be leading zeros.
The first digit = selects attributes for the set user ID (4) and set group ID (2) and save text image (1)
The second digit = permissions for the user who owns the file: read (4), write (2), and execute (1)
The third digit = permissions for other users in the file's group: read (4), write (2), and execute (1)
The fourth digit = permissions for other users NOT in the file's group: read (4), write (2), and execute (1)

The octal (0-7) value is calculated by adding up the values for each digit
User (rwx) = 4+2+1 = 7
Group (rx) = 4+1 = 5
World (rx) = 4+1 = 5
chmod mode = 0755

Examples
chmod 400 file - Read by owner
chmod 040 file - Read by group
chmod 004 file - Read by world

chmod 200 file - Write by owner


chmod 020 file - Write by group
chmod 002 file - Write by world

chmod 100 file - execute by owner


chmod 010 file - execute by group
chmod 001 file - execute by world

To combine these, just add the numbers together:


chmod 444 file - Allow read permission to owner and group and world
chmod 777 file - Allow everyone to read, write, and execute file

Symbolic Mode
The format of a symbolic mode is a combination of the letters +-= rwxXstugoa
Multiple symbolic operations can be given, separated by commas.
The full syntax is [ugoa...][[+-=][rwxXstugo...]...][,...] but this is explained below.

A combination of the letters ugoa controls which users' access to the file will be changed:

User letter
The user who owns it u
Other users in the file's Group g
Other users not in the file's group o
All users a

If none of these are given, the effect is as if a had been given, but bits that are set in the umask are not affected.
All users (a) is effectively user + group + others

The operator '+' causes the permissions selected to be added to the existing permissions of each file; '-'
causes them to be removed; and '=' causes them to be the only permissions that the file has.

The letters 'rwxXstugo' select the new permissions for the affected users:

Permission letter
Read r
Write w
Execute (or access for directories) x
Execute only if the file is a directory
(or already has execute permission for some user) X
Set user or group ID on execution s
Save program text on swap device t

The permissions that the User who owns the file currently has for it        u
The permissions that other users in the file's Group have for it            g
The permissions that Other users not in the file's group have for it        o

Examples
Deny execute permission to everyone:
chmod a-x file

Allow read permission to everyone:


chmod a+r file

Make a file readable and writable by the group and others:


chmod go+rw file

Make a shell script executable by the user/owner


$ chmod u+x myscript.sh

Allow everyone to read, write, and execute the file and turn on the set group-ID:
chmod =rwx,g+s file

Notes:
When chmod is applied to a directory:
read = list files in the directory
write = add new files to the directory
execute = access files in the directory
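
As a sketch (docs is a placeholder directory name), letting the group list and enter a directory without letting it add or remove files there:

chmod g+rx docs
chmod g-w docs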

chmod never changes the permissions of symbolic links. This is not a problem since the permissions of
symbolic links are never used. However, for each symbolic link listed on the command line, chmod
changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered
during recursive directory traversals

Visit chmod calculator


http://www.onlineconversion.com/html_chmod_calculator.htm
File Display Commands

cat - concatenate a file


Display the contents of a file with the concatenate command, cat.

Syntax
cat [options] [file]

Common Options
-n precede each line with a line number
-v display non-printing characters, except tabs, new-lines, and form-feeds
-e display $ at the end of each line (prior to new-line) (when used with -v option)

Examples
% cat filename

You can list a series of files on the command line, and cat will concatenate them, starting each in turn,
immediately after completing the previous one, e.g.:
% cat file1 file2 file3
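
Combined with output redirection, cat can also join several files into a new one (the chapter file names below are placeholders):

% cat chapter1 chapter2 chapter3 > book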

more, less, and pg - page through a file


more, less, and pg let you page through the contents of a file one screenful at a time. These may not
all be available on your Linux system. They allow you to back up through the previous pages and
search for words, etc.

Syntax
more [options] [+/pattern] [filename]
less [options] [+/pattern] [filename]
pg [options] [+/pattern] [filename]

Options
more less pg Action
-c -c -c clear display before displaying
-i ignore case
-w default default don’t exit at end of input, but prompt and wait
-lines -lines # of lines/screenful
+/pattern +/pattern +/pattern search for the pattern

Internal Controls
more displays (one screen at a time) the file requested
<space bar> to view next screen
<return> or <CR> to view one more line
q to quit viewing the file
h help
b go back up one screenful
/word search for word in the remainder of the file
See the man page for additional options
less similar to more; see the man page for options
pg the SVR4 equivalent of more (page)
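
For example, using the +/pattern form from the syntax above (report.txt and the search word are placeholders):

% less +/appendix report.txt

opens the file positioned at the first line containing "appendix".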

-------------------------------------------------------------------------------

echo - echo a statement


The echo command is used to repeat, or echo, the argument you give it back to the standard output
device. It normally ends with a line-feed, but you can specify an option to prevent this.

Syntax
echo [string]

Common Options
-n don’t print <new-line> (BSD, shell built-in)
\c don’t print <new-line> (SVR4)
\0n where n is the 8-bit ASCII character code (SVR4)
\t tab (SVR4)
\f form-feed (SVR4)
\n new-line (SVR4)
\v vertical tab (SVR4)

Examples
% echo Hello Class or echo "Hello Class"
To prevent the line feed:
% echo -n Hello Class or echo "Hello Class \c"
where the style to use in the last example depends on the echo command in use.
The \x options must be within pairs of single or double quotes, with or without other string characters.

-------------------------------------------------------------------------------

head - display the start of a file


head displays the head, or start, of the file.

Syntax
head [options] file

Common Options
-n number number of lines to display, counting from the top of the file
-number same as above

Examples
By default head displays the first 10 lines. You can display more with the "-n number", or
"-number" options, e.g., to display the first 40 lines:
% head -40 filename or head -n 40 filename

-------------------------------------------------------------------------------

more
Browses/displays files one screen at a time.

 Use h for help


 spacebar to page
 b for back
 q to quit
 /string to search for string

Example:
more sample.f

-------------------------------------------------------------------------------

tail - display the end of a file


tail displays the tail, or end, of the file.

Syntax
tail [options] file

Common Options
-number number of lines to display, counting from the bottom of the file

Examples
The default is to display the last 10 lines, but you can specify different line or byte numbers, or a
different starting point within the file. To display the last 30 lines of a file use the -number style:
% tail -30 filename
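
To start from a given line rather than counting from the bottom, GNU tail also accepts a + form (line 20 here is arbitrary):

% tail -n +20 filename

prints everything from line 20 through the end of the file.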
Filter / Text Processing Commands

grep, awk, sed

grep
The grep utility is used to search for generalized regular expressions occurring in Linux files. Regular
expressions, such as those shown above, are best specified in apostrophes (or single quotes) when
specified in the grep utility. The egrep utility provides searching capability using an extended set of
meta-characters. The syntax of the grep utility, some of the available options, and a few examples are
shown below.

Syntax
grep [options] regexp [file[s]]
Common Options
-i ignore case
-c report only a count of the number of lines containing matches, not the matches
themselves
-v invert the search, displaying only lines that do not match
-n display the line number along with the line on which a match was found
-s work silently, reporting only the final status:
0, for match(es) found
1, for no matches
2, for errors
-l list filenames, but not lines, in which matches were found

Examples
Consider the following file:
cat num.list
1 15 fifteen
2 14 fourteen
3 13 thirteen
4 12 twelve
5 11 eleven
6 10 ten
7 9 nine
8 8 eight
9 7 seven
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one
Here are some grep examples using this file. In the first we’ll search for the number 15:
> grep '15' num.list
1 15 fifteen
15 1 one

Now we’ll use the "-c" option to count the number of lines matching the search criterion:
> grep -c '15' num.list
2
Here we’ll be a little more general in our search, selecting for all lines containing the character 1
followed by either of 1, 2 or 5:
> grep '1[125]' num.list
 1 15 fifteen
 4 12 twelve
 5 11 eleven
11 5 five
12 4 four
15 1 one

Now we’ll search for all lines that begin with a space:
> grep '^ ' num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 4 12 twelve
 5 11 eleven
 6 10 ten
 7 9 nine
 8 8 eight
 9 7 seven

Or all lines that don’t begin with a space:


> grep '^[^ ]' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one

The latter could also be done by using the -v option with the original search string, e.g.:
> grep -v '^ ' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one

Here we search for all lines that begin with the characters 1 through 9:
> grep '^[1-9]' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one

This example will search for any instances of t followed by zero or more occurrences of e:
> grep 'te*' num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 4 12 twelve
 6 10 ten
 8 8 eight
13 3 three
14 2 two

This example will search for any instances of t followed by one or more occurrences of e:
> grep 'tee*' num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 6 10 ten

We can also take our input from a program, rather than a file. Here we report on any lines output by
the who program that begin with the letter l.
> who | grep '^l'
lcondron ttyp0 Dec 1 02:41 (lcondron-pc.acs.)
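
As a short sketch of egrep, which was mentioned above, the extended meta-character | separates alternatives. Using the same num.list file:
> egrep 'six|seven' num.list
 9 7 seven
10 6 six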
sed

The non-interactive, stream editor, sed, edits the input stream, line by line, making the specified
changes, and sends the result to standard output.

Syntax
sed [options] edit_command [file]
The format for the editing commands are:
[address1[,address2]][function][arguments]

where the addresses are optional and can be separated from the function by spaces or tabs. The
function is required. The arguments may be optional or required, depending on the function in use.

Line-number Addresses are decimal line numbers, starting from the first input line and incremented
by one for each. If multiple input files are given the counter continues cumulatively through the files.
The last input line can be specified with the "$" character.

Context Addresses are the regular expression patterns enclosed in slashes (/).

Commands can have 0, 1, or 2 comma-separated addresses with the following affects:


# of addresses lines affected
0 every line of input
1 only lines matching the address
2 first line matching the first address and all lines until, and including, the
line matching the second address. The process is then repeated on
subsequent lines.
Substitution functions allow context searches and are specified in the form:
s/regular_expression_pattern/replacement_string/flag

and should be quoted with single quotes (’) if additional options or functions are specified. These
patterns are identical to context addresses, except that while they are normally enclosed in slashes (/),
any normal character is allowed to function as the delimiter, other than <space> and <newline>.
The replacement string is not a regular expression pattern; characters do not have special meanings
here, except:

& substitute the string specified by regular_expression_pattern


\n substitute the nth string matched by regular_expression_pattern
enclosed in ’\(’, ’\)’ pairs.

These special characters can be escaped with a backslash (\) to remove their special meaning
Common Options
-e script edit script
-n don’t print the default output, but only those lines specified by p or s///p functions
-f script_file take the edit scripts from the file, script_file

Valid flags on the substitution function include:


g globally substitute the pattern (replace every occurrence on the line, not just the first)
p print the line

(d, which deletes matching lines, is a separate sed function rather than a substitution flag.)

Examples
This example changes all incidents of a comma (,) into a comma followed by a space (, ) when doing
output:
% cat filey | sed s/,/,\ /g

The following example removes all incidents of Jr preceded by a space ( Jr) in filey:
% cat filey | sed s/\ Jr//g

To perform multiple operations on the input precede each operation with the -e (edit) option and
quote the strings. For example, to filter for lines containing "Date: " and "From: " and replace these
without the colon (:), try:
sed -e ’s/Date: /Date /’ -e ’s/From: /From /’

To print only those lines of the file from the one beginning with "Date:" up to, and including, the one
beginning with "Name:" try:
sed -n ’/^Date:/,/^Name:/p’

To print only the first 10 lines of the input (a replacement for head):
sed -n 1,10p
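
The & and \n replacement strings described above can be illustrated with two short sketches (the sample text here is made up purely for demonstration):
% echo "version 2" | sed 's/[0-9]/(&)/'
version (2)
% echo "John Smith" | sed 's/\([A-Za-z]*\) \([A-Za-z]*\)/\2, \1/'
Smith, John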

awk, nawk, gawk


awk is a pattern scanning and processing language. Its name comes from the last initials of the three
authors: Alfred. V. Aho, Brian. W. Kernighan, and Peter. J. Weinberger. nawk is new awk, a newer
version of the program, and gawk is gnu awk, from the Free Software Foundation. Each version is a
little different. Here we’ll confine ourselves to simple examples which should be the same for all
versions. On some OSs awk is really nawk.

awk searches its input for patterns and performs the specified operation on each line, or fields of the
line, that contain those patterns. You can specify the pattern matching statements for awk either on
the command line, or by putting them in a file and using the -f program_file option.

Syntax
awk program [file]
where program is composed of one or more:
pattern { action }

fields. Each input line is checked for a pattern match with the indicated action being taken on a
match. This continues through the full sequence of patterns, then the next line of input is checked.

Input is divided into records and fields. The default record separator is <newline>, and the variable
NR keeps the record count. The default field separator is whitespace, spaces and tabs, and the
variable NF keeps the field count. Input field, FS, and record, RS, separators can be set at any time to
match any single character. Output field, OFS, and record, ORS, separators can also be changed to
any single character, as desired. $n, where n is an integer, is used to represent the nth field of the
input record, while $0 represents the entire input record.

BEGIN and END are special patterns matching the beginning of input, before the first field is read,
and the end of input, after the last field is read, respectively.

Printing is allowed through the print, and formatted print, printf, statements.

Patterns may be regular expressions, arithmetic relational expressions, string-valued expressions,


and boolean combinations of any of these. For the latter the patterns can be combined with the
boolean operators below, using parentheses to define the combination:
|| or
&& and
! not

Comma separated patterns define the range for which the pattern is applicable, e.g.:
/first/,/last/

selects all lines starting with the one containing first, and continuing inclusively, through the one
containing last.

To select lines 15 through 20 use the pattern range:


NR == 15, NR == 20

Regular expressions must be enclosed with slashes (/) and meta-characters can be escaped with the
backslash (\). Regular expressions can be grouped with the operators:
| or, to separate alternatives
+ one or more
? zero or one

A regular expression match can be either of:


~ contains the expression
!~ does not contain the expression

So the program:
$1 ~ /[Ff]rank/

is true if the first field, $1, contains "Frank" or "frank" anywhere within the field. To match a field
identical to "Frank" or "frank" use:
$1 ~ /^[Ff]rank$/

Relational expressions are allowed using the relational operators:


< less than
<= less than or equal to
== equal to
>= greater than or equal to
!= not equal to
> greater than

awk variables are not declared as strings or numbers in advance. If neither operand is known to be
numeric, then string comparisons are performed. Otherwise, a numeric comparison is done. In the
absence of any information to the contrary, a string comparison is done, so that:
$1 > $2
will compare the string values. To ensure a numerical comparison, do something similar to:
( $1 + 0 ) > $2
The mathematical functions exp, log and sqrt are built in.

Some other built-in functions include:


index(s,t) returns the position of string s where t first occurs, or 0 if it doesn’t
length(s) returns the length of string s
substr(s,m,n) returns the n-character substring of s, beginning at position m

Arrays are declared automatically when they are used, e.g.:


arr[i] = $1
assigns the first field of the current input record to the ith element of the array.

Flow control statements using if-else, while, and for are allowed with C type syntax:
for (i=1; i <= NF; i++) {actions}
while (i<=NF) {actions}
if (i<NF) {actions}

Common Options
-f program_file read the commands from program_file
-Fc use character c as the field separator character
Examples
% cat filex | tr a-z A-Z | awk -F: '{printf ("7R %-6s %-9s %-24s \n",$1,$2,$3)}'>upload.file

This pipeline cats filex, which is formatted as follows:


nfb791:99999999:smith
7ax791:999999999:jones
8ab792:99999999:chen
8aa791:999999999:mcnulty
changes all lower case characters to upper case with the tr utility, and formats the file into the
following which is written into the file upload.file:
7R NFB791 99999999 SMITH
7R 7AX791 999999999 JONES
7R 8AB792 99999999 CHEN
7R 8AA791 999999999 MCNULTY
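
As a further sketch of fields and relational patterns, the following one-liner prints the login name (first field) of every account in /etc/passwd whose numeric third field (the UID) is 1000 or higher (the 1000 cutoff for regular users is an assumption that varies by system):

% awk -F: '$3 >= 1000 { print $1 }' /etc/passwd

Here -F: sets the field separator to a colon, the pattern selects the lines, and the action prints the first field of each selected line.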

cut - select parts of a line

The cut command allows a portion of a file to be extracted for another use.
Syntax

cut [options] file

Common Options
-c character_list character positions to select (first character is 1)
-d delimiter field delimiter (defaults to <TAB>)
-f field_list fields to select (first field is 1)
Both the character and field lists may contain comma-separated or blank-character-separated
numbers (in increasing order), and may contain a hyphen (-) to indicate a range. A number omitted
before the hyphen (e.g. -5) means the range starts with the first character or field, and a number omitted
after it (e.g. 5-) means the range ends with the last character or field. Blank-character-separated lists must
be enclosed in quotes. The field delimiter should be enclosed in quotes if it has special meaning to the
shell, e.g. when specifying a <space> or <TAB> character.

Examples
In these examples we will use the file users:

jdoe John Doe 4/15/96


lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96

If you only wanted the username and the user's real name, the cut command could be used to get only
that information:

% cut -f 1,2 users


jdoe John Doe
lsmith Laura Smith
pchen Paul Chen
jhsu Jake Hsu
sphilip Sue Phillip

The cut command can also be used with other options. The -c option allows characters to be the
selected cut. To select the first 4 characters:

% cut -c 1-4 users


This yields:
jdoe
lsmi
pche
jhsu
sphi
thus cutting out only the first 4 characters of each line.
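
The -d option changes the field delimiter. A minimal sketch using the colon-delimited /etc/passwd file to extract each login name and shell (output will vary by system):

% cut -d: -f 1,7 /etc/passwd
root:/bin/bash
daemon:/sbin/nologin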

paste - merge files

The paste command allows two files to be combined side-by-side. The default delimiter between the
columns in a paste is a tab, but options allow other delimiters to be used.

Syntax
paste [options] file1 file2

Common Options
-d list list of delimiting characters
-s concatenate lines
The list of delimiters may include a single character such as a comma; a quoted string, such as a
space; or any of the following escape sequences:
\n <newline> character
\t <tab> character
\\ backslash character
\0 empty string (non-null character)

It may be necessary to quote delimiters with special meaning to the shell.


A hyphen (-) in place of a file name is used to indicate that field should come from standard input.

Examples
Given the file users:
jdoe John Doe 4/15/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
and the file phone:
John Doe 555-6634
Laura Smith 555-3382
Paul Chen 555-0987
Jake Hsu 555-1235
Sue Phillip 555-7623

the paste command can be used in conjunction with the cut command to create a new file, listing, that
includes the username, real name, last login, and phone number of all the users. First, extract the phone
numbers into a temporary file, temp.file:
% cut -f2 phone > temp.file
555-6634
555-3382
555-0987
555-1235
555-7623
The result can then be pasted to the end of each line in users and directed to the new file, listing:
% paste users temp.file > listing
jdoe John Doe 4/15/96 555-6634
lsmith Laura Smith 3/12/96 555-3382
pchen Paul Chen 1/5/96 555-0987
jhsu Jake Hsu 4/17/96 555-1235
sphilip Sue Phillip 4/2/96 555-7623

This could also have been done on one line without the temporary file as:
% cut -f2 phone | paste users - > listing

with the same results. In this case the hyphen (-) is acting as a placeholder for an input field (namely,
the output of the cut command).

sort - sort file contents

The sort command is used to order the lines of a file. Various options can be used to choose the order as
well as the field on which a file is sorted. Without any options, the sort compares entire lines in the file
and outputs them in ASCII order (numbers first, upper case letters, then lower case letters).

Syntax
sort [options] [+pos1 [ -pos2 ]] file

Common Options
-b ignore leading blanks (<space> & <tab>) when determining starting and
ending characters for the sort key
-d dictionary order, only letters, digits, <space> and <tab> are significant
-f fold upper case to lower case
-k keydef sort on the defined keys (not available on all systems)
-i ignore non-printable characters
-n numeric sort
-o outfile output file
-r reverse the sort
-t char use char as the field separator character
-u unique; omit multiple copies of the same line (after the sort)
+pos1 [-pos2] (old style) provides functionality similar to the "-k keydef" option.

For the +/-position entries pos1 is the starting word number, beginning with 0 and pos2 is the ending
word number. When -pos2 is omitted the sort field continues through the end of the line. Both pos1 and
pos2 can be written in the form w.c, where w is the word number and c is the character within the word.
For c 0 specifies the delimiter preceding the first character, and 1 is the first character of the word. These
entries can be followed by type modifiers, e.g. n for numeric, b to skip blanks, etc.

The keydef field of the "-k" option has the syntax:


start_field [type] [ ,end_field [type] ]

where:
start_field, end_field define the keys to restrict the sort to a portion of the line
type modifies the sort, valid modifiers are given the single characters (bdfiMnr)
from the similar sort options, e.g. a type b is equivalent to "-b", but applies
only to the specified field

Examples
In the file users:
jdoe John Doe 4/15/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
sort users yields the following:
jdoe John Doe 4/15/96
jhsu Jake Hsu 4/17/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
sphilip Sue Phillip 4/2/96

If, however, a listing sorted by last name is desired, use the option to specify which field to sort on (fields
are numbered starting at 0):
% sort +2 users
pchen Paul Chen 1/5/96
jdoe John Doe 4/15/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
lsmith Laura Smith 3/12/96
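
On systems that support the newer "-k keydef" syntax, the same last-name sort can be written as (a sketch):

% sort -k 3 users

which starts the sort key at the third whitespace-separated field and gives the same ordering as the +2 example above.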

To sort in reverse order:


% sort -r users
sphilip Sue Phillip 4/2/96
pchen Paul Chen 1/5/96
lsmith Laura Smith 3/12/96
jhsu Jake Hsu 4/17/96
jdoe John Doe 4/15/96

A particularly useful sort option is the -u option, which eliminates any duplicate entries in a file while
ordering the file. For example, the file todays.logins:

sphillip
jchen
jdoe
lkeres
jmarsch
ageorge
lkeres
proy
jchen

shows a listing of each username that logged into the system today. If we want to know how many
unique users logged into the system today, using sort with the -u option will list each user only once.
(The command can then be piped into "wc -l" to get a number):

% sort -u todays.logins
ageorge
jchen
jdoe
jmarsch
lkeres
proy
sphillip

uniq - remove duplicate lines

uniq filters duplicate adjacent lines from a file.

Syntax
uniq [options] [+|-n] file [file.new]

Common Options
-d one copy of only the repeated lines
-u select only the lines not repeated
+n ignore the first n characters
-s n same as above (SVR4 only)
-n skip the first n fields, including any blanks (<space> & <tab>)
-f fields same as above (SVR4 only)

Examples
Consider the following file and example, in which uniq removes the 4th line from file and places the
result in a file called file.new.

$ cat file
1 2 3 6
4 5 3 6
7 8 9 0
7 8 9 0

$ uniq file file.new

$ cat file.new
1 2 3 6
4 5 3 6
7 8 9 0

Below, the -n option of the uniq command is used to skip the first 2 fields in file, and filter out lines
which are duplicates from the 3rd field onward.

$ uniq -2 file
1 2 3 6
7 8 9 0
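
With the SVR4-style option listed above, the same result can be obtained with (a sketch):

$ uniq -f 2 file
1 2 3 6
7 8 9 0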

tee - copy command output


tee sends its standard input to the specified files and also to standard output. It’s often used in command pipelines.

Syntax
tee [options] [file[s]]
Common Options
-a append the output to the files
-i ignore interrupts
Examples
In this first example the output of who is displayed on the screen and stored in the file users.file:
> who | tee users.file
condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

> cat users.file


condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

In this next example the output of who is sent to the files users.a and users.b. It is also piped to the
wc command, which reports the line count.
> who | tee users.a users.b | wc -l
3

> cat users.a


condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

> cat users.b


condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)

In the following example a long directory listing is sent to the file files.long. It is also piped to the
grep command which reports which files were last modified in August.
> ls -l | tee files.long |grep Aug
1 drwxr-sr-x 2 condron 512 Aug 8 1995 News/
2 -rw-r--r-- 1 condron 1076 Aug 8 1995 magnus.cshrc
2 -rw-r--r-- 1 condron 1252 Aug 8 1995 magnus.login

> cat files.long


total 34
2 -rw-r--r-- 1 condron 1253 Oct 10 1995 #.login#
1 drwx------ 2 condron 512 Oct 17 1995 Mail/
1 drwxr-sr-x 2 condron 512 Aug 8 1995 News/
5 -rw-r--r-- 1 condron 4299 Apr 21 00:18 editors.txt
2 -rw-r--r-- 1 condron 1076 Aug 8 1995 magnus.cshrc
2 -rw-r--r-- 1 condron 1252 Aug 8 1995 magnus.login
7 -rw-r--r-- 1 condron 6436 Apr 21 23:50 resources.txt
4 -rw-r--r-- 1 condron 3094 Apr 18 18:24 telnet.ftp
1 drwxr-sr-x 2 condron 512 Apr 21 23:56 uc/
1 -rw-r--r-- 1 condron 1002 Apr 22 00:14 uniq.tee.txt
1 -rw-r--r-- 1 condron 1001 Apr 20 15:05 uniq.tee.txt~
7 -rw-r--r-- 1 condron 6194 Apr 15 20:18 Linuxgrep.txt
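
The -a option appends to the named files instead of overwriting them. As a minimal sketch, the following prints the current date on the screen and also appends the same line to the end of files.long:
> date | tee -a files.long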
Finding System Information:

uname -a
cat /etc/redhat-release
dmidecode

uname:
Sometimes it is necessary to quickly determine details like the kernel name, version, hostname, etc. of the
Linux box you are using.

Even though you can find all these details in the respective files under the /proc filesystem, it is easier
to use the uname utility to get this information quickly.

The basic syntax of the uname command is:

uname [OPTION]...

Now let's look at some examples that demonstrate the usage of the ‘uname’ command.
uname without any option

When the ‘uname’ command is run without any option it prints just the kernel name. The output
below shows that it is the ‘Linux’ kernel that is used by this system.

$ uname
Linux

You can also use uname -s, which also displays the kernel name.

$ uname -s
Linux

Get the network node host name using -n option

Use uname -n option to fetch the network node host name of your Linux box.

$ uname -n
dev-server

The output above will be the same as the output of the hostname command.
Get kernel release using -r option
uname command can also be used to fetch the kernel release information. The option -r can be used for
this purpose.

$ uname -r
2.6.32-100.28.5.el6.x86_64

Get the kernel version using -v option

uname command can also be used to fetch the kernel version information. The option -v can be used for
this purpose.

$ uname -v
#1 SMP Wed Feb 2 18:40:23 EST 2011

Get the machine hardware name using -m option

uname command can also be used to fetch the machine hardware name. The option -m can be used for
this purpose. The output x86_64 below indicates that this is a 64-bit system.

$ uname -m
x86_64

Get the processor type using -p option

uname command can also be used to fetch the processor type information. The option -p can be used for
this purpose. If the uname command is not able to fetch the processor type information then it produces
‘unknown’ in the output.

$ uname -p
x86_64

Sometimes you might see ‘unknown’ as the output of this command, if uname was not able to fetch the
information on processor type.
Get the hardware platform using -i option

uname command can also be used to fetch the hardware platform information. The option -i can be used
for this purpose. If the uname command is not able to fetch the hardware platform information then it
produces ‘unknown’ in the output.

$ uname -i
x86_64

Sometimes you might see ‘unknown’ as the output of this command, if uname was not able to fetch the
information about the platform.
Get the operating system name using the -o option

uname command can also be used to fetch the operating system name. The option -o can be used for this
purpose.

For example :

$ uname -o
GNU/Linux
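
Finally, the -a option prints all of the above pieces of information on a single line. On the example system used above, the output would look roughly like this (a sketch assembled from the individual fields shown earlier):

$ uname -a
Linux dev-server 2.6.32-100.28.5.el6.x86_64 #1 SMP Wed Feb 2 18:40:23 EST 2011 x86_64 x86_64 x86_64 GNU/Linux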

cat /etc/redhat-release:
 This file provides information about your system distribution and its version
 On distributions other than CentOS or Red Hat you can run cat /etc/*release* to see similar information

Dmidecode:

dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-
readable format. This table contains a description of the system's hardware components, as well as other
useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can
retrieve this information without having to probe for the actual hardware.

Take a look at

man dmidecode

to find out all options. The most common option is the --type switch which takes one or more of the
following keywords:

bios, system, baseboard, chassis, processor, memory, cache, connector, slot

You can as well specify one or more of the following numbers:

Type Information
----------------------------------------
0 BIOS
1 System
2 Base Board
3 Chassis
4 Processor
5 Memory Controller
6 Memory Module
7 Cache
8 Port Connector
9 System Slots
10 On Board Devices
11 OEM Strings
12 System Configuration Options
13 BIOS Language
14 Group Associations
15 System Event Log
16 Physical Memory Array
17 Memory Device
18 32-bit Memory Error
19 Memory Array Mapped Address
20 Memory Device Mapped Address
21 Built-in Pointing Device
22 Portable Battery
23 System Reset
24 Hardware Security
25 System Power Controls
26 Voltage Probe
27 Cooling Device
28 Temperature Probe
29 Electrical Current Probe
30 Out-of-band Remote Access
31 Boot Integrity Services
32 System Boot
33 64-bit Memory Error
34 Management Device
35 Management Device Component
36 Management Device Threshold Data
37 Memory Channel
38 IPMI Device
39 Power Supply

Each keyword is equivalent to a list of type numbers:

Keyword Types
------------------------------
bios 0, 13
system 1, 12, 15, 23, 32
baseboard 2, 10
chassis 3
processor 4
memory 5, 6, 16, 17
cache 7
connector 8
slot 9
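
So, for example, the following two commands should produce the same output, since the bios keyword expands to types 0 and 13 (a sketch; the --type option may be repeated to combine types):

dmidecode --type bios
dmidecode --type 0 --type 13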

Here are a few sample outputs from one of my servers:

dmidecode --type bios

server1:/home/admin# dmidecode --type bios


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0000, DMI type 0, 24 bytes


BIOS Information
Vendor: American Megatrends Inc.
Version: V1.5B2
Release Date: 10/31/2007
Address: 0xF0000
Runtime Size: 64 kB
ROM Size: 1024 kB
Characteristics:
ISA is supported
PCI is supported
PNP is supported
APM is supported
BIOS is upgradeable
BIOS shadowing is allowed
ESCD support is available
Boot from CD is supported
Selectable boot is supported
BIOS ROM is socketed
EDD is supported
5.25"/1.2 MB floppy services are supported (int 13h)
3.5"/720 KB floppy services are supported (int 13h)
3.5"/2.88 MB floppy services are supported (int 13h)
Print screen service is supported (int 5h)
8042 keyboard services are supported (int 9h)
Serial services are supported (int 14h)
Printer services are supported (int 17h)
CGA/mono video services are supported (int 10h)
ACPI is supported
USB legacy is supported
LS-120 boot is supported
ATAPI Zip drive boot is supported
BIOS boot specification is supported
Targeted content distribution is supported
BIOS Revision: 8.14

Handle 0x0028, DMI type 13, 22 bytes


BIOS Language Information
Installable Languages: 1
en|US|iso8859-1
Currently Installed Language: en|US|iso8859-1

server1:/home/admin#

dmidecode --type system

server1:/home/admin# dmidecode --type system


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0001, DMI type 1, 27 bytes


System Information
Manufacturer: MICRO-STAR INTERANTIONAL CO.,LTD
Product Name: MS-7368
Version: 1.0
Serial Number: To Be Filled By O.E.M.
UUID: Not Present
Wake-up Type: Power Switch
SKU Number: To Be Filled By O.E.M.
Family: To Be Filled By O.E.M.

Handle 0x0027, DMI type 12, 5 bytes


System Configuration Options
Option 1: To Be Filled By O.E.M.

server1:/home/admin#

dmidecode --type baseboard

server1:/home/admin# dmidecode --type baseboard


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0002, DMI type 2, 15 bytes


Base Board Information
Manufacturer: MICRO-STAR INTERANTIONAL CO.,LTD
Product Name: MS-7368
Version: 1.0
Serial Number: To be filled by O.E.M.
Asset Tag: To Be Filled By O.E.M.
Features:
Board is a hosting board
Board is replaceable
Location In Chassis: To Be Filled By O.E.M.
Chassis Handle: 0x0003
Type: Motherboard
Contained Object Handles: 0

Handle 0x0025, DMI type 10, 6 bytes


On Board Device Information
Type: Video
Status: Enabled
Description: To Be Filled By O.E.M.

server1:/home/admin#

dmidecode --type chassis

server1:/home/admin# dmidecode --type chassis


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0003, DMI type 3, 21 bytes


Chassis Information
Manufacturer: To Be Filled By O.E.M.
Type: Desktop
Lock: Not Present
Version: To Be Filled By O.E.M.
Serial Number: To Be Filled By O.E.M.
Asset Tag: To Be Filled By O.E.M.
Boot-up State: Safe
Power Supply State: Safe
Thermal State: Safe
Security Status: None
OEM Information: 0x00000000
Heigth: Unspecified
Number Of Power Cords: 1
Contained Elements: 0

server1:/home/admin#

dmidecode --type processor

server1:/home/admin# dmidecode --type processor


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0004, DMI type 4, 40 bytes


Processor Information
Socket Designation: CPU 1
Type: Central Processor
Family: Other
Manufacturer: AMD
ID: B2 0F 06 00 FF FB 8B 17
Version: AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
Voltage: 1.5 V
External Clock: 200 MHz
Max Speed: 2800 MHz
Current Speed: 2900 MHz
Status: Populated, Enabled
Upgrade: Other
L1 Cache Handle: 0x0005
L2 Cache Handle: 0x0006
L3 Cache Handle: 0x0007
Serial Number: To Be Filled By O.E.M.
Asset Tag: To Be Filled By O.E.M.
Part Number: To Be Filled By O.E.M.

server1:/home/admin#

dmidecode --type memory


server1:/home/admin# dmidecode --type memory
# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0008, DMI type 5, 20 bytes


Memory Controller Information
Error Detecting Method: 64-bit ECC
Error Correcting Capabilities:
None
Supported Interleave: One-way Interleave
Current Interleave: One-way Interleave
Maximum Memory Module Size: 512 MB
Maximum Total Memory Size: 1024 MB
Supported Speeds:
70 ns
60 ns
Supported Memory Types:
SIMM
DIMM
SDRAM
Memory Module Voltage: 3.3 V
Associated Memory Slots: 2
0x0009
0x000A
Enabled Error Correcting Capabilities:
None

Handle 0x0009, DMI type 6, 12 bytes


Memory Module Information
Socket Designation: DIMM0
Bank Connections: 0 5
Current Speed: 161 ns
Type: ECC DIMM
Installed Size: 1024 MB (Double-bank Connection)
Enabled Size: 1024 MB (Double-bank Connection)
Error Status: OK

Handle 0x000A, DMI type 6, 12 bytes


Memory Module Information
Socket Designation: DIMM1
Bank Connections: 0 5
Current Speed: 163 ns
Type: ECC DIMM
Installed Size: 1024 MB (Double-bank Connection)
Enabled Size: 1024 MB (Double-bank Connection)
Error Status: OK

Handle 0x0029, DMI type 16, 15 bytes


Physical Memory Array
Location: System Board Or Motherboard
Use: System Memory
Error Correction Type: None
Maximum Capacity: 8 GB
Error Information Handle: Not Provided
Number Of Devices: 2

Handle 0x002B, DMI type 17, 27 bytes


Memory Device
Array Handle: 0x0029
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 72 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM0
Bank Locator: BANK0
Type: DDR2
Type Detail: Synchronous
Speed: 333 MHz (3.0 ns)
Manufacturer: Manufacturer0
Serial Number: SerNum0
Asset Tag: AssetTagNum0
Part Number: PartNum0

Handle 0x002D, DMI type 17, 27 bytes


Memory Device
Array Handle: 0x0029
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 72 bits
Size: 1024 MB
Form Factor: DIMM
Set: None
Locator: DIMM1
Bank Locator: BANK1
Type: DDR2
Type Detail: Synchronous
Speed: 333 MHz (3.0 ns)
Manufacturer: Manufacturer1
Serial Number: SerNum1
Asset Tag: AssetTagNum1
Part Number: PartNum1

server1:/home/admin#

dmidecode --type cache

server1:/home/admin# dmidecode --type cache


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0005, DMI type 7, 19 bytes


Cache Information
Socket Designation: L1-Cache
Configuration: Enabled, Not Socketed, Level 1
Operational Mode: Varies With Memory Address
Location: Internal
Installed Size: 256 KB
Maximum Size: 256 KB
Supported SRAM Types:
Pipeline Burst
Installed SRAM Type: Pipeline Burst
Speed: Unknown
Error Correction Type: Single-bit ECC
System Type: Data
Associativity: 4-way Set-associative

Handle 0x0006, DMI type 7, 19 bytes


Cache Information
Socket Designation: L2-Cache
Configuration: Enabled, Not Socketed, Level 2
Operational Mode: Varies With Memory Address
Location: Internal
Installed Size: 1024 KB
Maximum Size: 1024 KB
Supported SRAM Types:
Pipeline Burst
Installed SRAM Type: Pipeline Burst
Speed: Unknown
Error Correction Type: Single-bit ECC
System Type: Unified
Associativity: 4-way Set-associative

Handle 0x0007, DMI type 7, 19 bytes


Cache Information
Socket Designation: L3-Cache
Configuration: Disabled, Not Socketed, Level 3
Operational Mode: Unknown
Location: Internal
Installed Size: 0 KB
Maximum Size: 0 KB
Supported SRAM Types:
Unknown
Installed SRAM Type: Unknown
Speed: Unknown
Error Correction Type: Unknown
System Type: Unknown
Associativity: Unknown

server1:/home/admin#

dmidecode --type connector

server1:/home/admin# dmidecode --type connector


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x000B, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J1A1
Internal Connector Type: None
External Reference Designator: PS2Mouse
External Connector Type: PS/2
Port Type: Mouse Port

Handle 0x000C, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J1A1
Internal Connector Type: None
External Reference Designator: Keyboard
External Connector Type: PS/2
Port Type: Keyboard Port

Handle 0x000D, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2A2
Internal Connector Type: None
External Reference Designator: USB1
External Connector Type: Access Bus (USB)
Port Type: USB

Handle 0x000E, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2A2
Internal Connector Type: None
External Reference Designator: USB2
External Connector Type: Access Bus (USB)
Port Type: USB

Handle 0x000F, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J4A1
Internal Connector Type: None
External Reference Designator: LPT 1
External Connector Type: DB-25 male
Port Type: Parallel Port ECP/EPP

Handle 0x0010, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2A1
Internal Connector Type: None
External Reference Designator: COM A
External Connector Type: DB-9 male
Port Type: Serial Port 16550A Compatible

Handle 0x0011, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6A1
Internal Connector Type: None
External Reference Designator: Audio Mic In
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port

Handle 0x0012, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6A1
Internal Connector Type: None
External Reference Designator: Audio Line In
External Connector Type: Mini Jack (headphones)
Port Type: Audio Port

Handle 0x0013, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6B1 - AUX IN
Internal Connector Type: On Board Sound Input From CD-ROM
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Audio Port

Handle 0x0014, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6B2 - CDIN
Internal Connector Type: On Board Sound Input From CD-ROM
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Audio Port

Handle 0x0015, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6J2 - PRI IDE
Internal Connector Type: On Board IDE
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0016, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6J1 - SEC IDE
Internal Connector Type: On Board IDE
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0017, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J4J1 - FLOPPY
Internal Connector Type: On Board Floppy
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0018, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9H1 - FRONT PNL
Internal Connector Type: 9 Pin Dual Inline (pin 10 cut)
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0019, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J1B1 - CHASSIS REAR FAN
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001A, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2F1 - CPU FAN
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001B, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J8B4 - FRONT FAN
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001C, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9G2 - FNT USB
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001D, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J6C3 - FP AUD
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001E, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9G1 - CONFIG
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x001F, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J8C1 - SCSI LED
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0020, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9J2 - INTRUDER
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0021, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J9G4 - ITP
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

Handle 0x0022, DMI type 8, 9 bytes


Port Connector Information
Internal Reference Designator: J2H1 - MAIN POWER
Internal Connector Type: Other
External Reference Designator: Not Specified
External Connector Type: None
Port Type: Other

server1:/home/admin#
dmidecode --type slot

server1:/home/admin# dmidecode --type slot


# dmidecode 2.8
SMBIOS 2.5 present.

Handle 0x0023, DMI type 9, 13 bytes


System Slot Information
Designation: AGP
Type: 32-bit AGP 4x
Current Usage: In Use
Length: Short
ID: 0
Characteristics:
3.3 V is provided
Opening is shared
PME signal is supported

Handle 0x0024, DMI type 9, 13 bytes


System Slot Information
Designation: PCI1
Type: 32-bit PCI
Current Usage: Available
Length: Short
ID: 1
Characteristics:
3.3 V is provided
Opening is shared
PME signal is supported
Linux Permissions Cheat Sheet

I put this cheat sheet together in hopes that it may be used as a helpful reference.

Permissions

Permissions on Unix and other systems like it are split into three classes:

 User
 Group
 Other

Files and directories are owned by a user.

Files and directories are also assigned to a group.

If a user is not the owner, nor a member of the group, then they are classified as other.

Changing permissions

In order to change permissions, we need to first understand the two notations of permissions.

1. Symbolic notation
2. Octal notation

Symbolic notation

Symbolic notation is what you'd see on the left-hand side if you ran a command like ls -l in a terminal.
The first character in symbolic notation indicates the file type and isn't related to permissions in any way. The
remaining characters are in sets of three, each representing a class of permissions.

The first class is the user class. The second class is the group class. The third class is the other class.

Each of the three characters for a class represents the read, write and execute permissions.

 r will be displayed if reading is permitted


 w will be displayed if writing is permitted
 x will be displayed if execution is permitted
 - will be displayed in the place of r, w, and x, if the respective permission is not permitted

Here are some examples of symbolic notation:


 -rwxr--r--: A regular file whose user class has read/write/execute, group class has only read
permissions, other class has only read permissions
 drw-rw-r--: A directory whose user class has read/write permissions, group class has read/write
permissions, other class has only read permissions
 crwxrw-r--: A character special file whose user has read/write/execute permissions, group class has
read/write permissions, other class has only read permissions

Octal notation

Octal (base-8) notation consists of at least 3 digits (sometimes 4; the extra left-most digit represents the
setuid bit, the setgid bit, and the sticky bit).

Each of the three right-most digits are the sum of its component bits in the binary numeral system.

For example:

 The read bit (r in symbolic notation) adds 4 to its total


 The write bit (w in symbolic notation) adds 2 to its total
 The execute bit (x in symbolic notation) adds 1 to its total

So what number would you use if you wanted to set a permission to read and write? 4 + 2 = 6.

Symbolic notation   Octal notation   Plain English

-rwxr--r--          0744             user class can read/write/execute; group class can read; other class can read

-rw-rw-r--          0664             user class can read/write; group class can read/write; other class can read

-rwxrwxr--          0774             user class can read/write/execute; group class can read/write/execute; other class can read

----------          0000             None of the classes have permissions

-rwx------          0700             user class can read/write/execute; group class has no permissions; other class has no permissions

-rwxrwxrwx          0777             All classes can read/write/execute

-rw-rw-rw-          0666             All classes can read/write

-r-xr-xr-x          0555             All classes can read/execute

-r--r--r--          0444             All classes can read

--wx-wx-wx          0333             All classes can write/execute

--w--w--w-          0222             All classes can write

---x--x--x          0111             All classes can execute

All together now

Let's use the examples from the symbolic notation section and show how they'd convert to octal notation:

 -rwxr--r-- converts to 0744
 drw-rw-r-- converts to 0664
 crwxrw-r-- converts to 0764

(The leading d or c is the file-type character and has no octal equivalent.)

CHMOD commands

Now that we have a better understanding of permissions and what all of these letters and numbers mean, let's take
a look at how we can use the chmod command in our terminal to change permissions to anything we'd like!

Permission (symbolic notation)   CHMOD command                            Description

-rwxrwxrwx                       chmod 0777 filename; chmod -R 0777 dir   All classes can read/write/execute

-rwxr--r--                       chmod 0744 filename; chmod -R 0744 dir   user can read/write/execute; all others can read

-rw-r--r--                       chmod 0644 filename; chmod -R 0644 dir   user class can read/write; all others can read

-rw-rw-rw-                       chmod 0666 filename; chmod -R 0666 dir   All classes can read/write

These are just some examples. Using your new-found knowledge, you can set any permissions you'd like! Just be
careful and make sure you don't break your system.
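
chmod also accepts symbolic modes, which add (+), remove (-), or set (=) permissions for the user (u), group (g), other (o), or all (a) classes without specifying the full octal value. A minimal sketch (the file names here are just placeholders):

chmod u+x script.sh - give the user class execute permission
chmod go-w notes.txt - remove write permission from the group and other classes
chmod a=r shared.txt - set read-only for all classes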
Access Control Lists(ACL) in Linux

What is ACL?
An access control list (ACL) provides an additional, more flexible permission mechanism for file systems. It
is designed to supplement the standard UNIX file permissions. An ACL allows you to give permissions to any
user or group for any disk resource.

Use of ACL:
Think of a scenario in which a particular user is not a member of a group you created, but you still want to
give that user some read or write access. How can you do that without making the user a member of the group?
This is where Access Control Lists come in; ACLs let us do exactly that.

Basically, ACLs are used to provide a flexible permission mechanism in Linux.

setfacl and getfacl are used for setting up ACL and showing ACL respectively.

For example :
getfacl test/seinfeld.txt

Output:
# file: test/seinfeld.txt
# owner: iafzal
# group: iafzal
user::rw-
group::rw-
other::r--

List of commands for setting up ACL :


1) To add permission for a user
setfacl -m "u:user:permissions" /path/to/file

2) To add permissions for a group


setfacl -m "g:group:permissions" /path/to/file

3) To allow all files or directories to inherit ACL entries from the directory it is within
setfacl -dm "entry" /path/to/dir

4) To remove a specific entry


setfacl -x "entry" /path/to/file

5) To remove all entries


setfacl -b /path/to/file
For example :
setfacl -m u:iafzal:rwx test/seinfeld.txt

Modifying ACL using setfacl :


To add permissions for a user (user is either the user name or ID):
# setfacl -m "u:user:permissions"

To add permissions for a group (group is either the group name or ID):
# setfacl -m "g:group:permissions"

To allow all files or directories to inherit ACL entries from the directory it is within:
# setfacl -dm "entry"

Example :

setfacl -m u:iafzal:r-x test/seinfeld.txt

setfacl and getfacl

View ACL :
To show permissions :
# getfacl filename

Observe the difference between output of getfacl command before and after setting up ACL permissions
using setfacl command.

Remove ACL:
If you want to remove the ACL permissions that were set, use the setfacl command with the -b option.
For example:

# setfacl -b test/seinfeld.txt

If you compare the output of the getfacl command before and after using setfacl with the -b option, you
can observe that there is no longer an entry for user iafzal in the later output.

You can also check whether any extra permissions are set through ACL using the ls -l command.

If there is an extra “+” sign after the permissions, for example -rw-rwxr--+, it indicates that extra ACL
permissions are set, which you can inspect with the getfacl command.
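
As a minimal before-and-after sketch using the same hypothetical file and user as above, setting an ACL entry and then viewing it would look like this:

# setfacl -m u:iafzal:rwx test/seinfeld.txt
# getfacl test/seinfeld.txt
# file: test/seinfeld.txt
# owner: iafzal
# group: iafzal
user::rw-
user:iafzal:rwx
group::rw-
mask::rwx
other::r--

Note the new user:iafzal:rwx entry and the mask line that getfacl adds once ACL entries exist.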
vi Commands

Entering vi

vi filename - The filename can be the name of an


existing file or the name of the file
you want to create.
view filename - Starts vi in "read only" mode. Allows
you to look at a file without the risk
of altering its contents.

Exiting vi

:q - quit - if you have made any changes, vi


will warn you of this, and you'll need
to use one of the other quits.
:w - write edit buffer to disk
:w filename - write edit buffer to disk as filename
:wq - write edit buffer to disk and quit
ZZ - write edit buffer to disk and quit
:q! - quit without writing edit buffer to disk

Positioning within text

By character
left arrow - left one character
right arrow - right one character
backspace - left one character
space - right one character
h - left one character
l - right one character

By word
w - beginning of next word
nw - beginning of nth next word
b - back to previous word
nb - back to nth previous word
e - end of next word
ne - end of nth next word
By line
down arrow - down one line
up arrow - up one line
j - down one line
k - up one line
+ - beginning of next line down
- - beginning of previous line up
0 - first column of current line (zero)
^ - first character of current line
$ - last character of current line

By block
( - beginning of sentence
) - end of sentence
{ - beginning of paragraph
} - end of paragraph

By screen
CTRL-f - forward 1 screen
CTRL-b - backward 1 screen
CTRL-d - down 1/2 screen
CTRL-u - up 1/2 screen
H - top line on screen
M - mid-screen
L - last line on screen

Within file
nG - line n within file
1G - first line in file
G - last line in file


Inserting text

a - append text after cursor *


A - append text at end of line *
i - insert text before cursor *
I - insert text at beginning of line *
o - open a blank line after the current
line for text input *
O - open a blank line before the current
line for text input *

* Note: hit ESC (escape) key when finished inserting!


Deleting text

x - delete character at cursor


dh - delete character before cursor
nx - delete n characters at cursor
dw - delete next word
db - delete previous word
dnw - delete n words from cursor
dnb - delete n words before cursor
d0 - delete to beginning of line
d$ - delete to end of line
D - delete to end of line
dd - delete current line
d( - delete to beginning of sentence
d) - delete to end of sentence
d{ - delete to beginning of paragraph
d} - delete to end of paragraph
ndd - delete n lines (start at current line)

Changing text

cw - replace word with text *


cc - replace line with text *
c0 - change to beginning of line *
c$ - change to end of line *
C - change to end of line *
c( - change to beginning of sentence *
c) - change to end of sentence *
c{ - change to beginning of paragraph *
c} - change to end of paragraph *
r - overtype only 1 character
R - overtype text until ESC is hit *
J - join two lines

* Note: hit ESC (escape) key when finished changing!


Copying lines

yy - "yank": copy 1 line into buffer


nyy - "yank": copy n lines into buffer
p - put contents of buffer after current
line
P - put contents of buffer before current
line

Moving lines (cutting and pasting)

ndd - delete n lines (placed in buffer)


p - put contents of buffer after current
line
P - put contents of buffer before current
line

Searching / Substituting

/str - search forward for str


?str - search backward for str
n - find next occurrence of current string
N - repeat previous search in reverse
direction

The substitution command requires a line range


specification. If it is omitted, the default
is the current line only. The examples below
show how to specify line ranges.

:s/old/new - substitute new for first occurrence


of old in current line
:s/old/new/g - substitute new for all occurrences
of old in current line
:1,10s/old/new - substitute new for first occurrence
of old in lines 1 - 10
:.,$s/old/new - substitute new for first occurrence
of old in remainder of file
:.,+5s/old/new - substitute new for first occurrence
of old in current line and next 5 lines
:.,-5s/old/new - substitute new for first occurrence
of old in current line and previous
5 lines
:%s/old/new/g - substitute new for all occurrences
of old in the entire file
:%s/old/new/gc - interactively substitute new for all
occurrences of old - will prompt for
y/n response for each substitution.

Miscellaneous commands

u - undo the last command (including undo)


. - repeat last command
xp - swap two adjacent characters
m[a-z] - set a marker (a - z)
'[a-z] - go to a previously set marker (a - z)
:!command - execute specified LINUX command
:r filename - read/insert contents of filename after
current line.
:1,100!fmt - reformat the first 100 lines
:!fmt - reformat the entire file

--------------------------------------------------------------------------------

vi Options

You can change the way vi operates by changing the value of certain options which control
specific parts of the vi environment.

To set an option during a vi session, use one of the commands below as required by the option:

:set option_name
:set option_name=value

Some examples of the more common options are described below.

:set all - shows all vi options in effect

:set ai - set autoindent - automatically indents


each line of text

:set noai - turn autoindent off

:set nu - set line numbering on


:set nonu - turn line numbering off

:set scroll=n - sets number of lines to be scrolled


to n. Used by screen scroll commands.

:set sw=n - set shiftwidth to n. Used by autoindent


option.

:set wm=n - set wrapmargin to n. Specifies number


of spaces to leave on right edge of the
screen before wrapping words to next
line.

:set showmode - reminds you when you are inserting


text.

:set ic - ignore case of characters when


performing a search.

Options can be set permanently by putting them in a file called .exrc in your home directory. A
sample .exrc file appears below. Note that you do not need the colon (:) as part of the option
specification when you put the commands in a .exrc file. Also note that you can put them all on
one line.

set nu ai wm=5 showmode ic


User Account Management:

Following are the basic user account management commands

 useradd
To create a new user in Linux. Different options can be used to set the user ID, home directory,
etc.

 userdel
This command is used to delete a user. Please note that this command alone will not delete the user's
home directory. You will have to use the -r option to also delete the user's home directory.

 groupadd
Creates a new group

 groupdel
Removes an existing group

 usermod
Modify user attributes such as user home directory, user group, user ID etc.

User Files
 /etc/passwd = This file has all users’ attributes
 /etc/shadow = This file contains encrypted user password and password policy
 /etc/group = All group and user group information
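
For example, usermod and userdel might be used as follows (a sketch; the user name ‘solider’ and group name ‘developers’ are just placeholders):

# usermod -aG developers solider (append the user to the supplementary group developers)
# usermod -s /sbin/nologin solider (change the user's login shell)
# userdel -r solider (delete the user along with the home directory and mail spool)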
Creating User Accounts in Linux:

When we run the ‘useradd‘ command in a Linux terminal, it performs the following major things:

It edits the /etc/passwd, /etc/shadow, /etc/group and /etc/gshadow files for the newly created user account.
It creates and populates a home directory for the new user.
It sets permissions and ownership on the home directory.

Basic syntax of command is:

useradd [options] username

In this section we will show you the 15 most used useradd commands with practical examples in
Linux. We have divided the section into two parts, from basic to advanced usage of the command.

Part I: Basic usage with 10 examples


Part II: Advance usage with 5 examples

Part I – 10 Basic Usage of useradd Commands

1. How to Add a New User in Linux

To add/create a new user, use the command ‘useradd‘ (or ‘adduser‘) followed by the ‘username’. The
‘username’ is the user login name that is used by the user to log in to the system.

Only one user can be added at a time, and that username must be unique (different from any other username
that already exists on the system).

For example, to add a new user called ‘solider‘, use the following command.

[root@localhost ~]# useradd solider

When we add a new user in Linux with ‘useradd‘ command it gets created in locked state and to unlock
that user account, we need to set a password for that account with ‘passwd‘ command.

[root@localhost ~]# passwd solider


Changing password for user solider.
New LINUX password:
Retype new LINUX password:
passwd: all authentication tokens updated successfully.
Once a new user is created, its entry is automatically added to the ‘/etc/passwd‘ file. The file is used to store
user information, and the entry looks like this:

solider:x:504:504:solider:/home/solider:/bin/bash

The above entry contains a set of seven colon-separated fields, each field having its own meaning. Let’s see
what these fields are:

Username: User login name used to log in to the system. It should be between 1 and 32 characters long.
Password: User password (or x character) stored in /etc/shadow file in encrypted format.
User ID (UID): Every user must have a User ID (UID) User Identification Number. By default UID 0 is
reserved for root user and UID’s ranging from 1-99 are reserved for other predefined accounts. Further
UID’s ranging from 100-999 are reserved for system accounts and groups.
Group ID (GID): The primary Group ID (GID) Group Identification Number stored in /etc/group file.
User Info: This field is optional and allow you to define extra information about the user. For example,
user full name. This field is filled by ‘finger’ command.
Home Directory: The absolute location of user’s home directory.
Shell: The absolute location of a user’s shell i.e. /bin/bash.

2. Create a User with Different Home Directory

By default ‘useradd‘ command creates a user’s home directory under /home directory with username.
Thus, for example, we’ve seen above the default home directory for the user ‘solider‘ is ‘/home/solider‘.

However, this action can be changed by using ‘-d‘ option along with the location of new home directory
(i.e. /home/newusers). For example, the following command will create a user ‘solider‘ with a home
directory ‘/home/newusers‘.

[root@localhost ~]# useradd -d /home/newusers solider

You can see the user home directory and other user related information like user id, group id, shell and
comments.

[root@localhost ~]# cat /etc/passwd | grep solider


solider:x:505:505::/home/newusers:/bin/bash

3. Create a User with Specific User ID

In Linux, every user has its own UID (Unique Identification Number). By default, whenever we create a
new user accounts in Linux, it assigns userid 500, 501, 502 and so on…
But, we can create user’s with custom userid with ‘-u‘ option. For example, the following command will
create a user ‘navin‘ with custom userid ‘999‘.

[root@localhost ~]# useradd -u 999 navin

Now, let’s verify that the user created with a defined userid (999) using following command.

[root@localhost ~]# cat /etc/passwd | grep navin


navin:x:999:999::/home/navin:/bin/bash

NOTE: Make sure the value of a user ID must be unique from any other already created users on the
system.
4. Create a User with Specific Group ID

Similarly, every user has its own GID (Group Identification Number). We can create users with specific
group ID’s as well with -g option.

Here in this example, we will add a user ‘tarunika‘ with a specific UID and GID simultaneously with the
help of ‘-u‘ and ‘-g‘ options.

[root@localhost ~]# useradd -u 1000 -g 500 tarunika

Now, see the assigned user id and group id in ‘/etc/passwd‘ file.

[root@localhost ~]# cat /etc/passwd | grep tarunika


tarunika:x:1000:500::/home/tarunika:/bin/bash

5. Add a User to Multiple Groups

The ‘-G‘ option is used to add a user to additional groups. Each group name is separated by a comma,
with no intervening spaces.

Here in this example, we are adding a user ‘solider‘ into multiple groups like admins, webadmin and
developer.

[root@localhost ~]# useradd -G admins,webadmin,developers solider

Next, verify that the multiple groups assigned to the user with id command.

[root@localhost ~]# id solider


uid=1001(solider) gid=1001(solider)
groups=1001(solider),500(admins),501(webadmin),502(developers)
context=root:system_r:unconfined_t:SystemLow-SystemHigh
6. Add a User without Home Directory

In some situations we may not want to assign home directories to users, for security reasons. In such a
situation, when a user logs in to a system that has just restarted, their home directory will be the root
directory; and when such a user uses the su command, their login directory will be the previous user's home directory.

To create user’s without their home directories, ‘-M‘ is used. For example, the following command will
create a user ‘shilpi‘ without a home directory.

[root@localhost ~]# useradd -M shilpi

Now, let’s verify that the user is created without home directory, using ls command.

[root@localhost ~]# ls -l /home/shilpi


ls: cannot access /home/shilpi: No such file or directory

7. Create a User with Account Expiry Date

By default, when we add users with the ‘useradd‘ command, the account never expires, i.e. the expiry
date is left empty (meaning never expire).

However, we can set an expiry date using the ‘-e‘ option, which takes a date in YYYY-MM-DD format. This is
helpful for creating temporary accounts for a specific period of time.

Here in this example, we create a user ‘aparna‘ with an account expiry date of 27th March 2014 in YYYY-
MM-DD format.

[root@localhost ~]# useradd -e 2014-03-27 aparna

Next, verify the account and password aging information with the ‘chage‘ command for user ‘aparna‘ after
setting the account expiry date.

[root@localhost ~]# chage -l aparna


Last password change                                : Mar 28, 2014
Password expires                                    : never
Password inactive                                   : never
Account expires                                     : Mar 27, 2014
Minimum number of days between password change      : 0
Maximum number of days between password change      : 99999
Number of days of warning before password expires   : 7

8. Create a User with Password Expiry Date


The ‘-f‘ argument is used to define the number of days after a password expires until the account is
disabled. A value of 0 disables the account as soon as the password has expired. By default, this value is
-1, which means the account is never disabled for password inactivity.

Here in this example, we will set an account expiry date and a 45-day password inactivity period on a user
‘solider’ using the ‘-e‘ and ‘-f‘ options.

[root@localhost ~]# useradd -e 2014-04-27 -f 45 solider

9. Add a User with Custom Comments

The ‘-c‘ option allows you to add custom comments, such as the user’s full name, phone number, etc., to the
/etc/passwd file. The comment is a single field; if it contains spaces, enclose it in quotes.

For example, the following command will add a user ‘mansi‘ and insert that user’s full name,
Manis Khurana, into the comment field.

[root@localhost ~]# useradd -c "Manis Khurana" mansi

You can see the comment in the ‘/etc/passwd‘ file in the comment field.

[root@localhost ~]# tail -1 /etc/passwd


mansi:x:1006:1008:Manis Khurana:/home/mansi:/bin/sh

10. Change User Login Shell:

Sometimes we add users which have nothing to do with a login shell, or we need to assign
different shells to our users. We can assign a different login shell to each user with the ‘-s‘ option.

Here in this example, we will add a user ‘solider‘ without a login shell, i.e. with the ‘/sbin/nologin‘ shell.

[root@localhost ~]# useradd -s /sbin/nologin solider

You can check the shell assigned to the user in the ‘/etc/passwd‘ file.

[root@localhost ~]# tail -1 /etc/passwd


solider:x:1002:1002::/home/solider:/sbin/nologin

Part II – 5 Advance Usage of useradd Commands


11. Add a User with Specific Home Directory, Default Shell and Custom Comment

The following command will create a user ‘ravi‘ with the home directory ‘/var/www/ravi‘, the default shell
/bin/bash, and extra information about the user.

[root@localhost ~]# useradd -m -d /var/www/ravi -s /bin/bash -c "Solider Owner" -U ravi

In the above command the ‘-m -d‘ options create the user with the specified home directory and the ‘-s‘
option sets the user’s default shell, i.e. /bin/bash. The ‘-c‘ option adds extra information about the user and
the ‘-U‘ argument creates a group with the same name as the user.
12. Add a User with Home Directory, Custom Shell, Custom Comment and UID/GID

The command is very similar to the one above, but here we define the shell as ‘/bin/zsh‘ and a custom UID
and GID for the user ‘tarunika‘. Here ‘-u‘ defines the new user’s UID (i.e. 1000) and ‘-g‘ defines the GID
(i.e. 1000).

[root@localhost ~]# useradd -m -d /var/www/tarunika -s /bin/zsh -c "Solider Technical Writer" -u 1000 -g 1000 tarunika

13. Add a User with Home Directory, No Shell, Custom Comment and User ID

The following command is very similar to the above two commands; the only difference here is that we
disable the login shell for a user called ‘avishek‘ with a custom user ID (i.e. 1019).

Here the ‘-s‘ option, which normally sets the default shell /bin/bash, is in this case set to ‘/usr/sbin/nologin‘.
That means the user ‘avishek‘ will not be able to log into the system.

[root@localhost ~]# useradd -m -d /var/www/avishek -s /usr/sbin/nologin -c "Solider Sr. Technical Writer" -u 1019 avishek

14. Add a User with Home Directory, Shell, Custom Skeleton Directory/Comment and User ID

The only change in this command is that we used the ‘-k‘ option to set a custom skeleton directory, i.e.
/etc/custom.skell, instead of the default one, /etc/skel. We also used the ‘-s‘ option to define a different
shell, i.e. /bin/tcsh, for the user ‘navin‘.

[root@localhost ~]# useradd -m -d /var/www/navin -k /etc/custom.skell -s /bin/tcsh -c "No Active Member of Solider" -u 1027 navin

15. Add a User without Home Directory, No Shell, No Group and Custom Comment

The following command is quite different from the other commands explained above. Here we used the ‘-M‘
option to create the user without a home directory, and the ‘-N‘ argument tells the system not to create a
group with the same name as the user. The ‘-r‘ argument creates a system account.

[root@localhost ~]# useradd -M -N -r -s /bin/false -c "Disabled Solider Member" clayton

For more information about useradd, run the ‘useradd‘ command without arguments on the terminal to see
the available options.
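
As a quick check of what ‘useradd‘ will do when no options are given, you can print the current defaults
with the ‘-D‘ option (the values shown below are only an illustration; they vary by distribution and are
stored in /etc/default/useradd):

[root@localhost ~]# useradd -D

GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes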
Switch Users and Sudo Access:

Switch Users:

Following are the commands that can be used to switch from one user to another:

 su - username
su - invokes a login shell after switching the user. A login shell resets most environment variables,
providing a clean base.

 su username
just switches the user, providing a normal shell with an environment nearly the same as with the old user

Sudo Access:
 sudo command-name
The above command “sudo command-name” will run the command with root privileges, as long as the
invoking user is authorized to run it in the /etc/sudoers file.
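
As a quick illustration (assuming sudo access has already been configured for your account, as described
below), you can list what you are allowed to run and then execute a privileged command; the log file path is
just an example:

$ sudo -l                          # list the commands the current user may run via sudo
$ sudo tail /var/log/messages      # run a root-only command; sudo asks for YOUR password, not root's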

Configuring sudo Access

1. Log in to the system as the root user.

2. Create a normal user account using the useradd command. Replace USERNAME with
the user name that you wish to create.

# useradd USERNAME

3. Set a password for the new user using the passwd command.

# passwd USERNAME
Changing password for user USERNAME.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

4. Run visudo to edit the /etc/sudoers file. This file defines the policies applied by
the sudo command.

# visudo

5. Find the lines in the file that grant sudo access to users in the group wheel when enabled.

## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL

6. Remove the comment character (#) at the start of the second line. This enables the
configuration option.

7. Save your changes and exit the editor.

8. Add the user you created to the wheel group using the usermod command.

# usermod -aG wheel USERNAME

9. Test that the updated configuration allows the user you created to run commands using
sudo.

   1. Use the su command to switch to the new user account that you created.

      # su USERNAME -

   2. Use the groups command to verify that the user is in the wheel group.

      $ groups
      USERNAME wheel

   3. Use the sudo command to run the whoami command. As this is the first time you
      have run a command using sudo from this user account, the banner message will
      be displayed. You will also be prompted to enter the password for the user
      account.

      $ sudo whoami
      We trust you have received the usual lecture from the local System
      Administrator. It usually boils down to these three things:

          #1) Respect the privacy of others.
          #2) Think before you type.
          #3) With great power comes great responsibility.

      [sudo] password for USERNAME:
      root

The last line of the output is the user name returned by the whoami command. If
sudo is configured correctly this value will be root.

You have successfully configured a user with sudo access. You can now log in to this user
account and use sudo to run commands as if you were logged in to the account of the root user.
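
As an alternative to the wheel group approach above, sudo access can also be granted to a single account
with a one-line entry in /etc/sudoers (always edit it with visudo). A minimal sketch, where USERNAME is a
placeholder for the real account name:

## Allow USERNAME to run any command as any user
USERNAME    ALL=(ALL)    ALL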
Linux Editors

 What is a text editor?


o A text editor is a program which enables you to create and manipulate character
data (text) in a computer file.
o A text editor is not a word processor although some text editors do include word
processing facilities.
o Text editors often require "memorizing" commands in order to perform editing
tasks. The more you use them, the easier it becomes. There is a "learning curve"
in most cases though.
 There are several standard text editors available on most LINUX systems:
o ed - standard line editor
o ex - extended line editor
o vi - a visual editor; full screen; uses ed/ex line-mode commands for global file
editing
o sed - stream editor for batch processing of files
 In addition to these, other local "favorites" may be available:
o emacs - a full screen editor and much more
o pico - an easy "beginner's" editor
o lots of others

The Standard Display Editor - vi

 vi supplies commands for:


o inserting and deleting text
o replacing text
o moving around the file
o finding and substituting strings
o cutting and pasting text
o reading and writing to other files
 vi uses a "buffer"
o While using vi to edit an existing file, you are actually working on a copy of the
file that is held in a temporary buffer in your computer's memory.
o If you invoked vi with a new filename, (or no file name) the contents of the file
only exist in this buffer.
o Saving a file writes the contents of this buffer to a disk file, replacing its contents.
You can write the buffer to a new file or to some other file.
o You can also decide not to write the contents of the buffer, and leave your
original file unchanged.
 vi operates in two different "modes":
o Command mode
 vi starts up in this mode
 Whatever you type is interpreted as a command - not text to be inserted
into the file.
 The mode you need to be in if you want to "move around" the file.
o Insert mode
 This is the mode you use to type (insert) text.
 There are several commands that you can use to enter this mode.
 Once in this mode, whatever you type is interpreted as text to be included
in the file. You cannot "move around" the file in this mode.
 Must press the ESC (escape) key to exit this mode and return to command
mode.
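
The following is a small, commonly used subset of vi commands; the ones starting with ':' are typed in
command mode and completed by pressing Enter:

i        insert text before the cursor (enters insert mode)
a        append text after the cursor (enters insert mode)
o        open a new line below the cursor (enters insert mode)
ESC      leave insert mode and return to command mode
x        delete the character under the cursor
dd       delete the current line
yy       copy (yank) the current line
p        paste the last deleted or yanked text
/text    search forward for "text"
:w       write (save) the buffer to the file
:q       quit (fails if there are unsaved changes)
:wq      write the buffer and quit
:q!      quit without saving changes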
Monitor User Commands:
Following are the basic user monitor commands

 who
 last
 w
 id

who
As a Linux user, sometimes it is required to know some basic information like :

 Time of last system boot


 List of users logged-in
 Current run level etc

Though this type of information can be obtained from various files in a Linux system, there is a command
line utility, 'who', that does exactly this for you. Here we will discuss the capabilities and features
provided by the 'who' command.

The basic syntax of the who command is :


who [OPTION]... [ FILE | ARG1 ARG2 ]

Examples of 'who' command

1. Get the information on currently logged in users

This is done by simply running the 'who' command (without any options). Consider the
following example:
$ who
iafzal tty7 2012-08-07 05:33 (:0)
iafzal pts/0 2012-08-07 06:47 (:0.0)
iafzal pts/1 2012-08-07 07:58 (:0.0)

2. Get the time of last system boot

This is done using the -b option. Consider the following example:


$ who -b
system boot 2012-08-07 05:32
So we see that the above output gives the exact date and time of last system boot.
3. Get information on system login processes

This is done using the -l option. Consider the following example:


$ who -l
LOGIN tty4 2012-08-07 05:32 1309 id=4
LOGIN tty5 2012-08-07 05:32 1313 id=5
LOGIN tty2 2012-08-07 05:32 1322 id=2
LOGIN tty3 2012-08-07 05:32 1324 id=3
LOGIN tty6 2012-08-07 05:32 1327 id=6
LOGIN tty1 2012-08-07 05:32 1492 id=1
So we see that information related to system login processes was displayed in the output.

4. Get the hostname and user associated with stdin

This is done using the -m option. Consider the following example:


$ who -m
iafzal pts/1 2012-08-07 07:58 (:0.0)
So we see that the relevant information was produced in the output.

5. Get the current run level

This is done using the -r option. Consider the following example:


$ who -r
run-level 2 2012-08-07 05:32
So we see that the information related to current run level (which is 2) was produced in the
output.

6. Get the list of users logged in

This is done using the -u option. Consider the following example:


$ who -u
iafzal tty7 2012-08-07 05:33 old 1619 (:0)
iafzal pts/0 2012-08-07 06:47 00:31 2336 (:0.0)
iafzal pts/1 2012-08-07 07:58 . 2336 (:0.0)
So we see that a list of logged-in users was produced in the output.

7. Get number of users logged-in and their user names

This is done using the -q option. Consider the following example:


$ who -q
iafzal iafzal iafzal
# users=3
So we see that information related to number of logged-in users and their user names was
produced in the output.
8. Get all the information

This is done using the -a option. Consider the following example:


$ who -a
system boot 2012-08-07 05:32
run-level 2 2012-08-07 05:32
LOGIN tty4 2012-08-07 05:32 1309 id=4
LOGIN tty5 2012-08-07 05:32 1313 id=5
LOGIN tty2 2012-08-07 05:32 1322 id=2
LOGIN tty3 2012-08-07 05:32 1324 id=3
LOGIN tty6 2012-08-07 05:32 1327 id=6
LOGIN tty1 2012-08-07 05:32 1492 id=1
iafzal + tty7 2012-08-07 05:33 old 1619 (:0)
iafzal + pts/0 2012-08-07 06:47 . 2336 (:0.0)
iafzal + pts/1 2012-08-07 07:58 . 2336 (:0.0)
So we see that all the information that 'who' can print is produced in output.

last command:

The last command is used to find out when a particular user last logged in to a Linux or Unix server.

Syntax

The basic syntax is:

last
last [userNameHere]
last [tty]
last [options] [userNameHere]

If no options are provided, the last command displays a list of all users logged in (and out). You can
filter the results by supplying user names or a terminal to show only those entries matching the
username/tty.

last command examples

To find out who has recently logged in and out on your server, type:
$ last
Sample outputs:

root pts/1 10.1.6.120 Tue Jan 28 05:59 still logged in


root pts/0 10.1.6.120 Tue Jan 28 04:08 still logged in
root pts/0 10.1.6.120 Sat Jan 25 06:33 - 08:55 (02:22)
root pts/1 10.1.6.120 Thu Jan 23 14:47 - 14:51 (00:03)
root pts/0 10.1.6.120 Thu Jan 23 13:02 - 14:51 (01:48)
root pts/0 10.1.6.120 Tue Jan 7 12:02 - 12:38 (00:35)

wtmp begins Tue Jan 7 12:02:54 2014

List all users last logged in/out time

The last command searches back through the /var/log/wtmp file, and the output may go back
several months. Just use the less command or more command as follows to display the output one
screen at a time:
$ last | more
last | less

List a particular user last logged in

To find out when user iafzal last logged in, type:


$ last iafzal
$ last iafzal | less
$ last iafzal | grep 'Thu Jan 23'

Hide hostnames (Linux only)

To hide the display of the hostname field pass -R option:


$ last -R
last -R iafzal

Display complete login & logout times

By default, the year is not displayed by the last command. You can force the last command to display full
login and logout times and dates by passing the -F option:
$ last -F
Display full user/domain names
$ last -w

Display last reboot time

The pseudo-user reboot logs in each time the system is rebooted. Thus the following command will show a
log of all reboots since the log file was created:
$ last reboot
$ last -x reboot

Display last shutdown time

Find out the system shutdown entries and run level changes:
$ last -x
$ last -x shutdown

Find out who was logged in at a particular time

The syntax is as follows to see the state of logins as of the specified time:
$ last -t YYYYMMDDHHMMSS
$ last -t YYYYMMDDHHMMSS userNameHere

w command:
The w command shows who is logged on and what they are doing.

Options:
-h, --no-header do not print header
-u, --no-current ignore current process username
-s, --short short format
-f, --from show remote hostname field
-o, --old-style old style output
-i, --ip-addr display IP address instead of hostname (if possible)

--help display this help and exit


-V, --version output version information and exit
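
A typical run of w prints a header line (the same information as uptime) followed by one line per logged-in
user; the output below is only illustrative:

$ w
 08:01:02 up 1 day,  2:28,  3 users,  load average: 0.02, 0.12, 0.07
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
iafzal   tty7     :0               05:33    1day   1:05   0.10s gnome-session
iafzal   pts/0    :0.0             06:47    2:00   0.20s  0.05s bash
iafzal   pts/1    :0.0             07:58    0.00s  0.10s  0.01s w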

id command:
Print user and group information for the specified USER,
or (when USER omitted) for the current user.
-a ignore, for compatibility with other versions
-Z, --context print only the security context of the current user
-g, --group print only the effective group ID
-G, --groups print all group IDs
-n, --name print a name instead of a number, for -ugG
-r, --real print the real ID instead of the effective ID, with -ugG
-u, --user print only the effective user ID
-z, --zero delimit entries with NUL characters, not whitespace;
not permitted in default format
--help display this help and exit
--version output version information and exit
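
For example, to print the IDs for the current user or for a specific account (the values shown are only
illustrative):

$ id
uid=500(iafzal) gid=500(iafzal) groups=500(iafzal),10(wheel)
$ id -un              # effective user name only
iafzal
$ id -Gn iafzal       # all group names for user iafzal
iafzal wheel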
System Utility Commands:

 date
 uptime
 hostname
 uname
 which
 cal
 bc

date
Print or set the system date and time

Usage: date [OPTION]... [+FORMAT]


or: date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]
Display the current time in the given FORMAT, or set the system date.

Mandatory arguments to long options are mandatory for short options too.
-d, --date=STRING display time described by STRING, not 'now'
-f, --file=DATEFILE like --date once for each line of DATEFILE
-I[TIMESPEC], --iso-8601[=TIMESPEC] output date/time in ISO 8601 format.
TIMESPEC='date' for date only (the default),
'hours', 'minutes', 'seconds', or 'ns' for date
and time to the indicated precision.
-r, --reference=FILE display the last modification time of FILE
-R, --rfc-2822 output date and time in RFC 2822 format.
Example: Mon, 07 Aug 2006 12:34:56 -0600
--rfc-3339=TIMESPEC output date and time in RFC 3339 format.
TIMESPEC='date', 'seconds', or 'ns' for
date and time to the indicated precision.
Date and time components are separated by
a single space: 2006-08-07 12:34:56-06:00
-s, --set=STRING set time described by STRING
-u, --utc, --universal print or set Coordinated Universal Time (UTC)
--help display this help and exit
--version output version information and exit

uptime:
Tell how long the system has been running
uptime gives a one line display of the following information: the current time, how long the system has
been running, how many users are currently logged on, and the system load averages for the past 1, 5,
and 15 minutes.

Options:
-p, --pretty show uptime in pretty format
-h, --help display this help and exit
-s, --since system up since
-V, --version output version information and exit
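
Sample runs (the values are only illustrative, and -p/-s require a version of uptime that supports the
options listed above):

$ uptime
 11:37:19 up 1 day,  1:25,  3 users,  load average: 0.02, 0.12, 0.07
$ uptime -p
up 1 day, 1 hour, 25 minutes
$ uptime -s
2012-08-07 05:32:00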

hostname
Show or set the system's host name

Program options:
-a, --alias alias names
-A, --all-fqdns all long host names (FQDNs)
-b, --boot set default hostname if none available
-d, --domain DNS domain name
-f, --fqdn, --long long host name (FQDN)
-F, --file read host name or NIS domain name from given file
-i, --ip-address addresses for the host name
-I, --all-ip-addresses all addresses for the host
-s, --short short host name
-y, --yp, --nis NIS/YP domain name

Description:
This command can get or set the host name or the NIS domain name. You can
also get the DNS domain or the FQDN (fully qualified domain name).
Unless you are using bind or NIS for host lookups you can change the
FQDN (Fully Qualified Domain Name) and the DNS domain name (which is
part of the FQDN) in the /etc/hosts file

uname
This command will give you system information. It is one of the important commands that should be
used every time you log in to a Linux/Unix machine.

Usage: uname [OPTION]...


Print certain system information. With no OPTION, same as -s.

-a, --all print all information, in the following order,


except omit -p and -i if unknown:
-s, --kernel-name print the kernel name
-n, --nodename print the network node hostname
-r, --kernel-release print the kernel release
-v, --kernel-version print the kernel version
-m, --machine print the machine hardware name
-p, --processor print the processor type or "unknown"
-i, --hardware-platform print the hardware platform or "unknown"
-o, --operating-system print the operating system
--help display this help and exit
--version output version information and exit
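
For example (the kernel release and hostname below are only illustrative and will differ on your machine):

$ uname
Linux
$ uname -r
3.10.0-123.el7.x86_64
$ uname -a
Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux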

which
Shows the full path of (shell) commands

Usage: /usr/bin/which [options] [--] COMMAND [...]


Write the full path of COMMAND(s) to standard output.

--version, -[vV] Print version and exit successfully.


--help, Print this help and exit successfully.
--skip-dot Skip directories in PATH that start with a dot.
--skip-tilde Skip directories in PATH that start with a tilde.
--show-dot Don't expand a dot to current directory in output.
--show-tilde Output a tilde for HOME directory for non-root.
--tty-only Stop processing options on the right if not on tty.
--all, -a Print all matches in PATH, not just the first
--read-alias, -i Read list of aliases from stdin.
--skip-alias Ignore option --read-alias; don't read stdin.
--read-functions Read shell functions from stdin.
--skip-functions Ignore option --read-functions; don't read stdin.

cal and bc
The cal command displays a calendar and bc is a command line calculator.
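
For example (output shown is illustrative):

$ cal 6 1996
     June 1996
Su Mo Tu We Th Fr Sa
                   1
 2  3  4  5  6  7  8
 9 10 11 12 13 14 15
16 17 18 19 20 21 22
23 24 25 26 27 28 29
30
$ echo "5 * 4" | bc
20
$ echo "scale=2; 7/3" | bc
2.33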
Processes
 Whenever you enter a command at the shell prompt, it invokes a program. While this
program is running it is called a process. Your login shell is also a process, created for
you upon logging in and existing until you logout.
 LINUX is a multi-tasking operating system. Any user can have multiple processes
running simultaneously, including multiple login sessions. As you do your work within
the login shell, each command creates at least one new process while it executes.
 Process id: every process in a LINUX system has a unique PID - process identifier.
 ps - displays information about processes. Note that the ps command differs between
different LINUX systems - see the local ps man page for details.
To see your current shell's processes:

% ps
PID TTY TIME CMD
26450 pts/9 0:00 ps
66801 pts/9 0:00 -csh

To see a detailed list of all of your processes on a machine (current shell and all other
shells):

% ps uc
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
jsmith 26451 0.0 0.0 120 232 pts/9 R 21:01:14 0:00 ps
jsmith 43520 0.0 1.0 300 660 pts/76 S 19:18:31 0:00 elm
jsmith 66801 0.0 1.0 348 640 pts/9 S 20:49:20 0:00 csh
jsmith 112453 0.0 0.0 340 432 pts/76 S Mar 03 0:00 csh

To see a detailed list of every process on a machine:

% ps ug
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
root 0 0.0 0.0 8 8 - S Feb 08 32:57 swapper
root 1 0.1 0.0 252 188 - S Feb 08 39:16 /etc/init
root 514 72.6 0.0 12 8 - R Feb 08 28984:05 kproc
root 771 0.2 0.0 16 16 - S Feb 08 65:14 kproc
root 1028 0.0 0.0 16 16 - S Feb 08 0:00 kproc
{ lines deleted }
root 60010 0.0 0.0 1296 536 - S Mar 07 0:00 -ncd19:0
kdr 60647 0.0 0.0 288 392 pts/87 S Mar 06 0:00 -ksh
manfield 60968 0.0 0.0 268 200 - S 10:12:52 0:00 mwm
kelly 61334 0.0 0.0 424 640 - S 08:18:10 0:00 twm
sjw 61925 0.0 0.0 552 376 - S Mar 06 0:00 rlogin kanaha
mkm 62357 0.0 0.0 460 240 - S Feb 08 0:00 xterm
ishley 62637 0.0 0.0 324 152 pts/106 S Mar 06 0:00 xedit march2
tusciora 62998 0.0 0.0 340 448 - S Mar 06 0:05 xterm -e
dilfeath 63564 0.0 0.0 200 268 - S 07:32:45 0:00 xclock
tusciora 63878 0.0 0.0 548 412 - S Mar 06 0:41 twm

 kill - use the kill command to send a signal to a process. In most cases, this will be a kill
signal, hence the command name. However, other types of signals are usually
supported. Note that you can only kill processes which you own. The command syntax
is:
kill [-signal] process_identifier(PID)

Examples:

kill 63878 - kills process 63878


kill -9 1225 - kills (kills!) process 1225. Use if
simple kill doesn't work.
kill -STOP 2339 - stops process 2339
kill -CONT 2339 - continues stopped process 2339
kill -l - list the supported kill signals

You can also use CTRL-C to kill the currently running process.
 Suspend a process: Use CTRL-Z.
 Background a process: Normally, commands operate in the foreground - you can not do
additional work until the command completes. Backgrounding a command allows you
to continue working at the shell prompt.
To start a job in the background, use an ampersand (&) when you invoke the command:
myprog &

To put an already running job in the background, first suspend it with CRTL-Z and then
use the "bg" command:

myprog - execute a process


CTRL-Z - suspend the process
bg - put suspended process in background

 Foreground a process: To move a background job to the foreground, find its "job"
number and then use the "fg" command. In this example, the jobs command shows that
two processes are running in the background. The fg command is used to bring the
second job (%2) to the foreground.

jobs
[1] + Running xcalc
[2] Running find / -name core -print
fg %2

 Stop a job running in the background: Use the jobs command to find its job number, and
then use the stop command. You can then bring it to the foreground or restart execution
later.

jobs
[1] + Running xcalc
[2] Running find / -name core -print
stop %2
 Kill a job running in the background, use the jobs command to find its job number, and
then use the kill command. Note that you can also use the ps and kill commands to
accomplish the same task.

jobs
[1] + Running xcalc
[2] Running find / -name core -print
kill %2

 Some notes about background processes:


o If a background job tries to read from the terminal, it will automatically be
stopped by the shell. If this happens, you must put it in the foreground to supply
the input.
o The shell will warn you if you attempt to logout and jobs are still running in the
background. You can then use the jobs command to review the list of jobs and act
accordingly. Alternately, you can simply issue the logout command again and
you will be permitted to exit
Linux Programs

A program, or command, interacts with the kernel to provide the environment and perform the
functions called for by the user. A program can be: an executable shell file, known as a shell script; a
built-in shell command; or a source compiled, object code file.
The shell is a command line interpreter. The user interacts with the kernel through the shell. You can
write ASCII (text) scripts to be acted upon by a shell.

System programs are usually binary, having been compiled from C source code. These are located in
places like /bin, /usr/bin, /usr/local/bin, /usr/ucb, etc. They provide the functions that you normally
think of when you think of Linux. Some of these are sh, csh, date, who, more, and there are many
others.
crontab – Quick Reference
crontab is used to schedule task/jobs

Setting up cron jobs in Unix, Solaris & Linux


cron is a Unix, Solaris and Linux utility that allows tasks to be automatically run in the background
at regular intervals by the cron daemon.

cron meaning – There is no definitive explanation, but the most accepted answer, reportedly from
Ken Thompson (author of Unix cron), is that the name cron comes from chron, the Greek prefix for
‘time’.
What is cron? – Cron is a daemon which is started at system boot from the /etc/init.d
scripts. If needed, it can be stopped/started/restarted using its init script or with a command such as
service crond start on Linux systems.

This document covers the following aspects of Unix/Linux cron jobs to help you understand
and implement cron jobs successfully:

1. What is crontab?
2. What is a cron job or cron schedule?
3. Crontab Restrictions
4. Crontab Commands
5. Crontab file – syntax
6. Crontab Example
7. Crontab Environment
8. Disable Email
9. Generate log file for crontab activity
10. Crontab file location

1. What is crontab?

Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at
specified times. The file location varies by operating system; see Crontab file location at the end of
this document.

2.What is a cron job or cron schedule?

A cron job or cron schedule is a specific set of execution instructions specifying the day, time and
command to execute. A crontab can have multiple execution statements.

3. Crontab Restrictions
You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does
not exist, you can use
crontab if your name does not appear in the file /usr/lib/cron/cron.deny.
If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root
user can use crontab. The allow/deny files consist of one user name per line.

4. Crontab Commands

export EDITOR=vi    (to specify an editor to open the crontab file)

crontab -e    Edit your crontab file, or create one if it doesn’t already exist.


crontab -l    List your cron jobs, i.e. display the crontab file contents.
crontab -r    Remove your crontab file.
crontab -v    Display the last time you edited your crontab file. (This option is only available on
a few systems.)

5. Crontab file

Crontab syntax:
A crontab file has five fields for specifying the day, date and time, followed by the command to be
run at that interval.

* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)

A * in a value field above means all legal values for that column (the range shown in parentheses).
A value column can have a * or a list of elements separated by commas. An element is either a
number in the ranges shown above or two numbers in the range separated by a hyphen (meaning
an inclusive range).
Notes
A.) Repeat patterns like */2 for every 2 minutes or */10 for every 10 minutes are not supported by all
operating systems. If you try to use one and crontab complains, it is probably not supported.

B.) The specification of days can be made in two fields: day of month and day of week. If both are
specified in an entry, they are cumulative, meaning the command runs when either of the two matches
(see the examples below).
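
To make the notes above concrete, here are two illustrative crontab entries; the script paths are
hypothetical, and the */ step syntax only works on cron implementations that support it:

*/10 * * * * /home/someuser/bin/healthcheck.sh
(runs the hypothetical healthcheck.sh script every 10 minutes)

0 1 1 * 1 /home/someuser/bin/report.sh
(runs report.sh at 01:00 on the 1st of every month AND at 01:00 every Monday, since both day fields are honored)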

6. Crontab Examples

A line in crontab file like below removes the tmp files from /home/someuser/tmp each day at
6:30 PM.
30 18 * * * rm /home/someuser/tmp/*

Changing the parameter values as below will cause this command to run at different times:

min   hour   day/month   month    day/week   Execution time
30    0      1           1,6,12   *          -- 00:30 hrs on 1st of Jan, June & Dec.
0     20     *           10       1-5        -- 8:00 PM every weekday (Mon-Fri) only in Oct.
0     0      1,10,15     *        *          -- midnight on 1st, 10th & 15th of month
5,10  0      10          *        1          -- 00:05 and 00:10 every Monday & on the 10th of every month

Note : If you inadvertently enter the crontab command with no argument(s), do not attempt to
get out with Control-d. This removes all entries in your crontab file. Instead, exit with Control-c.

7. Crontab Environment

cron invokes the command from the user’s HOME directory with the shell, (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh

Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a
script called by the entry.

8. Disable Email

By default a cron job sends an email to the user account executing the cron job. If this is not needed,
put the following at the end of the cron job line.

>/dev/null 2>&1

9. Generate log file

To collect the cron execution log in a file:

30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log

10. Crontab file location


User crontab files are stored by login name in different locations in different Unix and Linux
flavors. These files are useful for backing up, viewing and restoring, but should be edited only
with the crontab command by the users.

 Mac OS X
/usr/lib/cron/tabs/
 BSD Unix
/var/cron/tabs/
 Solaris, HP-UX, Debian, Ubuntu
/var/spool/cron/crontabs/
 AIX, Red Hat Linux, CentOS, Fedora
/var/spool/cron/
System Resources Commands:

Command/Syntax      What it will do


date                report the current date and time
df                  report the summary of disk blocks and inodes free and in use
du                  report the amount of disk space in use
hostname/uname      display or set (super-user only) the name of the current machine
passwd              set or change your password
whereis             report the binary, source, and man page locations for the command
which               report the path to the command or the shell alias in use
who or w            report who is logged in and what processes are running
cal                 display a calendar
bc                  calculator

df - summarize disk block and file usage


df is used to report the number of disk blocks and inodes used and free for each file system. The
output format and valid options are very specific to the OS and program version in use.

Syntax
df [options] [resource]
Common Options
-l local file systems only (SVR4)
-k report in kilobytes (SVR4)
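
A sample run on a Linux system (device names and sizes below are only illustrative):

> df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sda1       51475068 8923456  39932212  19% /
tmpfs             958500       0    958500   0% /dev/shm
/dev/sda2      103081248 2345678  95476572   3% /home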

du - report disk space in use


du reports the amount of disk space in use for the files or directories you specify.

Syntax
du [options] [directory or file]
Common Options
-a display disk usage for each file, not just subdirectories
-s display a summary total only
-k report in kilobytes (SVR4)
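
For example, to get a one-line total for a directory, or a per-file breakdown (the file names and sizes are
hypothetical):

> du -sk /home/frank
24096   /home/frank
> du -ak /home/frank/docs
4       /home/frank/docs/notes.txt
16      /home/frank/docs/report.ps
24      /home/frank/docs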

who - list current users


who reports who is logged in at the present time.

Syntax
who [am i]
Examples
> who
wmtell ttyp1 Apr 21 20:15 (apple.acs.ohio-s)
fbwalk ttyp2 Apr 21 23:21 (worf.acs.ohio-st)
stwang ttyp3 Apr 21 23:22 (127.99.25.8)

whereis - report program locations


whereis reports the filenames of source, binary, and manual page files associated with command(s).

Syntax
whereis [options] command(s)
Common Options
-b report binary files only
-m report manual sections only
-s report source files only
Examples
> whereis Mail
Mail: /usr/ucb/Mail /usr/lib/Mail.help /usr/lib/Mail.rc /usr/man/man1/Mail.1
> whereis -b Mail
Mail: /usr/ucb/Mail /usr/lib/Mail.help /usr/lib/Mail.rc
> whereis -m Mail
Mail: /usr/man/man1/Mail.1

which - report the command found


which will report the name of the file that will be executed when the command is invoked. This will be
the full path name or the alias that’s found first in your path.

Syntax
which command(s)
example--
> which Mail
/usr/ucb/Mail

hostname/uname –n = name of machine


hostname (uname -n on SysV) reports the host name of the machine the user is logged into, e.g.:
> hostname
yourcomputername

uname has additional options to print information about system hardware type and software version.
date - current date and time
date displays the current date and time. A superuser can set the date and time.

Syntax
date [options] [+format]
Common Options
-u use Universal Time (or Greenwich Mean Time)
+format specify the output format
%a weekday abbreviation, Sun to Sat
%h month abbreviation, Jan to Dec
%j day of year, 001 to 366
%n <new-line>
%t <TAB>
%y last 2 digits of year, 00 to 99
%D MM/DD/YY date
%H hour, 00 to 23
%M minute, 00 to 59
%S second, 00 to 59
%T HH:MM:SS time
Examples
> date
Mon Jun 10 09:01:05 EDT 1996
> date -u
Mon Jun 10 13:01:33 GMT 1996
> date +%a%t%D
Mon 06/10/96
> date '+%y:%j'
96:162
Terminal Control Keys

 Several key combinations on your keyboard usually have a special effect on the
terminal.
 These "control" (CTRL) keys are accomplished by holding the CTRL key while typing
the second key. For example, CTRL-c means to hold the CTRL key while you type the
letter "c".
 The most common control keys are listed below:

CTRL-u - erase everything you've typed on the command line

CTRL-c - stop/kill a command

CTRL-h - backspace (usually)

CTRL-z - suspend a command

CTRL-s - stop the screen from scrolling

CTRL-q - continue scrolling

CTRL-d - exit from an interactive program (signals end of data)
top command

Knowing what is happening in “real time” on your systems is the basis for using and
optimizing your OS. The top command can help us here: it is a very useful system monitor that is
really easy to use, and it also allows us to understand why the OS is struggling and which
processes use the most resources. The command to be run on the terminal is:

$ top

And we’ll get a screen similar to the following:

Let’s now go through every row of this output to explain all the information found on the
screen.

1st Row – top

This first line indicates in order:

 current time (11:37:19)


 uptime of the machine (up 1 day, 1:25)
 users sessions logged in (3 users)
 average load on the system (load average: 0.02, 0.12, 0.07) the 3 values refer to the last
minute, five minutes and 15 minutes.

2nd Row – task

The second row gives the following information:

 Processes in total (73 total)


 Processes running (2 running)
 Processes sleeping (71 sleeping)
 Processes stopped (0 stopped)
 Zombie processes waiting to be reaped by their parent process (0 zombie)

3rd Row – cpu
The third line indicates how the CPU is used. If you sum up all the percentages, the total will be
100% of the CPU. Let’s see what these values indicate, in order:

 Percentage of the CPU used by user processes (0.3%us)
 Percentage of the CPU used by system processes (0.0%sy)
 Percentage of the CPU used by processes with a modified nice priority (0.0%ni)
 Percentage of the CPU not used, i.e. idle (99.4%id)
 Percentage of the CPU waiting for I/O operations (0.0%wa)
 Percentage of the CPU serving hardware interrupts (0.3%hi — Hardware IRQ)
 Percentage of the CPU serving software interrupts (0.0%si — Software Interrupts)
 The amount of CPU ‘stolen’ from this virtual machine by the hypervisor for other tasks
(such as running another virtual machine); this will be 0 on desktops and servers without
virtual machines (0.0%st — Steal Time)

4th and 5th Rows – memory usage

The fourth and fifth rows respectively indicate the use of physical memory (RAM) and swap, in
this order: total memory, memory in use, free memory, and buffers/cached memory.

Following Rows — Processes list

Finally, ordered by CPU usage by default, there is the list of processes currently running.
Let’s see what information we can get from the different columns:
 PID – the ID of the process (4522)
 USER – The user that is the owner of the process (root)
 PR – priority of the process (15)
 NI – The “NICE” value of the process (0)
 VIRT – virtual memory used by the process (132m)
 RES – physical memory used by the process (14m)
 SHR – shared memory of the process (3204)
 S – indicates the status of the process: S=sleep R=running Z=zombie (S)
 %CPU – This is the percentage of CPU used by this process (0.3)
 %MEM – This is the percentage of RAM used by the process (0.7)
 TIME+ – This is the total time of activity of this process (0:17.75)
 COMMAND – And this is the name of the process (bb_monitor.pl)

Conclusions

Now that we have seen in detail all the information that the “top” command returns, it will be
easier to understand the reason for excessive load and/or the slowing down of the system.
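
Two handy ways to run top beyond the default interactive view (these flags are standard in the procps
version of top; adjust the user name to your own):

$ top -b -n 1 > top_snapshot.txt    # batch mode: take a single snapshot and save it to a file
$ top -u iafzal                     # show only the processes owned by the user iafzal

Inside the interactive view, pressing M sorts the process list by memory usage, P sorts by CPU usage and
q quits top.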
Recover/Reset Root Password

1 – In the GRUB boot menu, select the entry to edit

2 – Press (e) to edit the selected entry.


3 – Go to the line containing ro and change ro to rw init=/sysroot/bin/sh

4 – Now press Control+x to boot into single-user mode

5 – Now access the system with this command.


chroot /sysroot

6 – Reset the password.


passwd root

7 – Exit chroot
exit
8 - Reboot your system
reboot
SIGHUP - The SIGHUP signal disconnects a process from its controlling terminal (a "hang up"). It can also
be used to make processes restart or reload their configuration. For example, "killall -SIGHUP compiz" will
restart Compiz. This is useful for daemons with memory leaks.

SIGINT - This signal is the same as pressing ctrl-c. On some systems, "delete" + "break" sends the
same signal to the process. The process is interrupted and stopped. However, the process can ignore
this signal.

SIGQUIT - This is like SIGINT with the ability to make the process produce a core dump.

SIGILL - When a process performs a faulty, forbidden, or unknown function, the system sends the
SIGILL signal to the process. This is the ILLegal SIGnal.

SIGTRAP - This signal is used for debugging purposes. When a process has performed an action or
a condition is met that a debugger is waiting for, this signal will be sent to the process.

SIGABRT - This kill signal is the abort signal. Typically, a process will initiate this kill signal on
itself.

SIGBUS - When a process is sent the SIGBUS signal, it is because the process caused a bus error.
Commonly, these bus errors are due to a process trying to use fake physical addresses or the process
has its memory alignment set incorrectly.

SIGFPE - Processes that perform an erroneous arithmetic operation, such as dividing by zero, are sent the
SIGFPE signal.

SIGKILL - The SIGKILL signal forces the process to stop executing immediately. The program
cannot ignore this signal. This process does not get to clean-up either.

SIGUSR1 - This indicates a user-defined condition. Programs can install their own handler for this signal
and use it for whatever purpose they choose.

SIGSEGV - When an application has a segmentation violation, this signal is sent to the process.

SIGUSR2 - This indicates a user-defined condition.

SIGPIPE - When a process tries to write to a pipe that lacks an end connected to a reader, this
signal is sent to the process. A reader is a process that reads data at the end of a pipe.

SIGALRM - SIGALRM is sent when the real time or clock time timer expires.
SIGTERM - This signal requests a process to stop running. This signal can be ignored. The process
is given time to gracefully shutdown. When a program gracefully shuts down, that means it is given
time to save its progress and release resources. In other words, it is not forced to stop. SIGINT is
very similar to SIGTERM.

SIGCHLD - When a child process terminates or stops, the parent process is sent the SIGCHLD signal.
Handling it allows the parent to clean up resources used by the child process. In computers, a child process
is a process started by another process, known as the parent.

SIGCONT - To make processes continue executing after being paused by the SIGTSTP or
SIGSTOP signal, send the SIGCONT signal to the paused process. This is the CONTinue SIGnal.
This signal is beneficial to Unix job control (executing background tasks).

SIGSTOP - This signal makes the operating system pause a process's execution. The process cannot
ignore the signal.

SIGTSTP - This signal is like pressing ctrl-z. This makes a request to the terminal containing the
process to ask the process to stop temporarily. The process can ignore the request.

SIGTTIN - When a background process attempts to read from a tty (computer terminal), the process receives
this signal.

SIGTTOU - When a background process attempts to write to a tty (computer terminal), the process receives
this signal.

SIGURG - When a process has urgent data to be read or the data is very large, the SIGURG signal
is sent to the process.

SIGXCPU - When a process uses the CPU past the allotted time, the system sends the process this
signal. SIGXCPU acts like a warning; the process has time to save the progress (if possible) and
close before the system kills the process with SIGKILL.

SIGXFSZ - Filesystems have a limit to how large a file can be made. When a program tries to
violate this limit, the system will send that process the SIGXFSZ signal.

SIGVTALRM - SIGVTALRM is sent when CPU time used by the process elapses.

SIGPROF - SIGPROF is sent when CPU time used by the process and by the system on behalf of
the process elapses.

SIGWINCH - When a process is in a terminal that changes its size, the process receives this signal.
SIGIO - Alias to SIGPOLL or at least behaves much like SIGPOLL.

SIGPWR - Power failures will cause the system to send this signal to processes (if the system is still
on).

SIGSYS - Processes that give a system call an invalid parameter will receive this signal.

SIGRTMIN* - This is a set of real-time signals that varies between systems. They are labeled
SIGRTMIN+1, SIGRTMIN+2, SIGRTMIN+3, ......., and so on (usually up to 15). These are application-
defined signals; a program can install its own handlers for them and use them for any purpose.

SIGRTMAX* - This is a set of real-time signals that varies between systems. They are labeled SIGRTMAX-
1, SIGRTMAX-2, SIGRTMAX-3, ......., and so on (usually up to 14). These are application-defined signals,
used in the same way as the SIGRTMIN set.

SIGEMT - Processes receive this signal when an emulator trap occurs.

SIGINFO - Terminals may sometimes send status requests to processes. When this happens,
processes will also receive this signal.

SIGLOST - Processes trying to access locked files will get this signal.

SIGPOLL - When a process causes an asynchronous I/O event, that process is sent the SIGPOLL
signal.
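
To see some of these signals in action from the shell, the short sketch below installs handlers with the
shell built-in trap and then exercises them with kill (the script name and the one-second sleep are
arbitrary):

#!/bin/bash
# catcher.sh - demonstrate catching SIGINT and SIGTERM; SIGKILL can never be caught
trap 'echo "Got SIGINT (ctrl-c), ignoring it"' INT
trap 'echo "Got SIGTERM, cleaning up"; exit 0' TERM
while true; do
    sleep 1
done

Run it in the background and send it signals by job number:

$ ./catcher.sh &       # start the script in the background
$ kill -INT %1         # the INT handler prints a message and the script keeps running
$ kill -TERM %1        # the TERM handler prints a message and the script exits cleanly
$ kill -KILL %1        # only needed if a process refuses to die; KILL cannot be trapped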
UNIX Kernel:

Technically speaking, the UNIX kernel "is" the operating system. It provides the basic full time software
connection to the hardware. By full time, it means that the kernel is always running while the computer
is turned on. When a system boots up, kernel is loaded. Likewise, the kernel is only exited when the
computer is turned off.

The UNIX kernel is built specifically for a machine when it is installed. It has a record of all the pieces of
hardware it needs to talk to and knows what languages they speak (how to turn switches on and off to
get a desired result). Thus, a kernel is not easily ported to another computer. Each individual computer
will have its own tailor-made kernel. If the computer's hardware configuration changes during its life,
the kernel must be "rebuilt" (told about the new pieces of hardware).

However, though the connection between the kernel and the hardware is "hardcoded" to a specific
machine, the connection between the user and the kernel is generic. That is the beauty of the UNIX
kernel. From your perspective, regardless of how the kernel interacts with the hardware, no matter which
UNIX computer you use, you will have the same kernel interface to work with. That is because the
hardware is "hidden" by the kernel.

The kernel also handles memory management, input and output requests, and process scheduling for
time-shared operations (we'll talk more about what this means later).
To help it with its work, the kernel also executes daemon programs which stay alive as long as the
machine is turned on and help perform tasks such as printing or serving web documents.

However, the task of hiding the hardware is a pretty much full time job for the kernel. As such, it does
not have too much time to provide for a fancy user-friendly interface. Thus, though the kernel is much
easier to talk to than the hardware, the language of the kernel is still pretty cryptic.

Fortunately, the UNIX operating system has built in "shells" which wrap around the kernel and provide a
much more user-friendly interface. Let's take a look at shells.
Shells

The shell sits between you and the kernel, acting as a command interpreter. It reads your terminal input
and translates the commands into actions taken by the system. The shell is analogous to COMMAND.COM in
DOS. When you log into the system you are given a default shell. When the shell starts up it reads its
startup files and may set environment variables, command search paths, and command aliases, and
executes any commands specified in these files.

The original shell was the Bourne shell, sh. Every Linux platform will either have the Bourne shell, or a
Bourne compatible shell available. It has very good features for controlling input and output, but is not
well suited for the interactive user. To meet the latter need the C shell, csh, was written and is now found
on most, but not all, Linux systems. It uses C type syntax, the language Unix is written in, but has a more
awkward input/output implementation. It has job control, so that you can reattach a job running in the
background to the foreground. It also provides a history feature which allows you to modify and repeat
previously executed commands.

The default prompt for the Bourne shell is $ (or #, for the root user). The default prompt for C shell is %.

Numerous other shells are available from the network. Almost all of them are based on either sh or csh
with extensions to provide job control to sh, allow in-line editing of commands, page through previously
executed commands, provide command name completion and custom prompt, etc. Some of the more
well known of these may be on your favorite Linux system: the Korn shell, ksh, by David Korn and the
Bourne Again Shell, bash, from the Free Software Foundations GNU project, both based on sh, the T-C
shell, tcsh, and the extended C shell, cshe, both based on csh. Below we will describe some of the features
of sh and csh so that you can get started.

Built-in Commands
The shells have a number of built-in, or native commands. These commands are executed directly in the
shell and don’t have to call another program to be run. These built-in commands are different for the
different shells.

sh
For the Bourne shell some of the more commonly used built-in commands are:
: null command
. source (read and execute) commands from a file
case case conditional loop
cd change the working directory (default is $HOME)
echo write a string to standard output
eval evaluate the given arguments and feed the result back to the shell
exec execute the given command, replacing the current shell
exit exit the current shell
export share the specified environment variable with subsequent shells
for for conditional loop
if if conditional loop
pwd print the current working directory
read read a line of input from stdin
set set variables for the shell
test evaluate an expression as true or false
trap trap for a typed signal and execute commands
umask set a default file permission mask for new files
unset unset shell variables
wait wait for a specified process to terminate
while while conditional loop

csh
For the C shell the more commonly used built-in functions are:
alias assign a name to a function
bg put a job into the background
cd change the current working directory
echo write a string to stdout
eval evaluate the given arguments and feed the result back to the shell
exec execute the given command, replacing the current shell
exit exit the current shell
fg bring a job to the foreground
foreach for conditional loop
glob do filename expansion on the list, but no "\" escapes are honored
history print the command history of the shell
if if conditional loop
jobs list or control active jobs
kill kill the specified process
limit set limits on system resources
logout terminate the login shell
nice command lower the scheduling priority of the process, command
nohup command do not terminate command when the shell exits
set set a shell variable
setenv set an environment variable for this and subsequent shells
stop stop the specified background job
umask set a default file permission mask for new files
unalias remove the specified alias name
unset unset shell variables
while while conditional loop
Environment Variables

Environmental variables are used to provide information to the programs you use. You can have both
global environment and local shell variables. Global environment variables are set by your login
shell and new programs and shells inherit the environment of their parent shell. Local shell variables
are used only by that shell and are not passed on to other processes. A child process cannot pass a
variable back to its parent process.

The current environment variables are displayed with the "env" or "printenv" commands. Some
common ones are:

• DISPLAY The graphical display to use, e.g. nyssa:0.0


• EDITOR The path to your default editor, e.g. /usr/bin/vi
• GROUP Your login group, e.g. staff
• HOME Path to your home directory, e.g. /home/frank
• HOST The hostname of your system, e.g. nyssa
• IFS Internal field separators, usually any white space (defaults to tab, space
and <newline>)
• LOGNAME The name you login with, e.g. frank
• PATH Paths to be searched for commands, e.g. /usr/bin:/usr/ucb:/usr/local/bin
• PS1 The primary prompt string, Bourne shell only (defaults to $)
• PS2 The secondary prompt string, Bourne shell only (defaults to >)
• SHELL The login shell you’re using, e.g. /usr/bin/csh
• TERM Your terminal type, e.g. xterm
• USER Your username, e.g. frank

Many environment variables will be set automatically when you login. You can modify them or define
others with entries in your startup files or at any time within the shell. Some variables you might want
to change are PATH and DISPLAY. The PATH variable specifies the directories to be automatically
searched for the command you specify. Examples of this are in the shell startup scripts below.
You set a global environment variable with a command similar to the following for the C shell:
% setenv NAME value
and for Bourne shell:
$ NAME=value; export NAME
You can list your global environmental variables with the env or printenv commands. You unset them
with the unsetenv (C shell) or unset (Bourne shell) commands.
To set a local shell variable use the set command with the syntax below for C shell. Without options
set displays all the local variables.
% set name=value
For the Bourne shell set the variable with the syntax:
$ name=value
The current value of the variable is accessed via the "$name", or "${name}", notation
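
A small sketch showing the difference between a local shell variable and an exported (global) environment
variable in a Bourne-style shell; only the exported variable is visible in the child shell:

$ LOCALVAR=hello
$ GLOBALVAR=world; export GLOBALVAR
$ sh -c 'echo "local=$LOCALVAR global=$GLOBALVAR"'
local= global=world
$ echo $LOCALVAR
hello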
Shell

Whenever you login to a Linux system you are placed in a shell program. The shell's prompt is
usually visible at the cursor's position on your screen. To get your work done, you enter
commands at this prompt.

The shell is a command interpreter; it takes each command and passes it to the operating
system kernel to be acted upon. It then displays the results of this operation on your screen.

Several shells are usually available on any UNIX system, each with its own strengths and
weaknesses.

Different users may use different shells. Initially, your system administrator will supply a
default shell, which can be overridden or changed. The most commonly available shells are:
 Bourne shell (sh)
 C shell (csh)
 Korn shell (ksh)
 TC Shell (tcsh)
 Bourne Again Shell (bash)

Each shell also includes its own programming language. Command files, called "shell scripts"
are used to accomplish a series of tasks.

Shell Script Execution


Once a shell script is created, there are several ways to execute it. However, before any Korn
shell script can be executed, it must be assigned the proper permissions. The chmod command
changes permissions on individual files, in this case giving execute permission to the simple file:

$ chmod +x simple

After that, you can execute the script by specifying the filename as an argument to the ksh
command:

$ ksh simple
You can also execute a script by just typing its name alone. However, for that method to work,
the directory containing the script must be defined in your PATH variable. When looking at
your .profile earlier in the course, you may have noticed that the PATH=$PATH:$HOME
definition was already in place. This enables you to run scripts located in your home directory
($HOME) without using the ksh command. For instance, because of that pre-defined PATH
variable, the simple script can be run from the command line like this:

$ simple

(For the purposes of this course, we'll simplify things by running all scripts by their script name
only, not as an argument to the ksh command.)

You can also invoke the script from your current shell by opening a background subprocess - or
subshell - where the actual command processing will occur. You won't see it running, but it will
free up your existing shell so you can continue working. This is really only necessary when
running long, processing-intensive scripts that would otherwise take over your current shell
until they complete.

To run the script you created in the background, invoke it this way:

$ simple &

When the script completes, you'll see output similar to this in the current shell:

[1] - Done (127) simple

It is important to understand that Korn shell scripts run in a somewhat different way than they
would in other shells. Specifically, variables defined in the Korn shell aren't understood outside
of the defining - or parent - shell. They must be explicitly exported from the parent shell to work
in a subsequent script or subshell. If you use the export or typeset -x commands to make the
variable known outside the parent shell, any subshell will automatically inherit the values you
defined for the parent shell.

For example, here's a script named lookmeup that does nothing more than print a line to
standard output using the myaddress (defined as 123 Anystreet USA) variable:
$ cat lookmeup
print "I live at $myaddress"

If you open a new shell (using the ksh command) from the parent shell and run the script, you
see that myaddress is undefined:

$ ksh
$ lookmeup
I live at

However, if you export myaddress from the parent shell:

$ exit
$ export myaddress

and then open a new shell and run the lookmeup script again, the variable is now defined:

$ ksh
$ lookmeup
I live at 123 Anystreet USA

To illustrate further how the parent shell takes processing precedence, let's change the value of
myaddress in the subshell:

$ myaddress='Houston, Texas'

$ print $myaddress
Houston, Texas

Now, if you exit the new shell and go back to the parent shell and type the same command:

$ exit
$ print $myaddress
123 Anystreet USA
you see that the original value in the parent shell was not affected by what you did in the
subshell.

A way to export variables automatically is to use the set -o allexport command. This command
cannot export variables to the parent shell from a subshell, but can export variables created in
the parent shell to all subshells created after the command is run. Likewise, it can automatically
export variables created in subshells to new subshells created after running the command. set -o
allexport is a handy command to place in your .kshrc file.
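A minimal sketch of that behavior, reusing the lookmeup and myaddress example from above (assuming a Korn shell):

$ set -o allexport
$ myaddress='123 Anystreet USA'
$ ksh
$ lookmeup
I live at 123 Anystreet USA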

Shell Scripts - An Illustrated View


At the risk of sounding redundant, let's recap: shell scripts are simply files containing a list of
commands to be executed in sequence. Now let's go a bit further and look at a shell script, line-
by-line.

Any Korn shell script should contain this line at the very beginning:

#!/usr/bin/ksh

As you probably already know, the # sign marks anything that follows it on the line as a
comment - anything coming after it won't be interpreted or processed as part of the script. But,
when the # character is followed by a ! (commonly called "bang"), the meaning changes. The line
above specifies that the Korn shell will be (or should be) executing the script. If nothing is
specified, the system will attempt to execute the script using whatever its default shell type is,
not necessarily a Korn shell. Since the Korn shell supports some commands that other shells do
not, this can sometimes cause a problem. To take effect, this line must be the very first line of
the script.

Shell scripts are often used to automate day-to-day tasks. For example, a system administrator
might use the following script, named diskuse here, to keep track of disk space usage:

#!/usr/bin/ksh
# diskuse
# Shows disk usage in blocks for /home
cd /var/log
cp disk.log disk.log.0
cd /home
du -sk * > /var/log/disk.log
cat /var/log/disk.log

Shown again - but this time with annotation - the script's processing steps are clear:

#!/usr/bin/ksh
# SCRIPT NAME: diskuse
# SCRIPT PURPOSE: Shows disk usage in blocks for /home

# change to the directory where disk.log resides


cd /var/log

# make a copy of disk.log


cp disk.log disk.log.0

# change to the target directory


cd /home

# run the du -sk * command on all files


# in /home and redirect the output
# to /var/log/disk.log
du -sk * > /var/log/disk.log

# display the output of the du -sk *


# command to standard output
cat /var/log/disk.log

It's not a good idea to hard-code pathnames into your scripts like we did in the previous
example. We specified /var/log as the target directory several times, but what if the location of
the files changed? In a short script like this one, the impact is not great. However, some scripts
can be hundreds of lines long, creating a maintenance headache if files are moved. A way
around this is to create a variable to take the place of the full pathname, such as:

LOGDIR=/var/log

The cp disk.log line of the script would then change from:

cp disk.log disk.log.0
to:

cp ${LOGDIR}/disk.log ${LOGDIR}/disk.log.0

Then, if the location of disk.log changes in the future, you would only have to change the
variable definition to update the script. Also note that since you are defining the pathname with
the LOGDIR variable, the cd /var/log line in the script is unnecessary. Likewise, the du -sk * >
/var/log/disk.log and cat /var/log/disk.log lines can substitute ${LOGDIR} for /var/log.
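
Putting those changes together, one possible rewrite of the diskuse script looks like this (a sketch, not the only way to do it):

#!/usr/bin/ksh
# diskuse - Shows disk usage in blocks for /home
# Keep the log location in one place so it is easy to change later
LOGDIR=/var/log

cp ${LOGDIR}/disk.log ${LOGDIR}/disk.log.0
cd /home
du -sk * > ${LOGDIR}/disk.log
cat ${LOGDIR}/disk.log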
Basic Shell Scripts:

Output to screen

#!/bin/bash
# Simple output script

echo "Hello World"

Defining Tasks

#!/bin/bash
# Define small tasks

whoami
echo
pwd
echo
hostname
echo
ls -ltr
echo

Defining variables

#!/bin/bash
# Example of defining variables

a=Imran
b=Afzal
c='Linux class'

echo "My first name is $a"


echo "My surname is $b"
echo 'My surname is $c'

Read Input

#!/bin/bash
# Read user input

echo "What is your first name?"


read a
echo

echo "What is your last name?"


read b
echo

echo Hello $a $b

Scripts to run commands within

#!/bin/bash
# Script to run commands within

clear
echo "Hello `whoami`"
echo
echo "Today is `date`"
echo
echo "Number of user login: `who | wc -l `"
echo

Read input and perform a task

#!/bin/bash
# This script will rename a file

echo Enter the file name to be renamed


read oldfilename

echo Enter the new file name


read newfilename

mv $oldfilename $newfilename
echo The file has been renamed as $newfilename
for loop Scripts:

Simple for loop output

#!/bin/bash

for i in 1 2 3 4 5
do
echo "Welcome $i times"
done

Simple for loop output

#!/bin/bash

for i in eat run jump play


do
echo See Imran $i
done

for loop to create 5 files named 1-5

#!/bin/bash

for i in {1..5}
do
touch $i
done

for loop to delete 5 files named 1-5

#!/bin/bash

for i in {1..5}
do
rm $i
done

Specify days in for loop

#!/bin/bash

i=1
for day in Mon Tue Wed Thu Fri
do
echo "Weekday $((i++)) : $day"
done

List all users one by one from /etc/passwd file

#!/bin/bash

i=1
for username in `awk -F: '{print $1}' /etc/passwd`
do
echo "Username $((i++)) : $username"
done
while loop Scripts:

Script to run for a number of times

#!/bin/bash

c=1
while [ $c -le 5 ]
do
echo "Welcone $c times"
(( c++ ))
done

Script to run for a number of seconds

#!/bin/bash

count=0
num=10
while [ $count -lt 10 ]
do
echo
echo $num seconds left to stop this process $1
echo
sleep 1

num=`expr $num - 1`
count=`expr $count + 1`
done
echo
echo $1 process is stopped!!!
echo
If-then Scripts:

Check the variable

#!/bin/bash

count=100
if [ $count -eq 100 ]
then
echo Count is 100
else
echo Count is not 100
fi

Check if a file error.txt exist

#!/bin/bash

clear
if [ -e /home/iafzal/error.txt ]

then
echo "File exist"
else
echo "File does not exist"
fi

Check if a variable value is met

#!/bin/bash

a=`date | awk '{print $1}'`

if [ "$a" == Mon ]

then
echo Today is $a
else
echo Today is not Monday
fi
Check the response and then output

#!/bin/bash

clear
echo
echo "What is your name?"
echo
read a
echo

echo Hello $a sir


echo

echo "Do you like working in IT? (y/n)"


read Like
echo

if [ "$Like" == y ]
then
echo You are cool

elif [ "$Like" == n ]
then
echo You should try IT, it’s a good field
echo
fi

Other If statements

If the output is either Monday or Tuesday


if [ "$a" = Monday ] || [ "$a" = Tuesday ]

Test if the error.txt file exist and its size is greater than zero
if test -s error.txt

if [ $? -eq 0 ]                      If the exit status of the last command is zero (0)


if [ -e /export/home/filename ]      If the file exists
if [ "$a" != "" ]                    If the variable is not empty
if [ "$error_code" != "0" ]          If the error code is not equal to zero (0)

Comparisons:
-eq    equal to (numbers)
==     equal to (strings)
-ne    not equal to (numbers)
!=     not equal to (strings)
-lt    less than
-le    less than or equal to
-gt    greater than
-ge    greater than or equal to
File Operations:
-s     file exists and is not empty
-f     file exists and is not a directory
-d     directory exists
-x     file is executable
-w     file is writable
-r     file is readable
(A short example combining some of these operators follows below.)
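
To tie these operators together, here is a small sketch that checks a file and a number (the file name and values are only illustrations):

#!/bin/bash
# Demonstrate the test operators listed above

logfile=/home/iafzal/error.txt
count=5

# -f : regular file exists; -s : file exists and is not empty
if [ -f "$logfile" ] && [ -s "$logfile" ]
then
    echo "$logfile exists and is not empty"
fi

# -ge and -le : numeric comparisons
if [ $count -ge 1 ] && [ $count -le 10 ]
then
    echo "count is between 1 and 10"
fi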
case Scripts:

#!/bin/bash

echo
echo Please choose one of the options below
echo
echo 'a = Display Date and Time'
echo 'b = List file and directories'
echo 'c = List users logged in'
echo 'd = Check System uptime'
echo

read choices

case $choices in

a) date;;
b) ls;;
c) who;;
d) uptime;;
*) echo Invalid choice - Bye.

esac

This script will look at your current day and tell you the state of the
backup

#!/bin/bash

NOW=$(date +"%a")
case $NOW in
Mon)
echo "Full backup";;
Tue|Wed|Thu|Fri)
echo "Partial backup";;
Sat|Sun)
echo "No backup";;
*) ;;
esac
Aliases

 The alias command allows you to define new commands. Useful for creating shortcuts
for longer commands. The syntax is:

alias alias-name=executed_command

Some examples:

alias m=more
alias rm="rm -i"
alias h="history -r | more"

To view all current aliases:


alias

To remove a previously defined alias:


unalias alias_name
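
Aliases defined at the command line last only for the current shell session. To make them permanent, they are usually added to a shell startup file (a bash example; the alias shown is just an illustration):

echo 'alias rm="rm -i"' >> ~/.bashrc
source ~/.bashrc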
System Run Level

A run level is a preset operating state on a Unix-like operating system.

A system can be booted into (i.e., started up into) any of several runlevels, each of which is
represented by a single digit integer. Each runlevel designates a different system configuration
and allows access to a different combination of processes (i.e., instances of executing programs).

There are differences in the runlevels according to the operating system. Seven runlevels are
supported in the standard Linux kernel (i.e., core of the operating system). They are:

0 - System halt; no activity, the system can be safely powered down.

1 - Single user; rarely used.

2 - Multiple users, no NFS (network filesystem); also used rarely.

3 - Multiple users, command line (i.e., all-text mode) interface; the standard runlevel for most
Linux-based server hardware.

4 - User-definable

5 - Multiple users, GUI (graphical user interface); the standard runlevel for most Linux-based
desktop systems.

6 - Reboot; used when restarting the system.

By default Linux boots either to runlevel 3 or to runlevel 5. The former permits the system to
run all services except for a GUI. The latter allows all services including a GUI.

In addition to the standard runlevels, users can modify the preset runlevels or even create new
ones if desired. Runlevels 2 and 4 are usually used for user defined runlevels.

The program responsible for altering the runlevel is init, and it can be called using the telinit
command. For example, changing from runlevel 3 to runlevel 5, which allows the GUI to be
started, can be accomplished by the root (i.e., administrative) user by issuing the following
command:
telinit 5

Booting into a different runlevel can help solve certain problems. For example, if a change made
in the X Window System configuration on a machine that has been set up to boot into a GUI has
rendered the system unusable, it is possible to temporarily boot into a console (i.e., all-text
mode) runlevel (i.e., runlevels 3 or 1) in order to repair the error and then reboot into the GUI.
The X Window System is a widely used system for managing GUIs on single computers and on
networks of computers.

Likewise, if a machine will not boot due to a damaged configuration file or will not allow
logging in because of a corrupted /etc/passwd file (which stores user names and other data
about users) or because of a forgotten password, the problem can be solved by first booting into
single-user mode (i.e. runlevel 1).

The runlevel command can be used to find both the current runlevel and the previous runlevel
by merely typing the following and pressing the Enter key:

/sbin/runlevel

The runlevel executable file (i.e., the ready-to-run form of the program) is typically located in
the /sbin directory, which contains mostly administrative tools and which by default is not in
the user's PATH (i.e., the list of directories in which the system searches for programs). Thus, it
is usually necessary to type the full path of the command as shown above rather than just the
name of the command itself.

The default runlevel for a system is specified in the /etc/inittab file, which will contain an entry
such as id:3:initdefault: if the system starts in runlevel 3, or id:5:initdefault: if it starts in runlevel
5. This file can be easily (and safely) read with a command such as cat, i.e.,

cat /etc/inittab

As an alternative to telinit, the runlevel into which the system boots can be changed by
modifying /etc/inittab manually with a text editor. However, it is generally easier and safer (i.e.,
less chance of accidental damage to the file) to use telinit. It is always wise to make a backup
copy of /etc/inittab or any other configuration file before attempting to modify it manually.
Partitioning a Disk

Linux
# fdisk /dev/emcpowerp OR fdisk /dev/sdb
m  n  p  1  enter  enter  w

Then format the new partition


# mkfs -t ext2 /dev/sdb1
OR

# mkfs.ext3 /dev/emcpowerL# OR mkfs.ext3 /dev/sdb1


Mount Disk Partitions:

 First make sure the slice has no data

Create directories that will be mounted on a slice

e.g:

# mkdir /rocket
# cd /rocket

# mkdir IFMX_ROCKET
# mkdir ROCKET_DATA

Mount slice to the directory


e.g:
Linux
# mount /dev/sdb1 /rocket/ROCKET_DATA
# mount /dev/sdb2 /rocket/IFMX_ROCKET

Add these entries to /etc/fstab file so the system can mount on boot up
# cp /etc/fstab /etc/fstab.bak

vi /etc/fstab and add the following lines


/dev/sdb1 /rocket/ROCKET_DATA ext4 defaults 1 1
/dev/sdb2 /rocket/IFMX_ROCKET ext4 defaults 1 1
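
After editing /etc/fstab, the new entries can be checked without rebooting (mount -a mounts everything listed in /etc/fstab that is not already mounted):

# mount -a
# df -h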
Create LVM Partition, Volume Group, and Logical Volume:

 fdisk /dev/sdc
 n
 p
 Enter for first sector
 Enter for last sector
 p = print the partition table
 t = change a partition's system id
 L = type L to list all codes
 8e = Partition type from Linux to Linux LVM
 w

 Create Physical Volume (PV) = pvcreate /dev/sdc1


 Verify physical volume = pvdisplay

 Create Volume Group (VG) = vgcreate oracle_vg /dev/sdc1


 Verify Volume group = vgdisplay oracle_vg

 Create Logical Volumes (LV) = lvcreate -n oracle_lv --size 2G oracle_vg


 Verify logical volumes = lvdisplay

 Format Logical Volumes = mkfs.xfs /dev/oracle_vg/oracle_lv

 Create a new directory = mkdir /oracle

 Mount the new file system = mount /dev/oracle_vg/oracle_lv /oracle

 Verify = df -h
To extend filesystem of a Linux VM using LVM

Go to your virtualization product (VMWare or Oracle Virtual Box)


 Increase the disk space to desired number and then click ok

Now go to your Linux VM


 Reboot the VM to have the system re-scan the newly added disk Or
 cd /sys/class/scsi_disk/2:0:0:0
 echo '1' > device/rescan

 fdisk -l (To make sure the disk is increased)

 Create a new partition


o fdisk /dev/sdc
o n (for new partition)
o p (for primary partition)
o 2 (partition number, 2 or the new partition)
o Enter
o Enter
o t (change the new partition's type)
o 3 (pick the default value, i.e., the newly created partition)
o 8e (this sets the partition type to Linux LVM)
o w (write the changes)
o # reboot or init 6

Note: The above procedure will create /dev/sdc2 partition

 Extend the LVM group


o pvdisplay (To see which group is associated with which disk)
o pvs (Info about physical volumes)
o vgdisplay oracle_vg (oracle_vg is the group name or you can simply run vgdisplay)
On vgdisplay you will notice Free PE / Size at the bottom

o pvcreate /dev/sdc2 (Initialize partition for use by LVM)


o vgextend oracle_vg /dev/sdc2 (use whichever partition was created above)
o Run vgdisplay oracle_vg
check Free PE / Size; the second value is the free space available. If it is shown in G,
convert it to M (e.g. 1G = 1024M)
o lvextend -L +1024M /dev/mapper/oracle_vg-oracle_lv
o resize2fs /dev/mapper/oracle_vg-oracle_lv
o OR
o xfs_growfs /dev/mapper/oracle_vg-oracle_lv
Use a File for Additional Swap Space:

What is swap? – CentOS.org


Swap space in Linux is used when the amount of physical memory (RAM) is full. If the
system needs more memory resources and the RAM is full, inactive pages in memory are
moved to the swap space. While swap space can help machines with a small amount of
RAM, it should not be considered a replacement for more RAM. Swap space is located on
hard drives, which have a slower access time than physical memory

Recommended swap size = Twice the size of RAM


Let's say,
M = Amount of RAM in GB, and S = Amount of swap in GB, then

If M < 2
then S = M *2
Else S=M+2
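
As a quick illustration of that rule, a small sketch that suggests a swap size from the installed RAM (a hypothetical helper, not part of the standard tools; free -g rounds down, so treat the result as a rough guide):

#!/bin/bash
# Suggest a swap size (in GB) from the amount of RAM, per the rule above
M=$(free -g | awk '/^Mem:/ {print $2}')
if [ "$M" -lt 2 ]
then
    S=$((M * 2))
else
    S=$((M + 2))
fi
echo "RAM: ${M}G  ->  suggested swap: ${S}G"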

Commands
dd
mkswap
swapon or swapoff

Steps to Create Swap Space from Existing Disk:


If you don’t have any additional disks, you can create a file somewhere on your filesystem, and
use that file for swap space.

The following dd command example creates a swap file with the name “newswap” under /
directory with a size of 1024MB (1.0GB).

# dd if=/dev/zero of=/newswap bs=1M count=1024


Where
if = read from FILE instead of stdin
of = write to FILE instead of stdout
bs = read and write BYTES at a time
count = total size of the file

Change the permission of the swap file so that only root can access it.
# chmod go-r /newswap OR
# chmod 0600 /newswap

Make this file as a swap file using mkswap command.


# mkswap /newswap

Enable the newly created newswap.


# swapon /newswap

To make this swap file available as a swap area even after the reboot, add the following line to
the /etc/fstab file.
# cat /etc/fstab
/newswap swap swap defaults 0 0

Verify whether the newly created swap area is available for your use.
# swapon -s
# free -h

If you don’t want to reboot to verify whether the system takes all the swap space mentioned in
the /etc/fstab, you can do the following, which will disable and enable all the swap partition
mentioned in the /etc/fstab
# swapoff -a
# swapon -a
Overview of systemd for RHEL 7
The systemd system and service manager is responsible for controlling how services are started,
stopped and otherwise managed on Red Hat Enterprise Linux 7 systems. By offering on-demand
service start-up and better transactional dependency controls, systemd dramatically reduces start
up times. As a systemd user, you can prioritize critical services over less important services.

Although the systemd process replaces the init process (quite literally, /sbin/init is now a
symbolic link to /usr/lib/systemd/systemd) for starting services at boot time and changing
runlevels, systemd provides much more control than the init process does while still supporting
existing init scripts. Here are some examples of the features of systemd:

 Logging: From the moment that the initial RAM disk is mounted to start the Linux kernel
to final shutdown of the system, all log messages are stored by the new systemd journal.
Before the systemd journal existed, initial boot messages were lost, requiring that you try
to watch the screen as messages scrolled by to debug boot problems.
Now, all system messages come in on a single stream and are stored in the /run directory.
Messages can then be consumed by the rsyslog facility (and redirected to traditional log
files in the /var/log directory or to remote log servers) or displayed using the journalctl
command across a variety of attributes.
 Dependencies: With systemd, an explicit set of dependencies can be defined for each
service, instead of being implied by boot order. This allows a service to start at any point
that its dependencies are met. In this way, many services can start at the same time,
making the boot process faster. Likewise, complex sets of dependencies can be set up, so
the exact requirements of a service (such as storage availability or file system checking)
can be met before a service starts.
 Cgroups: Services are identified by Cgroups, which allow every component of a service
to be managed. For example, the older System V init scripts would start a service by
launching a process which itself might start other child processes. When the service was
killed, it was hoped that the parent process would do the right thing and kill its children.
By using Cgroups, all components of a service have a tag that can be used to make sure
that all of those components are properly started or stopped.
 Activating services: Services don't just have to be always running or not running based
on runlevel, as they were previous to systemd. Services can now be activated based on
path, socket, bus, timer, or hardware activation. Likewise, because systemd can set up
sockets, if a process handling communications goes away, the process that starts up in its
place can pick up the next message from the socket. To the clients using the service, it
can look as though the service continued without interruption.
 More than services: Instead of just managing services, systemd can manage several
different unit types. These unit types include:
o Devices: Create and use devices.
o Mounts and automounts: Mount file systems upon request or automount a file
system based on a request for a file or directory within that file system.
o Paths: Check the existence of files or directories or create them as needed.
o Services: Start a service, which often means launching a service daemon and
related components.
o Slices: Divide up computer resources (such as CPU and memory) and apply them
to selected units.
o Snapshots: Take snapshots of the current state of the system.
o Sockets: Set up sockets to allow communication paths to processes that can
remain in place, even if the underlying process needs to restart.
o Swaps: Create and use swap files or swap partitions.
o Targets: Manage a set of services under a single unit, represented by a target
name rather than a runlevel number.
o Timers: Trigger actions based on a timer.
 Resource management
o The fact that each systemd unit is always associated with its own cgroup lets you
control the amount of resources each service can use. For example, you can set a
percent of CPU usage by service which can put a cap on the total amount of CPU
that service can use -- in other words, spinning off more processes won't allow
more resources to be consumed by the service. Prior to systemd, nice levels were
often used to prevent processes from hogging precious CPU time. With systemd's
use of cgroups, precise limits can be set on CPU and memory usage, as well as
other resources.
o A feature called slices lets you slice up many different types of system resources
and assign them to users, services, virtual machines, and other units. Accounting
is also done on these resources, which can allow you to charge customers for their
resource usage.

Booting RHEL 7 with systemd


When you boot a standard X86 computer to run RHEL 7, the BIOS boots from the selected
medium (usually a local hard disk) and the boot loader (GRUB2 for RHEL 7) starts the RHEL 7
kernel and initial RAM disk. After that, the systemd process takes over to initialize the system
and start all the system services.

Although there is not a strict order in which services are started when a RHEL 7 (systemd)
system is booted, there is a structure to the boot process. The direction that the systemd process
takes at boot time depends on the default.target file. A long listing of the default.target file
shows you which target starts when the system boots:

# cd /etc/systemd/system
# ls -l default.target
lrwxrwxrwx. 1 root root 16 Aug 23 19:18 default.target ->
/lib/systemd/system/graphical.target

You can see here that the graphical.target (common for desktop systems or servers with
graphical interfaces) is set as the default.target (via a symbolic link). To understand what
targets, services and other units start up with the graphical target, it helps to work backwards, as
systemd does, to build the dependency tree. Here's what to look for:
 graphical.target: The /lib/systemd/system/graphical.target file includes these lines:
 Requires=multi-user.target
 Wants=display-manager.service
 Conflicts=rescue.service rescue.target
 After=multi-user.target rescue.service rescue.target display-
manager.service
 AllowIsolate=yes

This tells systemd to start everything in the multi-user.target before starting the graphical
target. Once that's done, the "Wants" entry tells systemd to start the display-
manager.service service (/etc/systemd/system/display-manager.service), which runs
the GNOME display manager (/usr/sbin/gdm).

 multi-user.target: The /usr/lib/systemd/system/multi-user.target starts the services


you would expect in a RHEL multi-user mode. The file contains the following line:

Requires=basic.target

This tells systemd to start everything in the /usr/lib/systemd/system/basic.target target


before starting the other multi-user services. After that, for the multi-user.target, all units
(services, targets, etc.) in the /etc/systemd/system/multi-user.target.wants and
/usr/lib/systemd/system/multi-user.target.wants directories are started. When you
enable a service, a symbolic link is placed in the /etc/systemd/system/multi-
user.target.wants directory. That directory is where you will find links to most of the
services you think of as starting in multi-user mode (printing, cron, auditing, SSH, and so
on). Here is an example of the services, paths, and targets in a typical multi-
user.target.wants directory:

# cd /etc/systemd/system/multi-user.target.wants
abrt-ccpp.service hypervkvpd.service postfix.service
abrtd.service hypervvssd.service remote-fs.target
abrt-oops.service irqbalance.service rhsmcertd.service
abrt-vmcore.service ksm.service rngd.service
abrt-xorg.service ksmtuned.service rpcbind.service
atd.service libstoragemgmt.service rsyslog.service
auditd.service libvirtd.service smartd.service
avahi-daemon.service mdmonitor.service sshd.service
chronyd.service ModemManager.service sysstat.service
crond.service netcf-transaction.service tuned.service
cups.path nfs.target vmtoolsd.service

 basic.target: The /usr/lib/systemd/system/basic.target file starts the basic services


associated with all running RHEL 7 systems. The file contains the following line:

Requires=sysinit.target

This points systemd to the /usr/lib/systemd/system/sysinit.target, which must start


before the basic.target can continue. The basic.target target file starts the firewalld and
microcode services from the /etc/systemd/system/basic.target.wants directory and
services for SELinux, kernel messages, and loading modules from the
/usr/lib/systemd/system/basic.target.wants directory.

 sysinit.target: The /usr/lib/systemd/system/sysinit.target file starts system initialization


services, such as mounting file systems and enabling swap devices. The file contains the
following line:

Wants=local-fs.target swap.target

Besides mounting file systems and enabling swap devices, the sysinit.target starts targets,
services, and mounts based on units contained in the
/usr/lib/systemd/system/sysinit.target.wants directory. These units enable logging, set
kernel options, start the udevd daemon to detect hardware, and allow file system
decryption, among other things. The /etc/systemd/system/sysinit.target.wants directory
contains services that start iSCSI, multipath, LVM monitoring and RAID services.

 local-fs.target: The local-fs.target is set to run after the local-fs-pre.target target, based
on this line:

After=local-fs-pre.target

There are no services associated with the local-fs-pre.target target (you could add some to
a "wants" directory if you like). However, units in the /usr/lib/systemd/system/local-
fs.target.wants directory import the network configuration from the initramfs, run a file
system check (fsck) on the root file system when necessary, and remount the root file
system (and special kernel file systems) based on the contents of the /etc/fstab file.

Although the boot process is built by systemd in the order just shown, it actually runs, in general,
in the opposite order. As a rule, a target on which another target is dependent must be running
before the units in the first target can start. To see more details about the boot process, see the
bootup man page (man 7 bootup).

Using the systemctl Command


The most important command for managing services on a RHEL 7 (systemd) system is the
systemctl command. Here are some examples of the systemctl command (using the nfs-server
service as an example) and a few other commands that you may find useful:

 Checking service status: To check the status of a service (for example, nfs-
server.service), type the following:
 # systemctl status nfs-server.service
 nfs-server.service - NFS Server
 Loaded: loaded (/usr/lib/systemd/system/nfs-server.service;
disabled)
 Active: active (exited) since Wed 2014-03-19 10:29:40 MDT; 57s ago
 Process: 5206 ExecStartPost=/usr/libexec/nfs-utils/scripts/nfs-
server.postconfig (code=exited, status=0/SUCCESS)
 Process: 5191 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT
(code=exited, status=0/SUCCESS)
 Process: 5188 ExecStartPre=/usr/sbin/exportfs -r (code=exited,
status=0/SUCCESS)
 Process: 5187 ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-
server.preconfig (code=exited, status=0/SUCCESS)
 Main PID: 5191 (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/nfs-server.service

 Mar 19 10:29:40 localhost.localdomain systemd[1]: Starting NFS
Server...
 Mar 19 10:29:40 localhost.localdomain systemd[1]: Started NFS Server.
 Stopping a service: To stop a service, use the stop option as follows:
 # systemctl stop nfs-server.service
 Starting a service: To start a service, use the start option as follows:
 # systemctl start nfs-server.service
 Enabling a service: To enable a service so it starts automatically at boot time, type the
following:
 # systemctl enable nfs-server.service
 Disable a service: To disable a service so it doesn't start automatically at boot time, type
the following:
 # systemctl disable nfs-server.service
 Listing dependencies: To see dependencies of a service, use the list-dependencies
option, as follows:
 # systemctl list-dependencies nfs-server.service
 nfs-server.service
 ├─nfs-idmap.service
 ├─nfs-mountd.service
 ├─nfs-rquotad.service
 ├─proc-fs-nfsd.mount
 ├─rpcbind.service
 ├─system.slice
 ├─var-lib-nfs-rpc_pipefs.mount
 └─basic.target
 ├─alsa-restore.service
 ├─alsa-state.service
 ...
 Listing units in targets: To see what services and other units (service, mount, path,
socket, and so on) are associated with a particular target, type the following:
 # systemctl list-dependencies multi-user.target
 multi-user.target
 ├─abrt-ccpp.service
 ├─abrt-oops.service
 ├─abrt-vmcore.service
 ├─abrt-xorg.service
 ├─abrtd.service
 ├─atd.service
 ├─auditd.service
 ├─avahi-daemon.service
 ├─brandbot.path
 ├─chronyd.service
 ├─crond.service
 ...
 List specific types of units: Use the following command to list specific types of units (in
these examples, service and mount unit types):
 # systemctl list-units --type service
 UNIT LOAD ACTIVE SUB DESCRIPTION
 abrt-ccpp.service loaded active exited Install ABRT
coredump hook
 abrt-oops.service loaded active running ABRT kernel log
watcher
 abrt-xorg.service loaded active running ABRT Xorg log
watcher
 abrtd.service loaded active running ABRT Automated Bug
Reporting
 accounts-daemon.service loaded active running Accounts Service
 ...

 # systemctl list-units --type mount
 UNIT LOAD ACTIVE SUB DESCRIPTION
 -.mount loaded active mounted /
 boot.mount loaded active mounted /boot
 dev-hugepages.mount loaded active mounted Huge Pages File
System
 dev-mqueue.mount loaded active mounted POSIX Message Queue
File Syst
 mnt-repo.mount loaded active mounted /mnt/repo
 proc-fs-nfsd.mount loaded active mounted RPC Pipe File System
 run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
 ...
 Listing all units: To list all units installed on the system, along with their current states,
type the following:
 # systemctl list-unit-files
 UNIT FILE STATE
 proc-sys-fs-binfmt_misc.automount static
 dev-hugepages.mount static
 dev-mqueue.mount static
 proc-sys-fs-binfmt_misc.mount static
 ...
 arp-ethers.service disabled
 atd.service enabled
 auditd.service enabled
 ...
 View service processes with systemd-cgtop: To view processes associated with a
particular service (cgroup), you can use the systemd-cgtop command. Like the top
command (which sorts processes by such things as CPU and memory usage), systemd-
cgtop lists running processes based on their service (cgroup label). Once systemd-cgtop
is running, you can press keys to sort by memory (m), CPU (c), task (t), path (p), or I/O
load (i). Here is an example:
 # systemd-cgtop
 Recursively view cgroup contents: To output a recursive list of cgroup content, use the
systemd-cgls command:
 # systemd-cgls
 ├─user.slice
 │ ├─user-1000.slice
 │ │ ├─session-5.scope
 │ │ │ ├─2661 gdm-session-worker [pam/gdm-password]
 │ │ │ ├─2672 /usr/bin/gnome-keyring-daemon --daemonize --login
 │ │ │ ├─2674 gnome-session --session gnome-classic
 │ │ │ ├─2682 dbus-launch --sh-syntax --exit-with-session
 │ │ │ ├─2683 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --
session
 │ │ │ ├─2748 /usr/libexec/gvfsd
 ...
 View journal (log) files: Using the journalctl command you can view messages from
the systemd journal. Using different options you can select which group of messages to
display. The journalctl command also supports tab completion to fill in fields for which
to search. Here are some examples:
 # journalctl -h View help for the command
 # journalctl -k View kernel messages from current boot
 # journalctl -f Follow journal messages (like tail -f)
 # journalctl -u NetworkManager View messages for specific unit (can
tab complete)

Comparing systemd to Traditional init


Some of the benefits of systemd over the traditional System V init facility include:

 systemd never loses initial log messages


 systemd can respawn daemons as needed
 systemd records runtime data (i.e., captures stdout/stderr of processes)
 systemd doesn't lose daemon context during runtime
 systemd can kill all components of a service cleanly

Here are some details of how systemd compares to pre-RHEL 7 init and related commands:

 System startup: The systemd process is the first process ID (PID 1) to run on RHEL 7
system. It initializes the system and launches all the services that were once started by the
traditional init process.
 Managing system services: For RHEL 7, the systemctl command replaces service and
chkconfig. Prior to RHEL 7, once RHEL was up and running, the service command was
used to start and stop services immediately. The chkconfig command was used to
identify at which run levels a service would start or stop automatically.
Although you can still use the service and chkconfig commands to start/stop and
enable/disable services, respectively, they are not 100% compatible with the RHEL 7
systemctl command. For example, non-standard service options, such as those that start
databases or check configuration files, may not be supported in the same way for RHEL 7
services.
 Changing runlevels: Prior to RHEL 7, runlevels were used to identify a set of services
that would start or stop when that runlevel was requested. Instead of runlevels, systemd
uses the concept of targets to group together sets of services that are started or stopped. A
target can also include other targets (for example, the multi-user target includes an nfs
target).
There are systemd targets that align with the earlier runlevels. However the point of
targets is not to necessarily imply a level of activity (for example, runlevel 3 implied
more services were active than runlevel 1). Instead targets just represent a group of
services, so it's appropriate that there are many more targets available than there are
runlevels. The following list shows how systemd targets align with traditional runlevels:
 Traditional runlevel New target name Symbolically linked to...
 Runlevel 0 | runlevel0.target -> poweroff.target
 Runlevel 1 | runlevel1.target -> rescue.target
 Runlevel 2 | runlevel2.target -> multi-user.target
 Runlevel 3 | runlevel3.target -> multi-user.target
 Runlevel 4 | runlevel4.target -> multi-user.target
 Runlevel 5 | runlevel5.target -> graphical.target
 Runlevel 6 | runlevel6.target -> reboot.target
 Default runlevel: The default runlevel (previously set in the /etc/inittab file) is now
replaced by a default target. The location of the default target is
/etc/systemd/system/default.target, which by default is linked to the multi-user target.
 Location of services: Before systemd, services were stored as scripts in the /etc/init.d
directory, then linked to different runlevel directories (such as /etc/rc3.d, /etc/rc5.d, and
so on). Services with systemd are named something.service, such as firewalld.service,
and are stored in /lib/systemd/system and /etc/systemd/system directories. Think of the
/lib files as being more permanent and the /etc files as the place you can modify
configurations as needed.
When you enable a service in RHEL 7, the service file is linked to a file in the
/etc/systemd/system/multi-user.target.wants directory. For example, if you run
systemctl enable fcoe.service a symbolic link is created from
/etc/systemd/system/multi-user.target.wants/fcoe.service that points to
/lib/systemd/system/fcoe.service to cause the fcoe.service to start at boot time.
Also, the older System V init scripts were actual shell scripts. The systemd files tasked to
do the same job are more like .ini files that contain the information needed to launch a
service (a minimal example is sketched after this list).
 Configuration files: The /etc/inittab file was used by the init process in RHEL 6 and
earlier to point to the initialization files (such as /etc/rc.sysinit) and runlevel service
directories (such as /etc/rc5.d) needed to start up the system. Changes to those services
were done in files (usually named after the service) in the /etc/sysconfig directory. For
systemd in RHEL 7, there are still files in /etc/sysconfig used to modify how services
behave. However, services can be modified by adding files to the /etc/systemd directory
to override the permanent service files in the /lib/systemd directories.
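
To give a feel for that .ini-style format, here is a minimal, hypothetical unit file (the service name and ExecStart path are made up for illustration; real unit files ship in /lib/systemd/system):

[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/bin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target

Saving a file like this as /etc/systemd/system/example.service and running systemctl enable example.service would create the symbolic link in /etc/systemd/system/multi-user.target.wants, just as described above.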
Transitioning to systemd
If you are used to using the init process and System V init scripts prior to RHEL 7, there are a
few things you should know about transitioning to systemd:

 Using RHEL 6 commands: For the time being, you can use commands such as service,
chkconfig, runlevel, and init as you did in RHEL 6. They will cause appropriate systemd
commands to run, with similar, if not exactly the same, results. Here are some examples:
 # service cups restart
 Redirecting to /bin/systemctl restart cups.service
 # chkconfig cups on
 Note: Forwarding request to 'systemctl enable cups.service'.
 System V init Scripts: Although not encouraged, System V init scripts are still
supported. There are still some services in RHEL 7 that are implemented in System V init
scripts. To see System V init scripts that are available on your system and the runlevels
on which they start, use the chkconfig command as follows:
 # chkconfig --list
 ...
 iprdump 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 iprinit 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 iprupdate 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 netconsole 0:off 1:off 2:off 3:off 4:off 5:on 6:off
 network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
 ...

Using chkconfig, however, will not show you the whole list of services on your system. To see
the systemd-specific services, run the systemctl list-unit-files command, as described earlier.
Customizing motd
You can have the MOTD (message of the day) display messages that may be unique to the machine. One way
to do this is to create a script that runs when a user logs on to the system.
First, create a script in /etc/profile.d = touch motd.sh
Make it executable = chmod a+x motd.sh (make sure the file has the .sh extension)

#!/bin/bash
#
echo -e "
##################################
#
# Welcome to `hostname`
# This system is running `cat /etc/redhat-release`
# kernel is `uname -r`
#
# You are logged in as `whoami`
#
##################################
"

Next, edit /etc/ssh/sshd_config as follows:

PrintMotd no

This prevents sshd from printing the default /etc/motd, so only the script output is shown

Now restart the sshd service


systemctl restart sshd.service
That’s it! When you log in, you’d see something similar to:

#####################################
#
# Welcome to MyFirstLinuxVM
# This system is running CentOS Linux release 7.5.1804 (Core)
# kernel is 3.10.0-862.el7.x86_64
#
# You are logged in as iafzal
#
#####################################
Steps for NFS Server Configuration

• Install NFS packages

# yum install nfs-utils libnfsidmap (most likely they are installed)

• Once the packages are installed, enable and start NFS services

# systemctl enable rpcbind

# systemctl enable nfs-server

# systemctl start rpcbind nfs-server rpc-statd nfs-idmapd

• Create NFS share directory and assign permissions

# mkdir /mypretzels

# chmod a+rwx /mypretzels

• Modify /etc/exports file to add new shared filesystem

/mypretzels 192.168.12.7(rw,sync,no_root_squash) = share with only 1 host

/mypretzels *(rw,sync,no_root_squash) = share with everyone

• Export the NFS filesystem

# exportfs -rv

• Stop and disable firewalld

# systemctl stop firewalld

# systemctl disable firewalld

Steps for NFS Client Configuration

• Install NFS packages

# yum install nfs-utils rpcbind

• Once the packages are installed, enable and start the rpcbind service

# systemctl start rpcbind

• Make sure firewalld or iptables stopped (if running)

# ps -ef | egrep "firewall|iptable"

• Show mount from the NFS server


# showmount -e 192.168.1.5 (NFS Server IP)

• Create a mount point

# mkdir /mnt/kramer

• Mount the NFS filesystem

# mount 192.168.1.5:/mypretzels /mnt/kramer

• Verify mounted filesystem

# df -h

• To unmount

# umount /mnt/kramer
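
• (Optional) To make the NFS mount persistent across reboots, add an entry like the following to /etc/fstab on the client (a sketch using the same server and mount point as above):

192.168.1.5:/mypretzels   /mnt/kramer   nfs   defaults   0 0

# mount -a (to test the entry without rebooting)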
Red Hat Enterprise Linux 7

Storage Administration Guide

Deploying and configuring single-node storage in Red Hat Enterprise Linux 7

Last Updated: 2018-08-28



Milan Navrátil
Red Hat Customer Content Services

Jacquelynn East
Red Hat Customer Content Services

Don Domingo
Red Hat Customer Content Services

Josef Bacik
Server Development Kernel File System
[email protected]
Disk Quotas

Kamil Dudka
Base Operating System Core Services - BRNO
[email protected]
Access Control Lists

Hans de Goede
Base Operating System Installer
[email protected]
Partitions

Harald Hoyer
Engineering Software Engineering
[email protected]
File Systems

Dennis Keefe
Base Operating Systems Kernel Storage
[email protected]
VDO

Doug Ledford
Server Development Hardware Enablement
[email protected]
RAID

Daniel Novotny
Base Operating System Core Services - BRNO
[email protected]
The /proc File System

Nathan Straz
Quality Engineering QE - Platform
[email protected]
GFS2

Andy Walsh
Base Operating Systems Kernel Storage
[email protected]
VDO

David Wysochanski
Server Development Kernel Storage
[email protected]
LVM/LVM2

Michael Christie
Server Development Kernel Storage
[email protected]
Online Storage

Sachin Prabhu
Software Maintenance Engineering
[email protected]
NFS

Rob Evers
Server Development Kernel Storage
[email protected]
Online Storage

David Howells
Server Development Hardware Enablement
[email protected]
FS-Cache

David Lehman
Base Operating System Installer
[email protected]
Storage configuration during installation

Jeff Moyer
Server Development Kernel File System
[email protected]
Solid-State Disks

Eric Sandeen
Server Development Kernel File System
[email protected]
ext3, ext4, XFS, Encrypted File Systems

Mike Snitzer
Server Development Kernel Storage
[email protected]
I/O Stack and Limits

Red Hat Subject Matter Experts

Contributors

Edited by
Marek Suchánek
Red Hat Customer Content Services
[email protected]

Apurva Bhide
Red Hat Customer Content Services
[email protected]

Legal Notice

Copyright © 2018 Red Hat, Inc.

This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0
Unported License. If you distribute this document, or a modified version of it, you must provide
attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat
trademarks must be removed.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity
logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat Software Collections is not formally related to
or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other countries
and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or
sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract

This guide provides instructions on how to effectively manage storage devices and file systems on
Red Hat Enterprise Linux 7. It is intended for use by system administrators with basic to
intermediate knowledge of Red Hat Enterprise Linux or Fedora.
Table of Contents

Table of Contents
.CHAPTER
. . . . . . . . .1.. .OVERVIEW
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . .
1.1. NEW FEATURES AND ENHANCEMENTS IN RED HAT ENTERPRISE LINUX 7 7

. . . . . .I.. FILE
PART . . . . .SYSTEMS
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9. . . . . . . . . .

.CHAPTER
. . . . . . . . .2.. .FILE
. . . . SYSTEM
. . . . . . . . STRUCTURE
. . . . . . . . . . . .AND
. . . . MAINTENANCE
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
...........
2.1. OVERVIEW OF FILESYSTEM HIERARCHY STANDARD (FHS) 10
2.2. SPECIAL RED HAT ENTERPRISE LINUX FILE LOCATIONS 18
2.3. THE /PROC VIRTUAL FILE SYSTEM 18
2.4. DISCARD UNUSED BLOCKS 19

.CHAPTER
. . . . . . . . .3.. .THE
. . . .XFS
. . . .FILE
. . . . SYSTEM
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
...........
3.1. CREATING AN XFS FILE SYSTEM 21
3.2. MOUNTING AN XFS FILE SYSTEM 22
3.3. XFS QUOTA MANAGEMENT 23
3.4. INCREASING THE SIZE OF AN XFS FILE SYSTEM 25
3.5. REPAIRING AN XFS FILE SYSTEM 26
3.6. SUSPENDING AN XFS FILE SYSTEM 26
3.7. BACKING UP AND RESTORING XFS FILE SYSTEMS 27
3.8. CONFIGURING ERROR BEHAVIOR 30
3.9. OTHER XFS FILE SYSTEM UTILITIES 32
3.10. MIGRATING FROM EXT4 TO XFS 33

. . . . . . . . . .4.. .THE
CHAPTER . . . .EXT3
. . . . .FILE
. . . . SYSTEM
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
...........
4.1. CREATING AN EXT3 FILE SYSTEM 38
4.2. CONVERTING TO AN EXT3 FILE SYSTEM 39
4.3. REVERTING TO AN EXT2 FILE SYSTEM 39

.CHAPTER
. . . . . . . . .5.. .THE
. . . .EXT4
. . . . .FILE
. . . . SYSTEM
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .41
...........
5.1. CREATING AN EXT4 FILE SYSTEM 42
5.2. MOUNTING AN EXT4 FILE SYSTEM 44
5.3. RESIZING AN EXT4 FILE SYSTEM 44
5.4. BACKING UP EXT2, EXT3, OR EXT4 FILE SYSTEMS 45
5.5. RESTORING EXT2, EXT3, OR EXT4 FILE SYSTEMS 46
5.6. OTHER EXT4 FILE SYSTEM UTILITIES 49

.CHAPTER
. . . . . . . . .6.. .BTRFS
. . . . . . (TECHNOLOGY
. . . . . . . . . . . . . . PREVIEW)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
...........
6.1. CREATING A BTRFS FILE SYSTEM 51
6.2. MOUNTING A BTRFS FILE SYSTEM 51
6.3. RESIZING A BTRFS FILE SYSTEM 52
6.4. INTEGRATED VOLUME MANAGEMENT OF MULTIPLE DEVICES 55
6.5. SSD OPTIMIZATION 58
6.6. BTRFS REFERENCES 59

. . . . . . . . . .7.. .GLOBAL
CHAPTER . . . . . . . .FILE
. . . . SYSTEM
. . . . . . . . 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
...........

.CHAPTER
. . . . . . . . .8.. .NETWORK
. . . . . . . . . .FILE
. . . .SYSTEM
. . . . . . . .(NFS)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
...........
8.1. INTRODUCTION TO NFS 61
8.2. PNFS 64
8.3. CONFIGURING NFS CLIENT 65
8.4. AUTOFS 66
8.5. COMMON NFS MOUNT OPTIONS 72
8.6. STARTING AND STOPPING THE NFS SERVER 74
8.7. CONFIGURING THE NFS SERVER 75

1
Storage Administration Guide

8.8. SECURING NFS 84


8.9. NFS AND RPCBIND 86
8.10. NFS REFERENCES 88

.CHAPTER
. . . . . . . . .9.. .SERVER
. . . . . . . .MESSAGE
. . . . . . . . . BLOCK
. . . . . . . (SMB)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
...........
9.1. PROVIDING SMB SHARES 89
9.2. MOUNTING AN SMB SHARE 89

.CHAPTER
. . . . . . . . .10.
. . .FS-CACHE
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
...........
10.1. PERFORMANCE GUARANTEE 96
10.2. SETTING UP A CACHE 96
10.3. USING THE CACHE WITH NFS 97
10.4. SETTING CACHE CULL LIMITS 99
10.5. STATISTICAL INFORMATION 100
10.6. FS-CACHE REFERENCES 100

. . . . . .II.
PART . .STORAGE
. . . . . . . . . ADMINISTRATION
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
............

. . . . . . . . . .11.
CHAPTER . . .STORAGE
. . . . . . . . . CONSIDERATIONS
. . . . . . . . . . . . . . . . .DURING
. . . . . . . .INSTALLATION
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
............
11.1. SPECIAL CONSIDERATIONS 102

.CHAPTER
. . . . . . . . .12.
. . .FILE
. . . . SYSTEM
. . . . . . . . CHECK
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
............
12.1. BEST PRACTICES FOR FSCK 104
12.2. FILE SYSTEM-SPECIFIC INFORMATION FOR FSCK 105

. . . . . . . . . .13.
CHAPTER . . .PARTITIONS
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
............
Manipulating Partitions on Devices in Use 109
Modifying the Partition Table 109
13.1. VIEWING THE PARTITION TABLE 110
13.2. CREATING A PARTITION 112
13.3. REMOVING A PARTITION 115
13.4. SETTING A PARTITION TYPE 116
13.5. RESIZING A PARTITION WITH FDISK 116

. . . . . . . . . .14.
CHAPTER . . .CREATING
. . . . . . . . . .AND
. . . . MAINTAINING
. . . . . . . . . . . . .SNAPSHOTS
. . . . . . . . . . . WITH
. . . . . SNAPPER
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
............
14.1. CREATING INITIAL SNAPPER CONFIGURATION 119
14.2. CREATING A SNAPPER SNAPSHOT 120
14.3. TRACKING CHANGES BETWEEN SNAPPER SNAPSHOTS 124
14.4. REVERSING CHANGES IN BETWEEN SNAPSHOTS 127
14.5. DELETING A SNAPPER SNAPSHOT 129

CHAPTER 15. SWAP SPACE 130
15.1. ADDING SWAP SPACE 131
15.2. REMOVING SWAP SPACE 133
15.3. MOVING SWAP SPACE 135

CHAPTER 16. SYSTEM STORAGE MANAGER (SSM) 136
16.1. SSM BACK ENDS 136
16.2. COMMON SSM TASKS 138
16.3. SSM RESOURCES 145

CHAPTER 17. DISK QUOTAS 147
17.1. CONFIGURING DISK QUOTAS 147
17.2. MANAGING DISK QUOTAS 152
17.3. DISK QUOTA REFERENCES 154

CHAPTER 18. REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID) 156
18.1. RAID TYPES 156
18.2. RAID LEVELS AND LINEAR SUPPORT 157
18.3. LINUX RAID SUBSYSTEMS 159
18.4. RAID SUPPORT IN THE ANACONDA INSTALLER 159
18.5. CONVERTING ROOT DISK TO RAID1 AFTER INSTALLATION 160
18.6. CONFIGURING RAID SETS 160
18.7. CREATING ADVANCED RAID DEVICES 161

CHAPTER 19. USING THE MOUNT COMMAND 163
19.1. LISTING CURRENTLY MOUNTED FILE SYSTEMS 163
19.2. MOUNTING A FILE SYSTEM 164
19.3. UNMOUNTING A FILE SYSTEM 173
19.4. MOUNT COMMAND REFERENCES 174

CHAPTER 20. THE VOLUME_KEY FUNCTION 175
20.1. VOLUME_KEY COMMANDS 175
20.2. USING VOLUME_KEY AS AN INDIVIDUAL USER 176
20.3. USING VOLUME_KEY IN A LARGER ORGANIZATION 177
20.4. VOLUME_KEY REFERENCES 179

CHAPTER 21. SOLID-STATE DISK DEPLOYMENT GUIDELINES 180
Deployment Considerations 180
Performance Tuning Considerations 182

CHAPTER 22. WRITE BARRIERS 183
22.1. IMPORTANCE OF WRITE BARRIERS 183
22.2. ENABLING AND DISABLING WRITE BARRIERS 183
22.3. WRITE BARRIER CONSIDERATIONS 184

CHAPTER 23. STORAGE I/O ALIGNMENT AND SIZE 186
23.1. PARAMETERS FOR STORAGE ACCESS 186
23.2. USERSPACE ACCESS 187
23.3. I/O STANDARDS 188
23.4. STACKING I/O PARAMETERS 189
23.5. LOGICAL VOLUME MANAGER 189
23.6. PARTITION AND FILE SYSTEM TOOLS 189

CHAPTER 24. SETTING UP A REMOTE DISKLESS SYSTEM 191
24.1. CONFIGURING A TFTP SERVICE FOR DISKLESS CLIENTS 191
24.2. CONFIGURING DHCP FOR DISKLESS CLIENTS 192
24.3. CONFIGURING AN EXPORTED FILE SYSTEM FOR DISKLESS CLIENTS 193

CHAPTER 25. ONLINE STORAGE MANAGEMENT 195
25.1. TARGET SETUP 195
25.2. CREATING AN ISCSI INITIATOR 204
25.3. FIBRE CHANNEL 205
25.4. CONFIGURING A FIBRE CHANNEL OVER ETHERNET INTERFACE 208
25.5. CONFIGURING AN FCOE INTERFACE TO AUTOMATICALLY MOUNT AT BOOT 209
25.6. ISCSI 211
25.7. PERSISTENT NAMING 212
25.8. REMOVING A STORAGE DEVICE 216
25.9. REMOVING A PATH TO A STORAGE DEVICE 218
25.10. ADDING A STORAGE DEVICE OR PATH 218
25.11. SCANNING STORAGE INTERCONNECTS 220
25.12. ISCSI DISCOVERY CONFIGURATION 221
25.13. CONFIGURING ISCSI OFFLOAD AND INTERFACE BINDING 222
25.14. SCANNING ISCSI INTERCONNECTS 226
25.15. LOGGING IN TO AN ISCSI TARGET 229
25.16. RESIZING AN ONLINE LOGICAL UNIT 229
25.17. ADDING/REMOVING A LOGICAL UNIT THROUGH RESCAN-SCSI-BUS.SH 233
25.18. MODIFYING LINK LOSS BEHAVIOR 233
25.19. CONTROLLING THE SCSI COMMAND TIMER AND DEVICE STATUS 236
25.20. TROUBLESHOOTING ONLINE STORAGE CONFIGURATION 237
25.21. CONFIGURING MAXIMUM TIME FOR ERROR RECOVERY WITH EH_DEADLINE 238

CHAPTER 26. DEVICE MAPPER MULTIPATHING AND VIRTUAL STORAGE 240
26.1. VIRTUAL STORAGE 240
26.2. DM-MULTIPATH 240

CHAPTER 27. EXTERNAL ARRAY MANAGEMENT (LIBSTORAGEMGMT) 242
27.1. INTRODUCTION TO LIBSTORAGEMGMT 242
27.2. LIBSTORAGEMGMT TERMINOLOGY 243
27.3. INSTALLING LIBSTORAGEMGMT 245
27.4. USING LIBSTORAGEMGMT 246

CHAPTER 28. PERSISTENT MEMORY: NVDIMMS 251
NVDIMMs Interleaving 251
Persistent Memory Access Modes 251
28.1. CONFIGURING PERSISTENT MEMORY WITH NDCTL 252
28.2. CONFIGURING PERSISTENT MEMORY FOR USE AS A BLOCK DEVICE (LEGACY MODE) 255
28.3. CONFIGURING PERSISTENT MEMORY FOR FILE SYSTEM DIRECT ACCESS (DAX) 255
28.4. CONFIGURING PERSISTENT MEMORY FOR USE IN DEVICE DAX MODE 256
28.5. TROUBLESHOOTING 257

PART III. DATA DEDUPLICATION AND COMPRESSION WITH VDO 258

CHAPTER 29. VDO INTEGRATION 259
29.1. THEORETICAL OVERVIEW OF VDO 259
29.2. SYSTEM REQUIREMENTS 262
29.3. GETTING STARTED WITH VDO 265
29.4. ADMINISTERING VDO 269
29.5. DEPLOYMENT SCENARIOS 278
29.6. TUNING VDO 279
29.7. VDO COMMANDS 285
29.8. STATISTICS FILES IN /SYS 303

CHAPTER 30. VDO EVALUATION 305
30.1. INTRODUCTION 305
30.2. TEST ENVIRONMENT PREPARATIONS 305
30.3. DATA EFFICIENCY TESTING PROCEDURES 309
30.4. PERFORMANCE TESTING PROCEDURES 317
30.5. ISSUE REPORTING 322
30.6. CONCLUSION 323

APPENDIX A. RED HAT CUSTOMER PORTAL LABS RELEVANT TO STORAGE ADMINISTRATION 324
SCSI DECODER 324
FILE SYSTEM LAYOUT CALCULATOR 324
LVM RAID CALCULATOR 324
ISCSI HELPER 324
SAMBA CONFIGURATION HELPER 324
MULTIPATH HELPER 324
NFS HELPER 325
MULTIPATH CONFIGURATION VISUALIZER 325
RHEL BACKUP AND RESTORE ASSISTANT 325

APPENDIX B. REVISION HISTORY 326

INDEX 327


CHAPTER 1. OVERVIEW
The Storage Administration Guide contains extensive information on supported file systems and data
storage features in Red Hat Enterprise Linux 7. This book is intended as a quick reference for
administrators managing single-node (that is, non-clustered) storage solutions.

The Storage Administration Guide is split into the following parts: File Systems, Storage Administration,
and Data Deduplication and Compression with VDO.

The File Systems part details the various file systems Red Hat Enterprise Linux 7 supports. It describes
them and explains how best to utilize them.

The Storage Administration part details the various tools and storage administration tasks Red Hat
Enterprise Linux 7 supports. It describes them and explains how best to utilize them.

The Data Deduplication and Compression with VDO part describes the Virtual Data Optimizer (VDO). It
explains how to use VDO to reduce your storage requirements.

1.1. NEW FEATURES AND ENHANCEMENTS IN RED HAT ENTERPRISE LINUX 7

Red Hat Enterprise Linux 7 features the following file system enhancements:

eCryptfs not included


As of Red Hat Enterprise Linux 7, eCryptfs is not included. For more information on encrypting file
systems, see Red Hat's Security Guide.

System Storage Manager


Red Hat Enterprise Linux 7 includes a new application called System Storage Manager which provides a
command-line interface to manage various storage technologies. For more information, see Chapter 16,
System Storage Manager (SSM).

XFS Is the Default File System


As of Red Hat Enterprise Linux 7, XFS is the default file system. For more information about the XFS file
system, see Chapter 3, The XFS File System.

File System Restructure


Red Hat Enterprise Linux 7 introduces a new file system structure. The directories /bin, /sbin, /lib,
and /lib64 are now nested under /usr.

Snapper
Red Hat Enterprise Linux 7 introduces a new tool called Snapper that allows for the easy creation and
management of snapshots for LVM and Btrfs. For more information, see Chapter 14, Creating and
Maintaining Snapshots with Snapper.

Btrfs (Technology Preview)


NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Btrfs is a local file system that aims to provide better performance and scalability, including integrated
LVM operations. This file system is not fully supported by Red Hat and as such is a technology preview.
For more information on Btrfs, see Chapter 6, Btrfs (Technology Preview).

NFSv2 No Longer Supported


As of Red Hat Enterprise Linux 7, NFSv2 is no longer supported.


PART I. FILE SYSTEMS


The File Systems section provides information on the file system structure and maintenance, the Btrfs
Technology Preview, and file systems that Red Hat fully supports: ext3, ext4, GFS2, XFS, NFS, and FS-
Cache.

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

For an overview of Red Hat Enterprise Linux file systems and storage limits, see Red Hat
Enterprise Linux technology capabilities and limits at Red Hat Knowledgebase.

XFS is the default file system in Red Hat Enterprise Linux 7, and Red Hat recommends using XFS unless
you have a strong reason to use another file system. For general information on common file systems
and their properties, see the following Red Hat Knowledgebase article: How to Choose your Red Hat
Enterprise Linux File System.


CHAPTER 2. FILE SYSTEM STRUCTURE AND MAINTENANCE


The file system structure is the most basic level of organization in an operating system. The way an
operating system interacts with its users, applications, and security model nearly always depends on
how the operating system organizes files on storage devices. Providing a common file system structure
ensures users and programs can access and write files.

File systems break files down into two logical categories:

Shareable and unsharable files


Shareable files can be accessed locally and by remote hosts. Unsharable files are only available
locally.

Variable and static files


Variable files, such as documents, can be changed at any time. Static files, such as binaries, do not
change without an action from the system administrator.

Categorizing files in this manner helps correlate the function of each file with the permissions assigned
to the directories which hold them. How the operating system and its users interact with a file determines
the directory in which it is placed, whether that directory is mounted with read-only or read and write
permissions, and the level of access each user has to that file. The top level of this organization is
crucial: access to the underlying directories can be restricted; otherwise, security problems can arise if
access rules from the top level down do not adhere to a rigid structure.

2.1. OVERVIEW OF FILESYSTEM HIERARCHY STANDARD (FHS)


Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard (FHS) file system structure, which
defines the names, locations, and permissions for many file types and directories.

The FHS document is the authoritative reference to any FHS-compliant file system, but the standard
leaves many areas undefined or extensible. This section is an overview of the standard and a description
of the parts of the file system not covered by the standard.

The two most important elements of FHS compliance are:

Compatibility with other FHS-compliant systems

The ability to mount a /usr/ partition as read-only. This is crucial, since /usr/ contains
common executables and should not be changed by users. In addition, since /usr/ is mounted
as read-only, it should be mountable from the CD-ROM drive or from another machine via a
read-only NFS mount.

2.1.1. FHS Organization


The directories and files noted here are a small subset of those specified by the FHS document. For the
most complete information, see the latest FHS documentation at
http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.pdf; the file-hierarchy(7) man page also provides an
overview.

NOTE

The directories that are available depend on what is installed on any given system. The
following lists are only an example of what may be found.


2.1.1.1. Gathering File System Information

df Command

The df command reports the system's disk space usage. Its output looks similar to the following:

Example 2.1. df Command Output

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      11675568   6272120   4810348  57% /
/dev/sda1               100691      9281     86211  10% /boot
none                    322856         0    322856   0% /dev/shm

By default, df shows the partition size in 1 kilobyte blocks and the amount of used and available disk
space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The
-h argument stands for "human-readable" format. The output for df -h looks similar to the following:

Example 2.2. df -h Command Output

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       12G  6.0G  4.6G  57% /
/dev/sda1              99M  9.1M   85M  10% /boot
none                  316M     0  316M   0% /dev/shm

NOTE

In the given examples, the mounted partition /dev/shm represents the system's virtual
memory file system.

du Command

The du command displays the estimated amount of space being used by files in a directory, displaying
the disk usage of each subdirectory. The last line in the output of du shows the total disk usage of the
directory. To see only the total disk usage of a directory in human-readable format, use du -hs. For
more options, see man du.
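
For example, to see only the total disk usage of a single directory tree in human-readable format (the
path here is only illustrative):

# du -hs /var/log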

Gnome System Monitor

To view the system's partitions and disk space usage in a graphical format, use the Gnome System
Monitor by clicking on Applications → System Tools → System Monitor or using the command
gnome-system-monitor. Select the File Systems tab to view the system's partitions. The following
figure illustrates the File Systems tab.


Figure 2.1. File Systems Tab in GNOME System Monitor

2.1.1.2. The /boot/ Directory

The /boot/ directory contains static files required to boot the system, for example, the Linux kernel.
These files are essential for the system to boot properly.


WARNING

Do not remove the /boot/ directory. Doing so renders the system unbootable.

2.1.1.3. The /dev/ Directory

The /dev/ directory contains device nodes that represent the following device types:

devices attached to the system;

virtual devices provided by the kernel.

These device nodes are essential for the system to function properly. The udevd daemon creates and
removes device nodes in /dev/ as needed.

Devices in the /dev/ directory and subdirectories are defined as either character (providing only a serial
stream of input and output, for example, mouse or keyboard) or block (accessible randomly, such as a
hard drive or a floppy drive). If GNOME or KDE is installed, some storage devices are automatically
detected when connected (such as with USB) or inserted (such as a CD or DVD drive), and a pop-up
window displaying the contents appears.

Table 2.1. Examples of Common Files in the /dev Directory

File Description

/dev/hda The master device on the primary IDE channel.

/dev/hdb The slave device on the primary IDE channel.

/dev/tty0 The first virtual console.

/dev/tty1 The second virtual console.

/dev/sda The first device on the primary SCSI or SATA channel.

/dev/lp0 The first parallel port.

A valid block device can be one of two types of entries:

Mapped device
A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02.

Static device
A traditional storage volume, for example, /dev/sdbX, where sdb is a storage device name and X is
the partition number. /dev/sdbX can also be /dev/disk/by-id/WWID, or /dev/disk/by-
uuid/UUID. For more information, see Section 25.7, “Persistent Naming”.

2.1.1.4. The /etc/ Directory

The /etc/ directory is reserved for configuration files that are local to the machine. It should not contain
any binaries; if there are any binaries, move them to /usr/bin/ or /usr/sbin/.

For example, the /etc/skel/ directory stores "skeleton" user files, which are used to populate a home
directory when a user is first created. Applications also store their configuration files in this directory and
may reference them when executed. The /etc/exports file controls which file systems export to
remote hosts.
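
As an illustrative sketch only (the exported directory and host name are hypothetical, not taken from this
guide), an entry in /etc/exports follows the exports(5) format of an exported path followed by the hosts
allowed to access it and their options:

/exported/directory client.example.com(ro,sync)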

2.1.1.5. The /mnt/ Directory

The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts.
For all removable storage media, use the /media/ directory. Automatically detected removable media is
mounted in the /media directory.


IMPORTANT

The /mnt directory must not be used by installation programs.

2.1.1.6. The /opt/ Directory

The /opt/ directory is normally reserved for software and add-on packages that are not part of the
default installation. A package that installs to /opt/ creates a directory bearing its name, for example,
/opt/packagename/. In most cases, such packages follow a predictable subdirectory structure; most
store their binaries in /opt/packagename/bin/ and their man pages in /opt/packagename/man/.

2.1.1.7. The /proc/ Directory

The /proc/ directory contains special files that either extract information from the kernel or send
information to it. Examples of such information include system memory, CPU information, and hardware
configuration. For more information about /proc/, see Section 2.3, “The /proc Virtual File System”.

2.1.1.8. The /srv/ Directory

The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This
directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data
that only pertains to a specific user should go in the /home/ directory.

2.1.1.9. The /sys/ Directory

The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel. With the increased
support for hot plug hardware devices in the kernel, the /sys/ directory contains information similar to
that held by /proc/, but displays a hierarchical view of device information specific to hot plug devices.

2.1.1.10. The /usr/ Directory

The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is
often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following
subdirectories:

/usr/bin
This directory is used for binaries.

/usr/etc
This directory is used for system-wide configuration files.

/usr/games
This directory stores games.

/usr/include
This directory is used for C header files.

/usr/kerberos
This directory is used for Kerberos-related binaries and files.


/usr/lib
This directory is used for object files and libraries that are not designed to be directly utilized by shell
scripts or users.

As of Red Hat Enterprise Linux 7.0, the /lib/ directory has been merged with /usr/lib. Now it
also contains libraries needed to execute the binaries in /usr/bin/ and /usr/sbin/. These
shared library images are used to boot the system or execute commands within the root file system.

/usr/libexec
This directory contains small helper programs called by other programs.

/usr/sbin
As of Red Hat Enterprise Linux 7.0, /sbin has been moved to /usr/sbin. This means that it
contains all system administration binaries, including those essential for booting, restoring,
recovering, or repairing the system. The binaries in /usr/sbin/ require root privileges to use.

/usr/share
This directory stores files that are not architecture-specific.

/usr/src
This directory stores source code.

/usr/tmp linked to /var/tmp


This directory stores temporary files.

The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is
used by the system administrator when installing software locally, and should be safe from being
overwritten during system updates. The /usr/local directory has a structure similar to /usr/, and
contains the following subdirectories:

/usr/local/bin

/usr/local/etc

/usr/local/games

/usr/local/include

/usr/local/lib

/usr/local/libexec

/usr/local/sbin

/usr/local/share

/usr/local/src

Red Hat Enterprise Linux's usage of /usr/local/ differs slightly from the FHS. The FHS states that
/usr/local/ should be used to store software that should remain safe from system software
upgrades. Since the RPM Package Manager can perform software upgrades safely, it is not necessary
to protect files by storing them in /usr/local/.

Instead, Red Hat Enterprise Linux uses /usr/local/ for software local to the machine. For instance, if
the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install
a package or program under the /usr/local/ directory.

2.1.1.11. The /var/ Directory

Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need
spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for
variable data, which includes spool directories and files, logging data, transient and temporary files.

Following are some of the directories found within the /var/ directory:

/var/account/

/var/arpwatch/

/var/cache/

/var/crash/

/var/db/

/var/empty/

/var/ftp/

/var/gdm/

/var/kerberos/

/var/lib/

/var/local/

/var/lock/

/var/log/

/var/mail linked to /var/spool/mail/

/var/mailman/

/var/named/

/var/nis/

/var/opt/

/var/preserve/

/var/run/

/var/spool/


/var/tmp/

/var/tux/

/var/www/

/var/yp/

IMPORTANT

The /var/run/media/user directory contains subdirectories used as mount points for
removable media such as USB storage media, DVDs, CD-ROMs, and Zip disks. Note that
previously, the /media/ directory was used for this purpose.

System log files, such as messages and lastlog, go in the /var/log/ directory. The
/var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory,
usually in directories for the program using the file. The /var/spool/ directory has subdirectories that
store data files for some programs. These subdirectories include:

/var/spool/at/

/var/spool/clientmqueue/

/var/spool/cron/

/var/spool/cups/

/var/spool/exim/

/var/spool/lpd/

/var/spool/mail/

/var/spool/mailman/

/var/spool/mqueue/

/var/spool/news/

/var/spool/postfix/

/var/spool/repackage/

/var/spool/rwho/

/var/spool/samba/

/var/spool/squid/

/var/spool/squirrelmail/

/var/spool/up2date/

/var/spool/uucp/


/var/spool/uucppublic/

/var/spool/vbox/

2.2. SPECIAL RED HAT ENTERPRISE LINUX FILE LOCATIONS


Red Hat Enterprise Linux extends the FHS structure slightly to accommodate special files.

Most files pertaining to RPM are kept in the /var/lib/rpm/ directory. For more information on RPM,
see man rpm.

The /var/cache/yum/ directory contains files used by the Package Updater, including RPM header
information for the system. This location may also be used to temporarily store RPMs downloaded while
updating the system. For more information about the Red Hat Network, see https://rhn.redhat.com/.

Another location specific to Red Hat Enterprise Linux is the /etc/sysconfig/ directory. This directory
stores a variety of configuration information. Many scripts that run at boot time use the files in this
directory.

2.3. THE /PROC VIRTUAL FILE SYSTEM


Unlike most file systems, /proc contains neither text nor binary files. Because it houses virtual files,
/proc is referred to as a virtual file system. These virtual files are typically zero bytes in size, even if
they contain a large amount of information.

The /proc file system is not used for storage. Its main purpose is to provide a file-based interface to
hardware, memory, running processes, and other system components. Real-time information can be
retrieved on many system components by viewing the corresponding /proc file. Some of the files within
/proc can also be manipulated (by both users and applications) to configure the kernel.

The following /proc files are relevant in managing and monitoring system storage:

/proc/devices
Displays various character and block devices that are currently configured.

/proc/filesystems
Lists all file system types currently supported by the kernel.

/proc/mdstat
Contains current information on multiple-disk or RAID configurations on the system, if they exist.

/proc/mounts
Lists all mounts currently used by the system.

/proc/partitions
Contains partition block allocation information.
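
For example, these files can be read directly with standard tools such as cat; the output reflects the
current state of the running system:

# cat /proc/filesystems
# cat /proc/partitions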

For more information about the /proc file system, see the Red Hat Enterprise Linux 7 Deployment
Guide.


2.4. DISCARD UNUSED BLOCKS


Batch discard and online discard operations are features of mounted file systems that discard blocks not
in use by the file system. They are useful for both solid-state drives and thinly-provisioned storage.

Batch discard operations are run explicitly by the user with the fstrim command. This
command discards all unused blocks in a file system that match the user's criteria.

Online discard operations are specified at mount time, either with the -o discard option as
part of a mount command or with the discard option in the /etc/fstab file. They run in real
time without user intervention. Online discard operations only discard blocks that are
transitioning from used to free.

Both operation types are supported for use with ext4 file systems as of Red Hat Enterprise Linux 6.2 and
later and with XFS file systems since Red Hat Enterprise Linux 6.4. Also, the block device underlying the
file system must support physical discard operations. Physical discard operations are supported if the
value stored in the /sys/block/device/queue/discard_max_bytes file is not zero.
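
For example, the following illustrative commands (the device and mount point names are placeholders)
check whether a device supports discard and then run a batch discard on a mounted file system; the
/etc/fstab line that follows is a sketch of enabling online discard instead:

# cat /sys/block/sda/queue/discard_max_bytes
# fstrim -v /mnt/data

/dev/sda1  /mnt/data  xfs  defaults,discard  0 0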

If you are executing the fstrim command on:

a device that does not support discard operations, or

a logical device (LVM or MD) comprised of multiple devices, where any one of the devices does not
support discard operations

the following message will be displayed:

fstrim -v /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported

NOTE

The mount command allows you to mount a device that does not support discard
operations with the -o discard option.

Red Hat recommends batch discard operations unless the system's workload is such that batch discard
is not feasible, or online discard operations are necessary to maintain performance.

For more information, see the fstrim(8) and mount(8) man pages.


CHAPTER 3. THE XFS FILE SYSTEM


XFS is a highly scalable, high-performance file system which was originally designed at Silicon Graphics,
Inc. XFS is the default file system for Red Hat Enterprise Linux 7.

Main Features of XFS

XFS supports metadata journaling, which facilitates quicker crash recovery.

The XFS file system can be defragmented and enlarged while mounted and active.

In addition, Red Hat Enterprise Linux 7 supports backup and restore utilities specific to XFS.

Allocation Features
XFS features the following allocation schemes:

Extent-based allocation

Stripe-aware allocation policies

Delayed allocation

Space pre-allocation

Delayed allocation and other performance optimizations affect XFS the same way that they do ext4.
Namely, a program's writes to an XFS file system are not guaranteed to be on-disk unless the
program issues an fsync() call afterwards.

For more information on the implications of delayed allocation on a file system (ext4 and XFS), see
Allocation Features in Chapter 5, The ext4 File System.

NOTE

Creating or expanding files occasionally fails with an unexpected ENOSPC write failure even
though the disk space appears to be sufficient. This is due to XFS's performance-oriented
design. In practice, it does not become a problem since it only occurs if remaining space is
only a few blocks.

Other XFS Features


The XFS file system also supports the following:

Extended attributes (xattr)


This allows the system to associate several additional name/value pairs per file. It is enabled by
default.

Quota journaling
This avoids the need for lengthy quota consistency checks after a crash.

Project/directory quotas
This allows quota restrictions over a directory tree.

Subsecond timestamps


This allows timestamps to go to the subsecond.

Default atime behavior is relatime


Relatime is on by default for XFS. It has almost no overhead compared to noatime while still
maintaining sane atime values.

3.1. CREATING AN XFS FILE SYSTEM


Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.

Alternatively, use an LVM or MD volume or a similar layer below XFS.

Procedure

Procedure 3.1. Creating an XFS File System

To create an XFS file system, use the following command:

# mkfs.xfs block_device

Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.

In general, the default options are optimal for common use.

When using mkfs.xfs on a block device containing an existing file system, add the -f
option to overwrite that file system.

Example 3.1. mkfs.xfs Command Output

Following is a sample output of the mkfs.xfs command:

meta-data=/dev/device            isize=256    agcount=4, agsize=3277258 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=13109032, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


NOTE

After an XFS file system is created, its size cannot be reduced. However, it can still be
enlarged using the xfs_growfs command. For more information, see Section 3.4,
“Increasing the Size of an XFS File System”.

Striped Block Devices


For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of
file system creation. Using proper stripe geometry greatly enhances the performance of an XFS
filesystem.

When creating filesystems on LVM or MD volumes, mkfs.xfs chooses an optimal geometry. This may
also be true on some hardware RAIDs that export geometry information to the operating system.

If the device exports stripe geometry information, the mkfs utility (for ext3, ext4, and xfs) will
automatically use this geometry. If the mkfs utility does not detect stripe geometry even though the
storage does, in fact, have it, you can specify the geometry manually when creating the file system
using the following options:

su=value
Specifies a stripe unit or RAID chunk size. The value must be specified in bytes, with an optional k,
m, or g suffix.

sw=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.

The following example specifies a chunk size of 64k on a RAID device containing 4 stripe units:

# mkfs.xfs -d su=64k,sw=4 /dev/block_device

Additional Resources
For more information about creating XFS file systems, see:

The mkfs.xfs(8) man page

The Red Hat Enterprise Linux Performance Tuning Guide, chapter Tuning XFS

3.2. MOUNTING AN XFS FILE SYSTEM


An XFS file system can be mounted with no extra options, for example:

# mount /dev/device /mount/point

In Red Hat Enterprise Linux 7, the inode64 mount option is the default.
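
To mount an XFS file system persistently across reboots, an entry can be added to /etc/fstab. The
following line is only a sketch; the UUID is the placeholder value used earlier in this chapter and the
mount point is hypothetical:

UUID=05e99ec8-def1-4a5e-8a9d-5945339ceb2a  /data  xfs  defaults  0 0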

NOTE

Unlike mke2fs, mkfs.xfs does not utilize a configuration file; all options are specified on
the command line.

Write Barriers


By default, XFS uses write barriers to ensure file system integrity even when power is lost to a device
with write caches enabled. For devices without write caches, or with battery-backed write caches,
disable the barriers by using the nobarrier option:

# mount -o nobarrier /dev/device /mount/point

For more information about write barriers, see Chapter 22, Write Barriers.

Direct Access Technology Preview


Since Red Hat Enterprise Linux 7.3, Direct Access (DAX) is available as a Technology Preview on
the ext4 and XFS file systems. It is a means for an application to directly map persistent memory into its
address space. To use DAX, a system must have some form of persistent memory available, usually in
the form of one or more Non-Volatile Dual Inline Memory Modules (NVDIMMs), and a file system that
supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax
mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of
storage into the application's address space.
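
For example, assuming a persistent memory device is available at /dev/pmem0 (a hypothetical device
name) and already carries an XFS file system, a DAX mount looks like:

# mount -o dax /dev/pmem0 /mnt/pmem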

3.3. XFS QUOTA MANAGEMENT


The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas
control or report on usage of these items on a user, group, or directory or project level. Also, note that
while user, group, and directory or project quotas are enabled independently, group and project quotas
are mutually exclusive.

When managing on a per-directory or per-project basis, XFS manages the disk usage of directory
hierarchies associated with a specific project. In doing so, XFS recognizes cross-organizational "group"
boundaries between projects. This provides a level of control that is broader than what is available when
managing quotas for users or groups.

XFS quotas are enabled at mount time, with specific mount options. Each mount option can also be
specified as noenforce; this allows usage reporting without enforcing any limits. Valid quota mount
options are:

uquota/uqnoenforce: User quotas

gquota/gqnoenforce: Group quotas

pquota/pqnoenforce: Project quota
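
For example, to mount a file system with user quotas enforced and group usage only reported (the
device and mount point are illustrative):

# mount -o uquota,gqnoenforce /dev/sdb1 /home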

Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage. By
default, xfs_quota is run interactively, and in basic mode. Basic mode subcommands simply report
usage, and are available to all users. Basic xfs_quota subcommands include:

quota username/userID
Show usage and limits for the given username or numeric userID

df
Shows free and used counts for blocks and inodes.


In contrast, xfs_quota also has an expert mode. The subcommands of this mode allow actual
configuration of limits, and are available only to users with elevated privileges. To use expert mode
subcommands interactively, use the following command:

# xfs_quota -x

Expert mode subcommands include:

report /path
Reports quota information for a specific file system.

limit
Modify quota limits.

For a complete list of subcommands for either basic or expert mode, use the subcommand help.

All subcommands can also be run directly from a command line using the -c option, with -x for expert
subcommands.

Example 3.2. Display a Sample Quota Report

For example, to display a sample quota report for /home (on /dev/blockdevice), use the
command xfs_quota -x -c 'report -h' /home. This displays output similar to the following:

User quota on /home (/dev/blockdevice)
                            Blocks
User ID        Used   Soft   Hard Warn/Grace
---------- ---------------------------------
root              0      0      0  00 [------]
testuser     103.4G      0      0  00 [------]
...

To set a soft and hard inode count limit of 500 and 700 respectively for user john, whose home
directory is /home/john, use the following command:

# xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/

In this case, pass the mount point of the mounted XFS file system (/home/ in this example).

By default, the limit subcommand recognizes targets as users. When configuring the limits for a
group, use the -g option, as in the following example. Similarly, use -p for projects.

Soft and hard block limits can also be configured using bsoft or bhard instead of isoft or ihard.

Example 3.3. Set a Soft and Hard Block Limit

For example, to set a soft and hard block limit of 1000m and 1200m, respectively, to group
accounting on the /target/path file system, use the following command:

# xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path


NOTE

The bsoft and bhard values are specified in bytes.

IMPORTANT

While real-time blocks (rtbhard/rtbsoft) are described in man xfs_quota as valid units when
setting quotas, the real-time sub-volume is not enabled in this release. As such, the rtbhard
and rtbsoft options are not applicable.

Setting Project Limits


Before configuring limits for project-controlled directories, add them first to /etc/projects. Project
names can be added to /etc/projid to map project IDs to project names. Once a project is added
to /etc/projects, initialize its project directory using the following command:

# xfs_quota -x -c 'project -s projectname' project_path

Quotas for projects with initialized directories can then be configured, with:

# xfs_quota -x -c 'limit -p bsoft=1000m bhard=1200m projectname'
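
As an illustrative sketch (the project name, ID, and directory below are hypothetical), the two files follow
the formats described in projects(5) and projid(5): /etc/projects maps a numeric project ID to a
directory, and /etc/projid maps a project name to that ID:

# cat /etc/projects
11:/srv/webdata

# cat /etc/projid
webproj:11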

Generic quota configuration tools (quota, repquota, and edquota for example) may also be used to
manipulate XFS quotas. However, these tools cannot be used with XFS project quotas.

IMPORTANT

Red Hat recommends the use of xfs_quota over all other available tools.

For more information about setting XFS quotas, see man xfs_quota, man projid(5), and man
projects(5).

3.4. INCREASING THE SIZE OF AN XFS FILE SYSTEM


An XFS file system may be grown while mounted using the xfs_growfs command:

# xfs_growfs /mount/point -D size

The -D size option grows the file system to the specified size (expressed in file system blocks).
Without the -D size option, xfs_growfs will grow the file system to the maximum size supported by
the device.

Before growing an XFS file system with -D size, ensure that the underlying block device is of an
appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block
device.
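
For example, when the XFS file system sits on an LVM logical volume, a typical (illustrative) sequence is
to extend the logical volume first and then grow the file system to use the new space; the volume path
below is the placeholder used earlier in this chapter and the size is arbitrary:

# lvextend -L +10G /dev/my-volgroup/my-lv
# xfs_growfs /mount/point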


NOTE

While XFS file systems can be grown while mounted, their size cannot be reduced at all.

For more information about growing a file system, see man xfs_growfs.

3.5. REPAIRING AN XFS FILE SYSTEM


To repair an XFS file system, use xfs_repair:

# xfs_repair /dev/device

The xfs_repair utility is highly scalable and is designed to repair even very large file systems with
many inodes efficiently. Unlike other Linux file systems, xfs_repair does not run at boot time, even
when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, xfs_repair
simply replays the log at mount time, ensuring a consistent file system.


WARNING

The xfs_repair utility cannot repair an XFS file system with a dirty log. To clear
the log, mount and unmount the XFS file system. If the log is corrupt and cannot be
replayed, use the -L option ("force log zeroing") to clear the log, that is,
xfs_repair -L /dev/device. Be aware that this may result in further
corruption or data loss.

For more information about repairing an XFS file system, see man xfs_repair.

3.6. SUSPENDING AN XFS FILE SYSTEM


To suspend or resume write activity to a file system, use the following command:

# xfs_freeze mount-point

Suspending write activity allows hardware-based device snapshots to be used to capture the file system
in a consistent state.

NOTE

The xfs_freeze utility is provided by the xfsprogs package, which is only available on
x86_64.

To suspend (that is, freeze) an XFS file system, use:

# xfs_freeze -f /mount/point

To unfreeze an XFS file system, use:


# xfs_freeze -u /mount/point

When taking an LVM snapshot, it is not necessary to use xfs_freeze to suspend the file system first.
Rather, the LVM management tools will automatically suspend the XFS file system before taking the
snapshot.

For more information about freezing and unfreezing an XFS file system, see man xfs_freeze.

3.7. BACKING UP AND RESTORING XFS FILE SYSTEMS


XFS file system backup and restoration involve these utilities:

xfsdump for creating the backup

xfsrestore for restoring from backup

3.7.1. Features of XFS Backup and Restoration

Backup

You can use the xfsdump utility to:

Perform backups to regular file images.

Only one backup can be written to a regular file.

Perform backups to tape drives.

The xfsdump utility also allows you to write multiple backups to the same tape. A backup can
span multiple tapes.

To back up multiple file systems to a single tape device, simply write the backup to a tape that
already contains an XFS backup. This appends the new backup to the previous one. By default,
xfsdump never overwrites existing backups.

Create incremental backups.

The xfsdump utility uses dump levels to determine a base backup to which other backups are
relative. Numbers from 0 to 9 refer to increasing dump levels. An incremental backup only backs
up files that have changed since the last dump of a lower level:

To perform a full backup, perform a level 0 dump on the file system.

A level 1 dump is the first incremental backup after a full backup. The next incremental
backup would be level 2, which only backs up files that have changed since the last level 1
dump; and so on, to a maximum of level 9.

Exclude files from a backup using size, subtree, or inode flags to filter them.

Restoration

The xfsrestore utility restores file systems from backups produced by xfsdump. The xfsrestore utility
has two modes:


The simple mode enables users to restore an entire file system from a level 0 dump. This is the
default mode.

The cumulative mode enables file system restoration from an incremental backup: that is, level 1
to level 9.

A unique session ID or session label identifies each backup. Restoring a backup from a tape containing
multiple backups requires its corresponding session ID or label.

To extract, add, or delete specific files from a backup, enter the xfsrestore interactive mode. The
interactive mode provides a set of commands to manipulate the backup files.

3.7.2. Backing Up an XFS File System


This procedure describes how to back up the content of an XFS file system into a file or a tape.

Procedure 3.2. Backing Up an XFS File System

Use the following command to back up an XFS file system:

# xfsdump -l level [-L label] -f backup-destination path-to-xfs-filesystem

Replace level with the dump level of your backup. Use 0 to perform a full backup or 1 to 9 to
perform consequent incremental backups.

Replace backup-destination with the path where you want to store your backup. The
destination can be a regular file, a tape drive, or a remote tape device. For example,
/backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive.

Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to back
up. For example, /mnt/data/. The file system must be mounted.

When backing up multiple file systems and saving them on a single tape device, add a
session label to each backup using the -L label option so that it is easier to identify them
when restoring. Replace label with any name for your backup: for example, backup_data.

Example 3.4. Backing up Multiple XFS File Systems

To back up the content of XFS file systems mounted on the /boot/ and /data/ directories
and save them as files in the /backup-files/ directory:

# xfsdump -l 0 -f /backup-files/boot.xfsdump /boot


# xfsdump -l 0 -f /backup-files/data.xfsdump /data

To back up multiple file systems on a single tape device, add a session label to each backup
using the -L label option:

# xfsdump -l 0 -L "backup_boot" -f /dev/st0 /boot


# xfsdump -l 0 -L "backup_data" -f /dev/st0 /data

Additional Resources

For more information about backing up XFS file systems, see the xfsdump(8) man page.


3.7.3. Restoring an XFS File System from Backup


This procedure describes how to restore the content of an XFS file system from a file or tape backup.

Prerequisites

You need a file or tape backup of XFS file systems, as described in Section 3.7.2, “Backing Up
an XFS File System”.

Procedure 3.3. Restoring an XFS File System from Backup

The command to restore the backup varies depending on whether you are restoring from a full
backup or an incremental one, or are restoring multiple backups from a single tape device:

# xfsrestore [-r] [-S session-id] [-L session-label] [-i] -f backup-location restoration-path

Replace backup-location with the location of the backup. This can be a regular file, a tape
drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or
/dev/st0 for a tape drive.

Replace restoration-path with the path to the directory where you want to restore the file
system. For example, /mnt/data/.

To restore a file system from an incremental (level 1 to level 9) backup, add the -r option.

To restore a backup from a tape device that contains multiple backups, specify the backup
using the -S or -L options.

The -S option lets you choose a backup by its session ID, while the -L option lets you
choose by the session label. To obtain the session IDs and session labels, use the
xfsrestore -I command.

Replace session-id with the session ID of the backup. For example, b74a3586-e52e-
4a4a-8775-c3334fa8ea2c. Replace session-label with the session label of the backup.
For example, my_backup_session_label.

To use xfsrestore interactively, use the -i option.

The interactive dialog begins after xfsrestore finishes reading the specified device.
Available commands in the interactive xfsrestore shell include cd, ls, add, delete, and
extract; for a complete list of commands, use the help command.

Example 3.5. Restoring Multiple XFS File Systems

To restore the XFS backup files and save their content into directories under /mnt/:

# xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/


# xfsrestore -f /backup-files/data.xfsdump /mnt/data/

To restore from a tape device containing multiple backups, specify each backup by its session label
or session ID:


# xfsrestore -f /dev/st0 -L "backup_boot" /mnt/boot/

# xfsrestore -f /dev/st0 -S "45e9af35-efd2-4244-87bc-4762e476cbab" /mnt/data/

Informational Messages When Restoring a Backup from a Tape


When restoring a backup from a tape with backups from multiple file systems, the xfsrestore utility
might issue messages. The messages inform you whether a match of the requested backup has been
found when xfsrestore examines each backup on the tape in sequential order. For example:

xfsrestore: preparing drive


xfsrestore: examining media file 0
xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a)
does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-
c50467912408)
xfsrestore: examining media file 1
xfsrestore: inventory session uuid (8590224e-3c93-469c-a311-fc8f23029b2a)
does not match the media header's session uuid (7eda9f86-f1e9-4dfd-b1d4-
c50467912408)
[...]

The informational messages keep appearing until the matching backup is found.

Additional Resources

For more information about restoring XFS file systems, see the xfsrestore(8) man page.

3.8. CONFIGURING ERROR BEHAVIOR


When an error occurs during an I/O operation, the XFS driver responds in one of two ways:

Continue retries until either:

the I/O operation succeeds, or

an I/O operation retry count or time limit is exceeded.

Consider the error permanent and halt the system.

XFS currently recognizes the following error conditions for which you can configure the desired behavior
specifically:

EIO: Error while trying to write to the device

ENOSPC: No space left on the device

ENODEV: Device cannot be found

All other possible error conditions, which do not have specific handlers defined, share a single, global
configuration.

You can set the conditions under which XFS deems the errors permanent, both in the maximum number
of retries and the maximum time in seconds. XFS stops retrying when any one of the conditions is met.


There is also an option to immediately cancel the retries when unmounting the file system, regardless of
any other configuration. This allows the unmount operation to succeed even in case of persistent errors.

3.8.1. Configuration Files for Specific and Undefined Conditions


Configuration files controlling error behavior are located in the /sys/fs/xfs/device/error/
directory.

The /sys/fs/xfs/device/error/metadata/ directory contains subdirectories for each specific
error condition:

/sys/fs/xfs/device/error/metadata/EIO/ for the EIO error condition

/sys/fs/xfs/device/error/metadata/ENODEV/ for the ENODEV error condition

/sys/fs/xfs/device/error/metadata/ENOSPC/ for the ENOSPC error condition

Each one then contains the following configuration files:

/sys/fs/xfs/device/error/metadata/condition/max_retries: controls the maximum
number of times that XFS retries the operation.

/sys/fs/xfs/device/error/metadata/condition/retry_timeout_seconds: controls the
time limit in seconds after which XFS stops retrying the operation.

All other possible error conditions, apart from those described in the previous section, share a common
configuration in these files:

/sys/fs/xfs/device/error/default/max_retries: controls the maximum number of retries

/sys/fs/xfs/device/error/default/retry_timeout_seconds: controls the time limit for
retrying

3.8.2. Setting File System Behavior for Specific and Undefined Conditions
To set the maximum number of retries, write the desired number to the max_retries file.

For specific conditions:

# echo value > /sys/fs/xfs/device/error/metadata/condition/max_retries

For undefined conditions:

# echo value > /sys/fs/xfs/device/error/default/max_retries

value is a number between -1 and the maximum possible value of int, the C signed integer type. This
is 2147483647 on 64-bit Linux.

To set the time limit, write the desired number of seconds to the retry_timeout_seconds file.

For specific conditions:


# echo value > /sys/fs/xfs/device/error/metadata/condition/retry_timeout_seconds

For undefined conditions:

# echo value > /sys/fs/xfs/device/error/default/retry_timeout_seconds

value is a number between -1 and 86400, which is the number of seconds in a day.

In both the max_retries and retry_timeout_seconds options, -1 means to retry forever and 0 to
stop immediately.

device is the name of the device, as found in the /dev/ directory; for example, sda.
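
For example, the following illustrative commands allow up to 5 retries or 300 seconds for EIO errors on
the device sda before the error is treated as permanent (the values are arbitrary examples):

# echo 5 > /sys/fs/xfs/sda/error/metadata/EIO/max_retries
# echo 300 > /sys/fs/xfs/sda/error/metadata/EIO/retry_timeout_seconds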

NOTE

The default behavior for each error condition is dependent on the error context. Some
errors, like ENODEV, are considered to be fatal and unrecoverable, regardless of the retry
count, so their default value is 0.

3.8.3. Setting Unmount Behavior


If the fail_at_unmount option is set, the file system overrides all other error configurations during
unmount, and immediately unmounts the file system without retrying the I/O operation. This allows the
unmount operation to succeed even in case of persistent errors.

To set the unmount behavior:

# echo value > /sys/fs/xfs/device/error/fail_at_unmount

value is either 1 or 0:

1 means to cancel retrying immediately if an error is found.

0 means to respect the max_retries and retry_timeout_seconds options.

device is the name of the device, as found in the /dev/ directory; for example, sda.
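
For example, to make the device sda give up retries immediately at unmount time (an illustrative
setting):

# echo 1 > /sys/fs/xfs/sda/error/fail_at_unmount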

IMPORTANT

The fail_at_unmount option has to be set as desired before attempting to unmount the file
system. After an unmount operation has started, the configuration files and directories may
be unavailable.

3.9. OTHER XFS FILE SYSTEM UTILITIES


Red Hat Enterprise Linux 7 also features other utilities for managing XFS file systems:

xfs_fsr


Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr
defragments all regular files in all mounted XFS file systems. This utility also allows users to suspend
a defragmentation at a specified time and resume from where it left off later.

In addition, xfs_fsr also allows the defragmentation of only one file, as in xfs_fsr
/path/to/file. Red Hat advises against periodically defragmenting an entire file system because
XFS avoids fragmentation by default; system-wide defragmentation can cause fragmentation of free
space as a side effect.

xfs_bmap
Prints the map of disk blocks used by files in an XFS filesystem. This map lists each extent used by a
specified file, as well as regions in the file with no corresponding blocks (that is, holes).

xfs_info
Prints XFS file system information.

xfs_admin
Changes the parameters of an XFS file system. The xfs_admin utility can only modify parameters of
unmounted devices or file systems.

xfs_copy
Copies the contents of an entire XFS file system to one or more targets in parallel.

The following utilities are also useful in debugging and analyzing XFS file systems:

xfs_metadump
Copies XFS file system metadata to a file. Red Hat only supports using the xfs_metadump utility to
copy unmounted file systems or read-only mounted file systems; otherwise, generated dumps could
be corrupted or inconsistent.

xfs_mdrestore
Restores an XFS metadump image (generated using xfs_metadump) to a file system image.

xfs_db
Debugs an XFS file system.

For more information about these utilities, see their respective man pages.

3.10. MIGRATING FROM EXT4 TO XFS


Starting with Red Hat Enterprise Linux 7.0, XFS is the default file system instead of ext4. This section
highlights the differences when using or administering an XFS file system.

The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and can be selected at
installation. While it is possible to migrate from ext4 to XFS, it is not required.

3.10.1. Differences Between Ext3/4 and XFS


File system repair


Ext3/4 runs e2fsck in userspace at boot time to recover the journal as needed. XFS, by comparison,
performs journal recovery in kernelspace at mount time. An fsck.xfs shell script is provided but
does not perform any useful action as it is only there to satisfy initscript requirements.

When an XFS file system repair or check is requested, use the xfs_repair command. Use the -n
option for a read-only check.

The xfs_repair command will not operate on a file system with a dirty log. To repair such a file
system, mount and unmount it first to replay the log. If the log is corrupt and cannot be replayed,
the -L option can be used to zero out the log.

For more information on file system repair of XFS file systems, see Section 12.2.2, “XFS”

Metadata error behavior


The ext3/4 file system has configurable behavior when metadata errors are encountered, with the
default being to simply continue. When XFS encounters a metadata error that is not recoverable, it will
shut down the file system and return an EFSCORRUPTED error. The system logs will contain details of
the error encountered and will recommend running xfs_repair if necessary.

Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial
mount for quotas to be in effect.

While the standard tools in the quota package can perform basic quota administrative tasks (tools
such as setquota and repquota), the xfs_quota tool can be used for XFS-specific features, such as
Project Quota administration.

The quotacheck command has no effect on an XFS file system. The first time quota accounting is
turned on XFS does an automatic quotacheck internally. Because XFS quota metadata is a first-
class, journaled metadata object, the quota system will always be consistent until quotas are
manually turned off.

File system resize


The XFS file system has no utility to shrink a file system. XFS file systems can be grown online via the
xfs_growfs command.

Inode numbers
For file systems larger than 1 TB with 256-byte inodes, or larger than 2 TB with 512-byte inodes, XFS
inode numbers might exceed 2^32. Such large inode numbers cause 32-bit stat calls to fail with the
EOVERFLOW return value. The described problem might occur when using the default Red Hat
Enterprise Linux 7 configuration: non-striped with four allocation groups. A custom configuration, for
example file system extension or changing XFS file system parameters, might lead to a different
behavior.

Applications usually handle such larger inode numbers correctly. If needed, mount the XFS file
system with the -o inode32 parameter to enforce inode numbers below 2^32. Note that using
inode32 does not affect inodes that are already allocated with 64-bit numbers.
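
A minimal sketch, using the generic device and mount point placeholders from earlier in this chapter:

# mount -o inode32 /dev/device /mount/point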


IMPORTANT

Do not use the inode32 option unless it is required by a specific environment. The
inode32 option changes allocation behavior. As a consequence, the ENOSPC error
might occur if no space is available to allocate inodes in the lower disk blocks.

Speculative preallocation
XFS uses speculative preallocation to allocate blocks past EOF as files are written. This avoids file
fragmentation due to concurrent streaming write workloads on NFS servers. By default, this
preallocation increases with the size of the file and will be apparent in "du" output. If a file with
speculative preallocation is not dirtied for five minutes the preallocation will be discarded. If the inode
is cycled out of cache before that time, then the preallocation will be discarded when the inode is
reclaimed.

If premature ENOSPC problems are seen due to speculative preallocation, a fixed preallocation
amount may be specified with the -o allocsize=amount mount option.
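
For example, to use a fixed 64 KiB preallocation instead of the dynamic default (device and mount
point are placeholders):

# mount -o allocsize=64k /dev/device /mount/point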

Fragmentation-related tools
Fragmentation is rarely a significant issue on XFS file systems due to heuristics and behaviors, such
as delayed allocation and speculative preallocation. However, tools exist for measuring file system
fragmentation as well as defragmenting file systems. Their use is not encouraged.

The xfs_db frag command attempts to distill all file system allocations into a single fragmentation
number, expressed as a percentage. The output of the command requires significant expertise to
understand its meaning. For example, a fragmentation factor of 75% means only an average of 4
extents per file. For this reason the output of xfs_db's frag is not considered useful and more careful
analysis of any fragmentation problems is recommended.
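
For reference, a read-only fragmentation report can be produced as follows (the device path is a
placeholder):

# xfs_db -r -c frag /dev/device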


WARNING

The xfs_fsr command may be used to defragment individual files, or all files
on a file system. The latter is especially not recommended as it may destroy
locality of files and may fragment free space.

Commands Used with ext3 and ext4 Compared to XFS

The following table compares common commands used with ext3 and ext4 to their XFS-specific
counterparts.

Table 3.1. Common Commands for ext3 and ext4 Compared to XFS

Task                             ext3/4                    XFS
Create a file system             mkfs.ext4 or mkfs.ext3    mkfs.xfs
File system check                e2fsck                    xfs_repair
Resizing a file system           resize2fs                 xfs_growfs
Save an image of a file system   e2image                   xfs_metadump and xfs_mdrestore
Label or tune a file system      tune2fs                   xfs_admin
Backup a file system             dump and restore          xfsdump and xfsrestore

The following table lists generic tools that function on XFS file systems as well, but the XFS versions
have more specific functionality and as such are recommended.

Table 3.2. Generic Tools for ext4 and XFS

Task           ext4       XFS
Quota          quota      xfs_quota
File mapping   filefrag   xfs_bmap

More information on many of the listed XFS commands is included in Chapter 3, The XFS File System. You
can also consult the manual pages of the listed XFS administration tools for more information.


CHAPTER 4. THE EXT3 FILE SYSTEM


The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements
provide the following advantages:

Availability
After an unexpected power failure or system crash (also called an unclean system shutdown), each
mounted ext2 file system on the machine must be checked for consistency by the e2fsck program.
This is a time-consuming process that can delay system boot time significantly, especially with large
volumes containing a large number of files. During this time, any data on the volumes is unreachable.

It is possible to run fsck -n on a live filesystem. However, it will not make any changes and may
give misleading results if partially written metadata is encountered.

If LVM is used in the stack, another option is to take an LVM snapshot of the filesystem and run fsck
on it instead.

Finally, there is the option to remount the filesystem as read only. All pending metadata updates (and
writes) are then forced to the disk prior to the remount. This ensures the filesystem is in a consistent
state, provided there is no previous corruption. It is now possible to run fsck -n.

The journaling provided by the ext3 file system means that this sort of file system check is no longer
necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is
in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file
system after an unclean system shutdown does not depend on the size of the file system or the
number of files; rather, it depends on the size of the journal used to maintain consistency. The default
journal size takes about a second to recover, depending on the speed of the hardware.

NOTE

The only journaling mode in ext3 supported by Red Hat is data=ordered (default).

Data Integrity
The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown
occurs. The ext3 file system allows you to choose the type and level of protection that your data
receives. With regard to the state of the file system, ext3 volumes are configured to keep a high level
of data consistency by default.

Speed
Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2
because ext3's journaling optimizes hard drive head motion. You can choose from three journaling
modes to optimize speed, but doing so means trade-offs in regards to data integrity if the system was
to fail.

NOTE

The only journaling mode in ext3 supported by Red Hat is data=ordered (default).

Easy Transition


It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without
reformatting. For more information on performing this task, see Section 4.2, “Converting to an ext3
File System” .

NOTE

Red Hat Enterprise Linux 7 provides a unified extN driver. It does this by disabling the
ext2 and ext3 configurations and instead uses ext4.ko for these on-disk formats. This
means that kernel messages will always refer to ext4 regardless of the ext file system
used.

4.1. CREATING AN EXT3 FILE SYSTEM


After installation, it is sometimes necessary to create a new ext3 file system. For example, if a new disk
drive is added to the system, you may want to partition the drive and use the ext3 file system.

Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.

Alternatively, use an LVM or MD volume or a similar layer below ext3.

Procedure

Procedure 4.1. Creating an ext3 File System

1. Format the partition or LVM volume with the ext3 file system using the mkfs.ext3 utility:

# mkfs.ext3 block_device

Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.

2. Label the file system using the e2label utility:

# e2label block_device volume_label

Configuring UUID
It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system,
use the -U option:

# mkfs.ext3 -U UUID device

Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-
96d749c02da7.

Replace device with the path to an ext3 file system to have the UUID added to it: for example,
/dev/sda8.


To change the UUID of an existing file system, see Section 25.7.3.2, “Modifying Persistent Naming
Attributes”

Additional Resources
The mkfs.ext3(8) man page

The e2label(8) man page

4.2. CONVERTING TO AN EXT3 FILE SYSTEM


The tune2fs command converts an ext2 file system to ext3.

NOTE

To convert ext2 to ext3, always use the e2fsck utility to check your file system before
and after using tune2fs. Before trying to convert ext2 to ext3, back up all file systems in
case any errors occur.

In addition, Red Hat recommends creating a new ext3 file system and migrating data to it,
instead of converting from ext2 to ext3 whenever possible.

To convert an ext2 file system to ext3, log in as root and type the following command in a terminal:

# tune2fs -j block_device

block_device contains the ext2 file system to be converted.

Issue the df command to display mounted file systems.

4.3. REVERTING TO AN EXT2 FILE SYSTEM


In order to revert to an ext2 file system, use the following procedure.

For simplicity, the sample commands in this section use the following value for the block device:

/dev/mapper/VolGroup00-LogVol02

Procedure 4.2. Revert from ext3 to ext2

1. Unmount the partition by logging in as root and typing:

# umount /dev/mapper/VolGroup00-LogVol02

2. Change the file system type to ext2 by typing the following command:

# tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02

3. Check the partition for errors by typing the following command:

# e2fsck -y /dev/mapper/VolGroup00-LogVol02


4. Then mount the partition again as ext2 file system by typing:

# mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point

Replace /mount/point with the mount point of the partition.

NOTE

If a .journal file exists at the root level of the partition, delete it.

To permanently change the partition to ext2, remember to update the /etc/fstab file, otherwise it will
revert back after booting.


CHAPTER 5. THE EXT4 FILE SYSTEM


The ext4 file system is a scalable extension of the ext3 file system. With Red Hat Enterprise Linux 7, it
can support a maximum individual file size of 16 terabytes, and file systems to a maximum of 50
terabytes, unlike Red Hat Enterprise Linux 6 which only supported file systems up to 16 terabytes. It also
supports an unlimited number of sub-directories (the ext3 file system only supports up to 32,000), though
once the link count exceeds 65,000 it resets to 1 and is no longer increased. The bigalloc feature is not
currently supported.

NOTE

As with ext3, an ext4 volume must be unmounted in order to perform an fsck. For more
information, see Chapter 4, The ext3 File System.

Main Features
The ext4 file system uses extents (as opposed to the traditional block mapping scheme used by ext2
and ext3), which improves performance when using large files and reduces metadata overhead for
large files. In addition, ext4 also labels unallocated block groups and inode table sections
accordingly, which allows them to be skipped during a file system check. This makes for quicker file
system checks, which becomes more beneficial as the file system grows in size.

Allocation Features
The ext4 file system features the following allocation schemes:

Persistent pre-allocation

Delayed allocation

Multi-block allocation

Stripe-aware allocation

Because of delayed allocation and other performance optimizations, ext4's behavior of writing files to
disk is different from ext3. In ext4, when a program writes to the file system, it is not guaranteed to be
on-disk unless the program issues an fsync() call afterwards.

By default, ext3 automatically forces newly created files to disk almost immediately even without
fsync(). This behavior hid bugs in programs that did not use fsync() to ensure that written data
was on-disk. The ext4 file system, on the other hand, often waits several seconds to write out
changes to disk, allowing it to combine and reorder writes for better disk performance than ext3.


WARNING

Unlike ext3, the ext4 file system does not force data to disk on transaction
commit. As such, it takes longer for buffered writes to be flushed to disk. As with
any file system, use data integrity calls such as fsync() to ensure that data is
written to permanent storage.


Other ext4 Features


The ext4 file system also supports the following:

Extended attributes (xattr) — This allows the system to associate several additional name
and value pairs per file.

Quota journaling — This avoids the need for lengthy quota consistency checks after a crash.

NOTE

The only supported journaling mode in ext4 is data=ordered (default).

Subsecond timestamps — This gives timestamps to the subsecond.

5.1. CREATING AN EXT4 FILE SYSTEM


Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.

Alternatively, use an LVM or MD volume or a similar layer below ext4.

Procedure

Procedure 5.1. Creating an ext4 File System

To create an ext4 file system, use the following command:

# mkfs.ext4 block_device

Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.

In general, the default options are optimal for most usage scenarios.

Example 5.1. mkfs.ext4 Command Output

Below is a sample output of this command, which displays the resulting file system geometry and
features:

~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
245280 inodes, 979456 blocks
48972 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1006632960
30 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

IMPORTANT

It is possible to use tune2fs to enable certain ext4 features on ext3 file systems.
However, using tune2fs in this way has not been fully tested and is therefore not
supported in Red Hat Enterprise Linux 7. As a result, Red Hat cannot guarantee
consistent performance and predictable behavior for ext3 file systems converted or
mounted by using tune2fs.

Striped Block Devices


For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of
file system creation. Using proper stripe geometry greatly enhances the performance of an ext4 file
system.

When creating file systems on LVM or MD volumes, mkfs.ext4 chooses an optimal geometry. This
may also be true on some hardware RAIDs which export geometry information to the operating system.

To specify stripe geometry, use the -E option of mkfs.ext4 (that is, extended file system options) with
the following sub-options:

stride=value
Specifies the RAID chunk size.

stripe-width=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.

For both sub-options, value must be specified in file system block units. For example, to create a file
system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command:

# mkfs.ext4 -E stride=16,stripe-width=64 /dev/block_device

Configuring UUID
It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system,
use the -U option:

# mkfs.ext4 -U UUID device

Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-
96d749c02da7.


Replace device with the path to an ext4 file system to have the UUID added to it: for example,
/dev/sda8.

To change the UUID of an existing file system, see Section 25.7.3.2, “Modifying Persistent Naming
Attributes”

Additional Resources
For more information about creating ext4 file systems, see:

The mkfs.ext4(8) man page

5.2. MOUNTING AN EXT4 FILE SYSTEM


An ext4 file system can be mounted with no extra options. For example:

# mount /dev/device /mount/point

The ext4 file system also supports several mount options to influence behavior. For example, the acl
parameter enables access control lists, while the user_xattr parameter enables user extended
attributes. To enable both options, use their respective parameters with -o, as in:

# mount -o acl,user_xattr /dev/device /mount/point

As with ext3, the option data_err=abort can be used to abort the journal if an error occurs in file data.

# mount -o data_err=abort /dev/device /mount/point

The tune2fs utility also allows administrators to set default mount options in the file system superblock.
For more information on this, refer to man tune2fs.

Write Barriers
By default, ext4 uses write barriers to ensure file system integrity even when power is lost to a device
with write caches enabled. For devices without write caches, or with battery-backed write caches,
disable barriers using the nobarrier option, as in:

# mount -o nobarrier /dev/device /mount/point

For more information about write barriers, refer to Chapter 22, Write Barriers.

Direct Access Technology Preview


Starting with Red Hat Enterprise Linux 7.3, Direct Access (DAX) provides, as a Technology Preview
on the ext4 and XFS file systems, a means for an application to directly map persistent memory into its
address space. To use DAX, a system must have some form of persistent memory available, usually in
the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that
supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax
mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of
storage into the application's address space.
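
A minimal sketch, assuming an NVDIMM namespace is exposed as the hypothetical device /dev/pmem0 and
that ext4 is used (the mount point is also a placeholder):

# mkfs.ext4 /dev/pmem0
# mount -o dax /dev/pmem0 /mnt/dax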

5.3. RESIZING AN EXT4 FILE SYSTEM


Before growing an ext4 file system, ensure that the underlying block device is of an appropriate size to
hold the file system later. Use the appropriate resizing methods for the affected block device.

An ext4 file system may be grown while mounted using the resize2fs command:

# resize2fs /dev/device size

The resize2fs command can also decrease the size of an unmounted ext4 file system:

# resize2fs /dev/device size

When resizing an ext4 file system, the resize2fs utility reads the size in units of file system block size,
unless a suffix indicating a specific unit is used. The following suffixes indicate specific units:

s — 512 byte sectors

K — kilobytes

M — megabytes

G — gigabytes

NOTE

The size parameter is optional (and often redundant) when expanding. When no size is given,
resize2fs automatically expands the file system to fill all available space of the container,
usually a logical volume or partition.
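
For example, to grow the file system on a logical volume to 50 gigabytes, or to let it fill the whole
underlying device (the volume name reuses the earlier example and is illustrative only):

# resize2fs /dev/mapper/VolGroup00-LogVol02 50G
# resize2fs /dev/mapper/VolGroup00-LogVol02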

For more information about resizing an ext4 file system, refer to man resize2fs.

5.4. BACKING UP EXT2, EXT3, OR EXT4 FILE SYSTEMS


This procedure describes how to back up the content of an ext4, ext3, or ext2 file system into a file.

Prerequisites
If the system has been running for a long time, run the e2fsck utility on the partitions before
backup:

# e2fsck /dev/device

Procedure 5.2. Backing up ext2, ext3, or ext4 File Systems

1. Back up configuration information, including the content of the /etc/fstab file and the output of
the fdisk -l command. This is useful for restoring the partitions.

To capture this information, run the sosreport or sysreport utilities. For more information
about sosreport, see the What is a sosreport and how to create one in Red Hat Enterprise
Linux 4.6 and later? Knowledgebase article.

2. Depending on the role of the partition:


If the partition you are backing up is an operating system partition, boot your system into the
rescue mode. See the Booting to Rescue Mode section of the System Administrator's Guide.

When backing up a regular, data partition, unmount it.

Although it is possible to back up a data partition while it is mounted, the results of backing
up a mounted data partition can be unpredictable.

If you need to back up a mounted file system using the dump utility, do so when the file
system is not under a heavy load. The more activity is happening on the file system when
backing up, the higher the risk of backup corruption is.

3. Use the dump utility to back up the content of the partitions:

# dump -0uf backup-file /dev/device

Replace backup-file with a path to a file where you want to store the backup. Replace device
with the name of the ext4 partition you want to back up. Make sure that you are saving the
backup to a directory mounted on a different partition than the partition you are backing up.

Example 5.2. Backing up Multiple ext4 Partitions

To back up the content of the /dev/sda1, /dev/sda2, and /dev/sda3 partitions into
backup files stored in the /backup-files/ directory, use the following commands:

# dump -0uf /backup-files/sda1.dump /dev/sda1
# dump -0uf /backup-files/sda2.dump /dev/sda2
# dump -0uf /backup-files/sda3.dump /dev/sda3

To do a remote backup, use the ssh utility or configure a password-less ssh login. For more
information on ssh and password-less login, see the Using the ssh Utility and Using Key-based
Authentication sections of the System Administrator's Guide.

For example, when using ssh:

Example 5.3. Performing a Remote Backup Using ssh

# dump -0u -f - /dev/device | ssh root@remote-server dd of=backup-file

Note that if using standard redirection, you must pass the -f option separately.

Additional Resources
For more information, see the dump(8) man page.

5.5. RESTORING EXT2, EXT3, OR EXT4 FILE SYSTEMS


This procedure describes how to restore an ext4, ext3, or ext2 file system from a file backup.

Prerequisites


You need a backup of partitions and their metadata, as described in Section 5.4, “Backing up
ext2, ext3, or ext4 File Systems”.

Procedure 5.3. Restoring ext2, ext3, or ext4 File Systems

1. If you are restoring an operating system partition, boot your system into Rescue Mode. See the
Booting to Rescue Mode section of the System Administrator's Guide.

This step is not required for ordinary data partitions.

2. Rebuild the partitions you want to restore by using the fdisk or parted utilities.

If the partitions no longer exist, recreate them. The new partitions must be large enough to
contain the restored data. It is important to get the start and end numbers right; these are the
starting and ending sector numbers of the partitions obtained from the fdisk utility when
backing up.

For more information on modifying partitions, see Chapter 13, Partitions

3. Use the mkfs utility to format the destination partition:

# mkfs.ext4 /dev/device

IMPORTANT

Do not format the partition that stores your backup files.

4. If you created new partitions, re-label all the partitions so they match their entries in the
/etc/fstab file:

# e2label /dev/device label

5. Create temporary mount points and mount the partitions on them:

# mkdir /mnt/device
# mount -t ext4 /dev/device /mnt/device

6. Restore the data from backup on the mounted partition:

# cd /mnt/device
# restore -rf device-backup-file

If you want to restore on a remote machine or restore from a backup file that is stored on a
remote host, you can use the ssh utility. For more information on ssh, see the Using the ssh
Utility section of the System Administrator's Guide.

Note that you need to configure a password-less login for the following commands. For more
information on setting up a password-less ssh login, see the Using Key-based Authentication
section of the System Administrator's Guide.

To restore a partition on a remote machine from a backup file stored on the same machine:

# ssh remote-address "cd /mnt/device && cat backup-file | /usr/sbin/restore -r -f -"

To restore a partition on a remote machine from a backup file stored on a different remote
machine:

# ssh remote-machine-1 "cd /mnt/device && RSH=/usr/bin/ssh /usr/sbin/restore -rf remote-machine-2:backup-file"

7. Reboot:

# systemctl reboot

Example 5.4. Restoring Multiple ext4 Partitions

To restore the /dev/sda1, /dev/sda2, and /dev/sda3 partitions from Example 5.2, “Backing up
Multiple ext4 Partitions”:

1. Rebuild partitions you want to restore by using the fdisk command.

2. Format the destination partitions:

# mkfs.ext4 /dev/sda1
# mkfs.ext4 /dev/sda2
# mkfs.ext4 /dev/sda3

3. Re-label all the partitions so they match the /etc/fstab file:

# e2label /dev/sda1 Boot1
# e2label /dev/sda2 Root
# e2label /dev/sda3 Data

4. Prepare the working directories.

Mount the new partitions:

# mkdir /mnt/sda1
# mount -t ext4 /dev/sda1 /mnt/sda1
# mkdir /mnt/sda2
# mount -t ext4 /dev/sda2 /mnt/sda2
# mkdir /mnt/sda3
# mount -t ext4 /dev/sda3 /mnt/sda3

Mount the partition that contains backup files:

# mkdir /backup-files
# mount -t ext4 /dev/sda6 /backup-files

5. Restore the data from backup to the mounted partitions:

# cd /mnt/sda1
# restore -rf /backup-files/sda1.dump
# cd /mnt/sda2
# restore -rf /backup-files/sda2.dump
# cd /mnt/sda3
# restore -rf /backup-files/sda3.dump

6. Reboot:

# systemctl reboot

Additional Resources
For more information, see the restore(8) man page.

5.6. OTHER EXT4 FILE SYSTEM UTILITIES


Red Hat Enterprise Linux 7 also features other utilities for managing ext4 file systems:

e2fsck
Used to repair an ext4 file system. This tool checks and repairs an ext4 file system more efficiently
than ext3, thanks to updates in the ext4 disk structure.

e2label
Changes the label on an ext4 file system. This tool also works on ext2 and ext3 file systems.

quota
Controls and reports on disk space (blocks) and file (inode) usage by users and groups on an ext4 file
system. For more information on using quota, refer to man quota and Section 17.1, “Configuring
Disk Quotas”.

fsfreeze
To suspend access to a file system, use the command # fsfreeze -f mount-point to freeze it
and # fsfreeze -u mount-point to unfreeze it. This halts access to the file system and creates
a stable image on disk.

NOTE

It is unnecessary to use fsfreeze for device-mapper drives.

For more information see the fsfreeze(8) manpage.

As demonstrated in Section 5.2, “Mounting an ext4 File System”, the tune2fs utility can also adjust
configurable file system parameters for ext2, ext3, and ext4 file systems. In addition, the following tools
are also useful in debugging and analyzing ext4 file systems:

debugfs
Debugs ext2, ext3, or ext4 file systems.

e2image
Saves critical ext2, ext3, or ext4 file system metadata to a file.


For more information about these utilities, refer to their respective man pages.


CHAPTER 6. BTRFS (TECHNOLOGY PREVIEW)

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Btrfs is a next generation Linux file system that offers advanced management, reliability, and scalability
features. It is unique in offering snapshots, compression, and integrated device management.

6.1. CREATING A BTRFS FILE SYSTEM


In order to make a basic btrfs file system, use the following command:

# mkfs.btrfs /dev/device

For more information on creating btrfs file systems with added devices and specifying multi-device
profiles for metadata and data, refer to Section 6.4, “Integrated Volume Management of Multiple
Devices”.

6.2. MOUNTING A BTRFS FILE SYSTEM


To mount any device in the btrfs file system use the following command:

# mount /dev/device /mount-point

Other useful mount options include:

device=/dev/name
Appending this option to the mount command tells btrfs to scan the named device for a btrfs volume.
This is used to ensure the mount will succeed as attempting to mount devices that are not btrfs will
cause the mount to fail.

NOTE

This does not mean all devices will be added to the file system, it only scans them.

max_inline=number
Use this option to set the maximum amount of space (in bytes) that can be used to inline data within a
metadata B-tree leaf. The default is 8192 bytes. For 4k pages it is limited to 3900 bytes due to
additional headers that need to fit into the leaf.

alloc_start=number
Use this option to set where in the disk allocations start.

thread_pool=number
Use this option to assign the number of worker threads allocated.

discard
Use this option to enable discard/TRIM on freed blocks.

noacl
Use this option to disable the use of ACLs.

space_cache
Use this option to store the free space data on disk to make caching a block group faster. This is a
persistent change and is safe to boot into old kernels.

nospace_cache
Use this option to disable the above space_cache.

clear_cache
Use this option to clear all the free space caches during mount. This is a safe option but will trigger
the space cache to be rebuilt. As such, leave the file system mounted in order to let the rebuild
process finish. This mount option is intended to be used once and only after problems are apparent
with the free space.

enospc_debug
This option is used to debug problems with "no space left".

recovery
Use this option to enable autorecovery upon mount.

6.3. RESIZING A BTRFS FILE SYSTEM


It is not possible to resize a btrfs file system but it is possible to resize each of the devices it uses. If there
is only one device in use then this works the same as resizing the file system. If there are multiple
devices in use then they must be manually resized to achieve the desired result.

NOTE

The unit size is not case-sensitive; it accepts both G and g for GiB.

The command does not accept t for terabytes or p for petabytes. It only accepts k, m, and
g.

Enlarging a btrfs File System


To enlarge the file system on a single device, use the command:

# btrfs filesystem resize amount /mount-point

For example:


# btrfs filesystem resize +200M /btrfssingle


Resize '/btrfssingle' of '+200M'

To enlarge a multi-device file system, the device to be enlarged must be specified. First, show all devices
that have a btrfs file system at a specified mount point:

# btrfs filesystem show /mount-point

For example:

# btrfs filesystem show /btrfstest


Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39
Total devices 4 FS bytes used 192.00KiB
devid 1 size 1.00GiB used 224.75MiB path /dev/vdc
devid 2 size 524.00MiB used 204.75MiB path /dev/vdd
devid 3 size 1.00GiB used 8.00MiB path /dev/vde
devid 4 size 1.00GiB used 8.00MiB path /dev/vdf

Btrfs v3.16.2

Then, after identifying the devid of the device to be enlarged, use the following command:

# btrfs filesystem resize devid:amount /mount-point

For example:

# btrfs filesystem resize 2:+200M /btrfstest


Resize '/btrfstest/' of '2:+200M'

NOTE

The amount can also be max instead of a specified amount. This will use all remaining
free space on the device.

Shrinking a btrfs File System


To shrink the file system on a single device, use the command:

# btrfs filesystem resize amount /mount-point

For example:

# btrfs filesystem resize -200M /btrfssingle


Resize '/btrfssingle' of '-200M'

To shrink a multi-device file system, the device to be shrunk must be specified. First, show all devices
that have a btrfs file system at a specified mount point:

# btrfs filesystem show /mount-point

For example:


# btrfs filesystem show /btrfstest


Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39
Total devices 4 FS bytes used 192.00KiB
devid 1 size 1.00GiB used 224.75MiB path /dev/vdc
devid 2 size 524.00MiB used 204.75MiB path /dev/vdd
devid 3 size 1.00GiB used 8.00MiB path /dev/vde
devid 4 size 1.00GiB used 8.00MiB path /dev/vdf

Btrfs v3.16.2

Then, after identifying the devid of the device to be shrunk, use the following command:

# btrfs filesystem resize devid:amount /mount-point

For example:

# btrfs filesystem resize 2:-200M /btrfstest


Resize '/btrfstest' of '2:-200M'

Set the File System Size


To set the file system to a specific size on a single device, use the command:

# btrfs filesystem resize amount /mount-point

For example:

# btrfs filesystem resize 700M /btrfssingle


Resize '/btrfssingle' of '700M'

To set the file system size of a multi-device file system, the device to be changed must be specified.
First, show all devices that have a btrfs file system at the specified mount point:

# btrfs filesystem show /mount-point

For example:

# btrfs filesystem show /btrfstest


Label: none uuid: 755b41b7-7a20-4a24-abb3-45fdbed1ab39
Total devices 4 FS bytes used 192.00KiB
devid 1 size 1.00GiB used 224.75MiB path /dev/vdc
devid 2 size 724.00MiB used 204.75MiB path /dev/vdd
devid 3 size 1.00GiB used 8.00MiB path /dev/vde
devid 4 size 1.00GiB used 8.00MiB path /dev/vdf

Btrfs v3.16.2

Then, after identifying the devid of the device to be changed, use the following command:

# btrfs filesystem resize devid:amount /mount-point

For example:


# btrfs filesystem resize 2:300M /btrfstest


Resize '/btrfstest' of '2:300M'

6.4. INTEGRATED VOLUME MANAGEMENT OF MULTIPLE DEVICES


A btrfs file system can be created on top of many devices, and more devices can be added after the file
system has been created. By default, metadata will be mirrored across two devices and data will be
striped across all devices present, however if only one device is present, metadata will be duplicated on
that device.

6.4.1. Creating a File System with Multiple Devices


The mkfs.btrfs command, as detailed in Section 6.1, “Creating a btrfs File System”, accepts the
options -d for data, and -m for metadata. Valid specifications are:

raid0

raid1

raid10

dup

single

The -m single option instructs that no duplication of metadata is done. This may be desired when
using hardware raid.

NOTE

RAID 10 requires at least four devices to run correctly.

Example 6.1. Creating a RAID 10 btrfs File System

Create a file system across four devices (metadata mirrored, data striped).

# mkfs.btrfs /dev/device1 /dev/device2 /dev/device3 /dev/device4

Stripe the metadata without mirroring.

# mkfs.btrfs -m raid0 /dev/device1 /dev/device2

Use raid10 for both data and metadata.

# mkfs.btrfs -m raid10 -d raid10 /dev/device1 /dev/device2 /dev/device3 /dev/device4

Do not duplicate metadata on a single drive.

# mkfs.btrfs -m single /dev/device


Use the single option to use the full capacity of each drive when the drives are different sizes.

# mkfs.btrfs -d single /dev/device1 /dev/device2 /dev/device3

To add a new device to an already created multi-device file system, use the following command:

# btrfs device add /dev/device1 /mount-point

After rebooting or reloading the btrfs module, use the btrfs device scan command to discover all
multi-device file systems. See Section 6.4.2, “Scanning for btrfs Devices” for more information.

6.4.2. Scanning for btrfs Devices


Use btrfs device scan to scan all block devices under /dev and probe for btrfs volumes. This must
be performed after loading the btrfs module if running with more than one device in a file system.

To scan all devices, use the following command:

# btrfs device scan

To scan a single device, use the following command:

# btrfs device scan /dev/device

6.4.3. Adding New Devices to a btrfs File System


Use the btrfs filesystem show command to list all the btrfs file systems and which devices they
include.

The btrfs device add command is used to add new devices to a mounted file system.

The btrfs filesystem balance command balances (restripes) the allocated extents across all
existing devices.

An example of all these commands together to add a new device is as follows:

Example 6.2. Add a New Device to a btrfs File System

First, create and mount a btrfs file system. Refer to Section 6.1, “Creating a btrfs File System” for
more information on how to create a btrfs file system, and to Section 6.2, “Mounting a btrfs file
system” for more information on how to mount a btrfs file system.

# mkfs.btrfs /dev/device1
# mount /dev/device1

Next, add a second device to the mounted btrfs file system.

# btrfs device add /dev/device2 /mount-point

The metadata and data on these devices are still stored only on /dev/device1. It must now be
balanced to spread across all devices.


# btrfs filesystem balance /mount-point

Balancing a file system will take some time as it reads all of the file system's data and metadata and
rewrites it across the new device.

6.4.4. Converting a btrfs File System


To convert a non-raid file system to a raid, add a device and run a balance filter that changes the chunk
allocation profile.

Example 6.3. Converting a btrfs File System

To convert an existing single device system, /dev/sdb1 in this case, into a two device, raid1 system
in order to protect against a single disk failure, use the following commands:

# mount /dev/sdb1 /mnt


# btrfs device add /dev/sdc1 /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt

IMPORTANT

If the metadata is not converted from the single-device default, it remains as DUP. This
does not guarantee that copies of the block are on separate devices. If data is not
converted it does not have any redundant copies at all.

6.4.5. Removing btrfs Devices


Use the btrfs device delete command to remove an online device. It redistributes any extents in
use to other devices in the file system in order to be safely removed.

Example 6.4. Removing a Device on a btrfs File System

First create and mount a few btrfs file systems.

# mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde


# mount /dev/sdb /mnt

Add some data to the file system.

Finally, remove the required device.

# btrfs device delete /dev/sdc /mnt

6.4.6. Replacing Failed Devices on a btrfs File System


Section 6.4.5, “Removing btrfs Devices” can be used to remove a failed device provided the super block
can still be read. However, if a device is missing or the super block corrupted, the file system will need to
be mounted in a degraded mode:


# mkfs.btrfs -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

If one of the devices is destroyed or removed, use -o degraded to force the mount
to ignore missing devices:

# mount -o degraded /dev/sdb /mnt

'missing' is a special device name:

# btrfs device delete missing /mnt

The command btrfs device delete missing removes the first device that is described by the file
system metadata but not present when the file system was mounted.

IMPORTANT

It is impossible to go below the minimum number of devices required for the specific raid
layout, even including the missing one. It may be required to add a new device in order to
remove the failed one.

For example, for a raid1 layout with two devices, if a device fails it is required to:

1. mount in degraded mode,

2. add a new device,

3. and, remove the missing device.

6.4.7. Registering a btrfs File System in /etc/fstab

If you do not have an initrd or it does not perform a btrfs device scan, it is possible to mount a multi-
volume btrfs file system by passing all the devices in the file system explicitly to the mount command.

Example 6.5. Example /etc/fstab Entry

An example of a suitable /etc/fstab entry would be:

/dev/sdb /mnt btrfs device=/dev/sdb,device=/dev/sdc,device=/dev/sdd,device=/dev/sde 0

Note that using universally unique identifiers (UUIDs) also works and is more stable than using device
paths.

6.5. SSD OPTIMIZATION


Using the btrfs file system can optimize SSD. There are two ways this can be done.

The first way is that mkfs.btrfs turns off metadata duplication on a single device when
/sys/block/device/queue/rotational is zero for the single specified device. This is equivalent
to specifying -m single on the command line. It can be overridden and duplicate metadata forced by
providing the -m dup option. Duplication is not required because the SSD firmware can potentially lose
both copies, so duplicating metadata only wastes space and incurs a performance cost.

The second way is through a group of SSD mount options: ssd, nossd, and ssd_spread.

The ssd option does several things:

It allows larger metadata cluster allocation.

It allocates data more sequentially where possible.

It disables btree leaf rewriting to match key and block order.

It commits log fragments without batching multiple processes.

NOTE

The ssd mount option only enables the ssd option. Use the nossd option to disable it.

Some SSDs perform best when reusing block numbers often, while others perform much better when
clustering strictly allocates big chunks of unused space. By default, mount -o ssd will find groupings of
blocks where there are several free blocks that might have allocated blocks mixed in. The command
mount -o ssd_spread ensures there are no allocated blocks mixed in. This improves performance
on lower end SSDs.
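
For example (the device path and mount point are placeholders):

# mount -o ssd /dev/device /mount-point
# mount -o ssd_spread /dev/device /mount-point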

NOTE

The ssd_spread option enables both the ssd and the ssd_spread options. Use the
nossd option to disable both of these options.

The ssd_spread option is never automatically set if none of the ssd options are provided
and any of the devices are non-rotational.

These options all need to be tested with your specific build to see whether their use improves or
reduces performance, as each combination of SSD firmware and application load is different.

6.6. BTRFS REFERENCES


The man page btrfs(8) covers all important management commands. In particular this includes:

All the subvolume commands for managing snapshots.

The device commands for managing devices.

The scrub, balance, and defragment commands.

The man page mkfs.btrfs(8) contains information on creating a btrfs file system including all the
options regarding it.

The man page btrfsck(8) provides information regarding fsck on btrfs systems.


CHAPTER 7. GLOBAL FILE SYSTEM 2


The Red Hat Global File System 2 (GFS2) is a native file system that interfaces directly with the Linux
kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs
distributed metadata and multiple journals.

GFS2 is based on 64-bit architecture, which can theoretically accommodate an 8 exabyte file system.
However, the current supported maximum size of a GFS2 file system is 100 TB. If a system requires
GFS2 file systems larger than 100 TB, contact your Red Hat service representative.

When determining the size of a file system, consider its recovery needs. Running the fsck command on
a very large file system can take a long time and consume a large amount of memory. Additionally, in the
event of a disk or disk-subsystem failure, recovery time is limited by the speed of backup media.

When configured in a Red Hat Cluster Suite, Red Hat GFS2 nodes can be configured and managed with
Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing
among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system namespace
across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way
that processes on the same node can share files on a local file system, with no discernible difference.
For information about the Red Hat Cluster Suite, see Red Hat's Cluster Administration guide.

A GFS2 file system must be built on a logical volume (created with LVM) that is a linear or mirrored volume.
Logical volumes created with LVM in a Red Hat Cluster Suite are managed with CLVM (a cluster-wide
implementation of LVM), enabled by the CLVM daemon clvmd, and running in a Red Hat Cluster Suite
cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster,
allowing all nodes in the cluster to share the logical volumes. For information on the Logical Volume
Manager, see Red Hat's Logical Volume Manager Administration guide.

The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes.

For comprehensive information on the creation and configuration of GFS2 file systems in clustered and
non-clustered storage, see Red Hat's Global File System 2 guide.


CHAPTER 8. NETWORK FILE SYSTEM (NFS)


A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with
those file systems as though they are mounted locally. This enables system administrators to
consolidate resources onto centralized servers on the network.

This chapter focuses on fundamental NFS concepts and supplemental information.

8.1. INTRODUCTION TO NFS


Currently, there are two major versions of NFS included in Red Hat Enterprise Linux:

NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling
than the previous NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access
more than 2 GB of file data.

NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an
rpcbind service, supports ACLs, and utilizes stateful operations.

Red Hat Enterprise Linux fully supports NFS version 4.2 (NFSv4.2) since the Red Hat
Enterprise Linux 7.4 release.

Following are the features of NFSv4.2 in Red Hat Enterprise Linux 7.5:

Server-Side Copy: NFSv4.2 supports the copy_file_range() system call, which allows the NFS
client to efficiently copy data without wasting network resources.

Sparse Files: A sparse file is a file with one or more holes; holes are unallocated or uninitialized
data blocks consisting only of zeroes. Sparse file support improves storage efficiency, and the
lseek() operation in NFSv4.2 supports seek_hole() and seek_data(), which allow an
application to map out the location of holes in a sparse file.

Space Reservation: Permits storage servers to reserve free space, which prevents servers from
running out of space. NFSv4.2 supports the allocate() operation to reserve space, the
deallocate() operation to unreserve space, and the fallocate() operation to preallocate or
deallocate space in a file.

Labeled NFS: It enforces data access rights and enables SELinux labels between a client and a
server for individual files on an NFS file system.

Layout Enhancements: NFSv4.2 provides a new operation, layoutstats(), which the client can
use to notify the metadata server about its communication with the layout.

Versions of Red Hat Enterprise Linux earlier than 7.4 support NFS up to version 4.1.

Following are the features of NFSv4.1:

Enhances performance and security of the network, and includes client-side support for Parallel
NFS (pNFS).

No longer requires a separate TCP connection for callbacks, which allows an NFS server to
grant delegations even when it cannot contact the client. For example, when NAT or a firewall
interferes.


It provides exactly once semantics (except for reboot operations), preventing a previous issue
whereby certain operations could return an inaccurate result if a reply was lost and the operation
was sent twice.

NFS clients attempt to mount using NFSv4.1 by default, and fall back to NFSv4.0 when the server does
not support NFSv4.1. The mount later falls back to NFSv3 when the server does not support NFSv4.0.

NOTE

NFS version 2 (NFSv2) is no longer supported by Red Hat.

All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with
NFSv4 requiring it. NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to
provide a stateless network connection between the client and server.

When using NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol
overhead than TCP. This can translate into better performance on very clean, non-congested networks.
However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to
saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire
RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these
reasons, TCP is the preferred protocol when connecting to an NFS server.

The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also
listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind [1],
lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server to set
up the exports, but is not involved in any over-the-wire operations.

NOTE

TCP is the default transport protocol for NFS version 3 under Red Hat Enterprise Linux.
UDP can be used for compatibility purposes as needed, but is not recommended for wide
usage. NFSv4 requires TCP.

All the RPC/NFS daemons have a '-p' command line option that can set the port,
making firewall configuration easier.

After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration
file to determine whether the client is allowed to access any exported file systems. Once verified, all file
and directory operations are available to the user.

IMPORTANT

In order for NFS to work with a default installation of Red Hat Enterprise Linux with a
firewall enabled, configure IPTables with the default TCP port 2049. Without proper
IPTables configuration, NFS will not function properly.

The NFS initialization script and rpc.nfsd process now allow binding to any specified
port during system start up. However, this can be error-prone if the port is unavailable, or
if it conflicts with another daemon.

8.1.1. Required Services


Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide
NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers.
RPC services under Red Hat Enterprise Linux 7 are controlled by the rpcbind service. To share or
mount NFS file systems, the following services work together depending on which version of NFS is
implemented:

NOTE

The portmap service was used to map RPC program numbers to IP address port number
combinations in earlier versions of Red Hat Enterprise Linux. This service is now replaced
by rpcbind in Red Hat Enterprise Linux 7 to enable IPv6 support.

nfs
systemctl start nfs starts the NFS server and the appropriate RPC processes to service
requests for shared NFS file systems.

nfslock
systemctl start nfs-lock activates a mandatory service that starts the appropriate RPC
processes allowing NFS clients to lock files on the server.

rpcbind
rpcbind accepts port reservations from local RPC services. These ports are then made available (or
advertised) so the corresponding remote RPC services can access them. rpcbind responds to
requests for RPC services and sets up connections to the requested RPC service. This is not used
with NFSv4.

The following RPC processes facilitate NFS services:

rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that
the requested NFS share is currently exported by the NFS server, and that the client is allowed to
access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and
provides the File-Handle for this NFS share back to the NFS client.

rpc.nfsd
rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works
with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads
each time an NFS client connects. This process corresponds to the nfs service.

lockd
lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock
Manager (NLM) protocol, which allows NFSv3 clients to lock files on the server. It is started
automatically whenever the NFS server is run and whenever an NFS file system is mounted.

rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients
when an NFS server is restarted without being gracefully brought down. rpc.statd is started
automatically by the nfslock service, and does not require user configuration. This is not used with
NFSv4.


rpc.rquotad
This process provides user quota information for remote users. rpc.rquotad is started
automatically by the nfs service and does not require user configuration.

rpc.idmapd
rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4
names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with
NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the "Domain" parameter
should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the
same as the DNS domain name, this parameter can be skipped. The client and server must agree on
the NFSv4 mapping domain for ID mapping to function properly.
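
A minimal sketch of the relevant /etc/idmapd.conf setting, assuming a hypothetical mapping domain:

[General]
Domain = example.com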

NOTE

In Red Hat Enterprise Linux 7, only the NFSv4 server uses rpc.idmapd. The NFSv4
client uses the keyring-based idmapper nfsidmap. nfsidmap is a stand-alone
program that is called by the kernel on demand to perform ID mapping; it is not a
daemon. Only if there is a problem with nfsidmap does the client fall back to using
rpc.idmapd. More information regarding nfsidmap can be found on the nfsidmap
man page.

8.2. PNFS
Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat
Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements
to performance. That is, when a server implements pNFS as well, a client is able to access data through
multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.

NOTE

The protocol allows for three possible pNFS layout types: files, objects, and blocks. While
the Red Hat Enterprise Linux 6.4 client only supported the files layout type, Red Hat
Enterprise Linux 7 supports the files layout type, with objects and blocks layout types
being included as a technology preview.

pNFS Flex Files


Flexible Files is a new layout for pNFS that enables the aggregation of standalone NFSv3 and NFSv4
servers into a scale out name space. The Flex Files feature is part of the NFSv4.2 standard as described
in the RFC 7862 specification.

Red Hat Enterprise Linux can mount NFS shares from Flex Files servers since Red Hat Enterprise
Linux 7.4.

Mounting pNFS Shares


To enable pNFS functionality, mount shares from a pNFS-enabled server with NFS version 4.1
or later:

# mount -t nfs -o v4.1 server:/remote-export /local-directory


After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically
loaded on the first mount. The mount entry in the output should contain minorversion=1. Use
the following command to verify the module was loaded:

$ lsmod | grep nfs_layout_nfsv41_files

To mount an NFS share with the Flex Files feature from a server that supports Flex Files, use
NFS version 4.2 or later:

# mount -t nfs -o v4.2 server:/remote-export /local-directory

Verify that the nfs_layout_flexfiles module has been loaded:

$ lsmod | grep nfs_layout_flexfiles

Additional Resources
For more information on pNFS, refer to: http://www.pnfs.com.

8.3. CONFIGURING NFS CLIENT


The mount command mounts NFS shares on the client side. Its format is as follows:

# mount -t nfs -o options server:/remote/export /local/directory

This command uses the following variables:

options
A comma-delimited list of mount options; for more information on valid NFS mount options, see
Section 8.5, “Common NFS Mount Options”.

server
The hostname, IP address, or fully qualified domain name of the server exporting the file system you
wish to mount

/remote/export
The file system or directory being exported from the server, that is, the directory you wish to mount

/local/directory
The client location where /remote/export is mounted

The NFS protocol version used in Red Hat Enterprise Linux 7 is identified by the mount options
nfsvers or vers. By default, mount uses NFSv4 with mount -t nfs. If the server does not support
NFSv4, the client automatically steps down to a version supported by the server. If the nfsvers/vers
option is used to pass a particular version not supported by the server, the mount fails. The file system
type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o
nfsvers=4 host:/remote/export /local/directory.

For more information, see man mount.

If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat
Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the
/etc/fstab file and the autofs service. For more information, see Section 8.3.1, “Mounting NFS File
Systems Using /etc/fstab” and Section 8.4, “autofs”.

8.3.1. Mounting NFS File Systems Using /etc/fstab

An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file.
The line must state the hostname of the NFS server, the directory on the server being exported, and the
directory on the local machine where the NFS share is to be mounted. You must be root to modify the
/etc/fstab file.

Example 8.1. Syntax Example

The general syntax for the line in /etc/fstab is as follows:

server:/usr/local/pub /pub nfs defaults 0 0

The mount point /pub must exist on the client machine before this command can be executed. After
adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount
point /pub is mounted from the server.

A valid /etc/fstab entry to mount an NFS export should contain the following information:

server:/remote/export /local/directory nfs options 0 0

The variables server, /remote/export, /local/directory, and options are the same ones used when
manually mounting an NFS share. For more information, see Section 8.3, “Configuring NFS Client”.
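
For illustration only (the host name, paths, and options here are placeholders rather than recommended
defaults), a fuller entry that pins the protocol version and marks the mount as network-dependent might
look like:

nfsserver.example.com:/exports/data /mnt/data nfs rw,nfsvers=4.1,_netdev 0 0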

NOTE

The mount point /local/directory must exist on the client before /etc/fstab is read.
Otherwise, the mount fails.

After editing /etc/fstab, regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

Additional Resources

For more information about /etc/fstab, refer to man fstab.

8.4. AUTOFS
One drawback of using /etc/fstab is that, regardless of how infrequently a user accesses the NFS
mounted file system, the system must dedicate resources to keep the mounted file system in place. This
is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at
one time, overall system performance can be affected. An alternative to /etc/fstab is to use the
kernel-based automount utility. An automounter consists of two components:

a kernel module that implements a file system, and


a user-space daemon that performs all of the other functions.

The automount utility can mount and unmount NFS file systems automatically (on-demand mounting),
therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS,
CIFS, and local file systems.

IMPORTANT

The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File
System Client' groups. As such, it is no longer installed by default with the Base group.
Ensure that nfs-utils is installed on the system first before attempting to automount an
NFS share.

autofs is also part of the 'Network File System Client' group.

autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be
changed to use another supported network source and name using the autofs configuration (in
/etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An
instance of the autofs version 4 daemon was run for each mount point configured in the master map
and so it could be run manually from the command line for any given mount point. This is not possible
with autofs version 5, because it uses a single daemon to manage all configured mount points; as
such, all automounts must be configured in the master map. This is in line with the usual requirements of
other industry standard automounters. Mount point, hostname, exported directory, and options can all be
specified in a set of files (or other supported network sources) rather than configuring them manually for
each host.

8.4.1. Improvements in autofs Version 5 over Version 4


autofs version 5 features the following enhancements over version 4:

Direct map support


Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in
the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries
in a direct map contain an absolute path name as a key (instead of the relative path names used in
indirect maps).

Lazy mount and unmount support


Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of
this is the -hosts map, commonly used for automounting all exports from a host under /net/host
as a multi-mount map entry. When using the -hosts map, an ls of /net/host will mount autofs
trigger mounts for each export from host. These will then be mounted and expired as they are
accessed. This can greatly reduce the number of active mounts needed when accessing a server
with a large number of exports.

Enhanced LDAP support


The autofs configuration file (/etc/sysconfig/autofs) provides a mechanism to specify the
autofs schema that a site implements, thus precluding the need to determine this via trial and error
in the application itself. In addition, authenticated binds to the LDAP server are now supported, using
most mechanisms supported by the common LDAP server implementations. A new configuration file
has been added for this support: /etc/autofs_ldap_auth.conf. The default configuration file is
self-documenting, and uses an XML format.


Proper use of the Name Service Switch (nsswitch) configuration.


The Name Service Switch configuration file exists to provide a means of determining from where
specific configuration data comes. The reason for this configuration is to allow administrators the
flexibility of using the back-end database of choice, while maintaining a uniform software interface to
access the data. While the version 4 automounter is becoming increasingly better at handling the
NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete
implementation.

For more information on the supported syntax of this file, see man nsswitch.conf. Not all NSS
databases are valid map sources and the parser will reject ones that are invalid. Valid sources are
files, yp, nis, nisplus, ldap, and hesiod.

Multiple master map entries per autofs mount point


One thing that is frequently used but not yet mentioned is the handling of multiple master map entries
for the direct mount point /-. The map keys for each entry are merged and behave as one map.

Example 8.2. Multiple Master Map Entries per autofs Mount Point

Following is an example in the connectathon test maps for the direct mounts:

/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct

8.4.2. Configuring autofs


The primary configuration file for the automounter is /etc/auto.master, also referred to as the
master map, which may be changed as described in Section 8.4.1, “Improvements in autofs Version 5
over Version 4”. The master map lists autofs-controlled mount points on the system, and their
corresponding configuration files or network sources known as automount maps. The format of the
master map is as follows:

mount-point map-name options

The variables used in this format are:

mount-point
The autofs mount point, /home, for example.

map-name
The name of a map source which contains a list of mount points, and the file system location from
which those mount points should be mounted.

options
If supplied, these apply to all entries in the given map, provided they do not themselves have options
specified. This behavior is different from autofs version 4, where options were cumulative. This has
been changed to implement mixed environment compatibility.


Example 8.3. /etc/auto.master File

The following is a sample line from /etc/auto.master file (displayed with cat
/etc/auto.master):

/home /etc/auto.misc

The general format of maps is similar to the master map; however, the options appear between the
mount point and the location instead of at the end of the entry as in the master map:

mount-point [options] location

The variables used in this format are:

mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or
the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-
point) may be followed by a space separated list of offset directories (subdirectory names each
beginning with a /) making them what is known as a multi-mount entry.

options
Whenever supplied, these are the mount options for the map entries that do not specify their own
options.

location
This refers to the file system location such as a local file system path (preceded with the Sun map
format escape character ":" for map names beginning with /), an NFS file system or other valid file
system location.

The following is a sample of contents from a map file (for example, /etc/auto.misc):

payroll -fstype=nfs personnel:/dev/hda3
sales -fstype=ext3 :/dev/hda4

The first column in a map file indicates the autofs mount point (sales and payroll from the server
called personnel). The second column indicates the options for the autofs mount while the third
column indicates the source of the mount. Following the given configuration, the autofs mount points will
be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not
needed for correct operation.
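
In addition to the indirect map shown above, a direct map (see Section 8.4.1, “Improvements in autofs
Version 5 over Version 4”) can be sketched as follows; the map file name, server, and paths are illustrative
placeholders only. In /etc/auto.master:

/- /etc/auto.direct

In the corresponding /etc/auto.direct map, each key is an absolute path:

/mnt/engineering -fstype=nfs fileserver.example.com:/export/engineering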

The automounter creates the directories if they do not exist. If the directories existed before the
automounter was started, the automounter will not remove them when it exits.

To start the automount daemon, use the following command:

# systemctl start autofs

To restart the automount daemon, use the following command:

# systemctl restart autofs


Using the given configuration, if a process requires access to an autofs unmounted directory such as
/home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a
timeout is specified, the directory is automatically unmounted if the directory is not accessed for the
timeout period.

To view the status of the automount daemon, use the following command:

# systemctl status autofs

8.4.3. Overriding or Augmenting Site Configuration Files


It can be useful to override site defaults for a specific mount point on a client system. For example,
consider the following conditions:

Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following
directive:

automount: files nis

The auto.master file contains:

+auto.master

The NIS auto.master map file contains:

/home auto.home

The NIS auto.home map contains:

beth fileserver.example.com:/export/home/beth
joe fileserver.example.com:/export/home/joe
* fileserver.example.com:/export/home/&

The file map /etc/auto.home does not exist.

Given these conditions, let's assume that the client system needs to override the NIS map auto.home
and mount home directories from a different server. In this case, the client needs to use the following
/etc/auto.master map:

/home /etc/auto.home
+auto.master

The /etc/auto.home map contains the entry:

* labserver.example.com:/export/home/&

Because the automounter only processes the first occurrence of a mount point, /home contains the
contents of /etc/auto.home instead of the NIS auto.home map.


Alternatively, to augment the site-wide auto.home map with just a few entries, create an
/etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map.
Then the /etc/auto.home file map looks similar to:

mydir someserver:/export/mydir
+auto.home

With these NIS auto.home map conditions, the ls /home command outputs:

beth joe mydir

This last example works as expected because autofs does not include the contents of a file map of the
same name as the one it is reading. As such, autofs moves on to the next map source in the
nsswitch configuration.

8.4.4. Using LDAP to Store Automounter Maps


LDAP client libraries must be installed on all systems configured to retrieve automounter maps from
LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a
dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf.
Ensure that BASE, URI, and schema are set appropriately for your site.

The most recently established schema for storing automount maps in LDAP is described by
rfc2307bis. To use this schema it is necessary to set it in the autofs configuration
(/etc/sysconfig/autofs) by removing the comment characters from the schema definition. For
example:

Example 8.4. Setting autofs Configuration

DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"

Ensure that these are the only schema entries not commented in the configuration. The automountKey
replaces the cn attribute in the rfc2307bis schema. Following is an example of an LDAP Data
Interchange Format (LDIF) configuration:

Example 8.5. LDIF Configuration

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#

# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master

# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.master,dc=example,dc=com> with scope
subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# /home, auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: automount
cn: /home
automountKey: /home
automountInformation: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#

# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home

# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.home,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# foo, auto.home, example.com
dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: foo
automountInformation: filer.example.com:/export/foo

# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&

8.5. COMMON NFS MOUNT OPTIONS


Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at
mount time to make the mounted share easier to use. These options can be used with manual mount
commands, /etc/fstab settings, and autofs.

The following are options commonly used for NFS mounts:

intr
Allows NFS requests to be interrupted if the server goes down or cannot be reached.

lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid
arguments for mode are all, none, or pos/positive.

nfsvers=version
Specifies which version of the NFS protocol to use, where version is 3 or 4. This is useful for hosts
that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by
the kernel and mount command.

The option vers is identical to nfsvers, and is included in this release for compatibility reasons.

noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat
Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible
with older systems.

nolock
Disables file locking. This setting is sometimes required when connecting to very old NFS servers.

noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a
non-Linux file system containing incompatible binaries.

nosuid
Disables set-user-identifier or set-group-identifier bits. This prevents remote users
from gaining higher privileges by running a setuid program.

port=num
Specifies the numeric value of the NFS server port. If num is 0 (the default value), then mount
queries the remote host's rpcbind service for the port number to use. If the remote host's NFS
daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is
used instead.

rsize=num and wsize=num


These options set the maximum number of bytes to be transferred in a single NFS read or write
operation.

There is no fixed default value for rsize and wsize. By default, NFS uses the largest possible value
that both the server and the client support. In Red Hat Enterprise Linux 7, the client and server
maximum is 1,048,576 bytes. For more details, see the What are the default and maximum values for
rsize and wsize with NFS mounts? KBase article.


sec=mode
Its default setting is sec=sys, which uses local UNIX UIDs and GIDs. These use AUTH_SYS to
authenticate NFS operations.

sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.

sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS
operations using secure checksums to prevent data tampering.

sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to
prevent traffic sniffing. This is the most secure setting, but it also involves the most performance
overhead.

tcp
Instructs the NFS mount to use the TCP protocol.

udp
Instructs the NFS mount to use the UDP protocol.

For more information, see man mount and man nfs.
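
As a hedged example that combines several of the options listed above (the server name and paths are
placeholders), a manual mount might look like:

# mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,nosuid,noexec server:/export /mnt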

8.6. STARTING AND STOPPING THE NFS SERVER


Prerequisites

For servers that support NFSv2 or NFSv3 connections, the rpcbind[1] service must be running.
To verify that rpcbind is active, use the following command:

$ systemctl status rpcbind

To configure an NFSv4-only server, which does not require rpcbind, see Section 8.7.7,
“Configuring an NFSv4-only Server”.

On Red Hat Enterprise Linux 7.0, if your NFS server exports NFSv3 and is enabled to start at
boot, you need to manually start and enable the nfs-lock service:

# systemctl start nfs-lock


# systemctl enable nfs-lock

On Red Hat Enterprise Linux 7.1 and later, nfs-lock starts automatically if needed, and an
attempt to enable it manually fails.

Procedures
To start an NFS server, use the following command:

# systemctl start nfs

To enable NFS to start at boot, use the following command:

# systemctl enable nfs


To stop the server, use:

# systemctl stop nfs

The restart option is a shorthand way of stopping and then starting NFS. This is the most
efficient way to make configuration changes take effect after editing the configuration file for
NFS. To restart the server type:

# systemctl restart nfs

After you edit the /etc/sysconfig/nfs file, restart the nfs-config service by running the
following command for the new values to take effect:

# systemctl restart nfs-config

The try-restart command only restarts nfs if it is currently running. This command is the
equivalent of condrestart (conditional restart) in Red Hat init scripts and is useful because it
does not start the daemon if NFS is not running.

To conditionally restart the server, type:

# systemctl try-restart nfs

To reload the NFS server configuration file without restarting the service type:

# systemctl reload nfs

8.7. CONFIGURING THE NFS SERVER


There are two ways to configure exports on an NFS server:

Manually editing the NFS configuration file, that is, /etc/exports, and

Through the command line, that is, by using the command exportfs

8.7.1. The /etc/exports Configuration File

The /etc/exports file controls which file systems are exported to remote hosts and specifies options.
It follows the following syntax rules:

Blank lines are ignored.

To add a comment, start a line with the hash mark (#).

You can wrap long lines with a backslash (\).

Each exported file system should be on its own individual line.

Any lists of authorized hosts placed after an exported file system must be separated by space
characters.


Options for each of the hosts must be placed in parentheses directly after the host identifier,
without any spaces separating the host and the first parenthesis.

Each entry for an exported file system has the following structure:

export host(options)

The aforementioned structure uses the following variables:

export
The directory being exported

host
The host or network to which the export is being shared

options
The options to be used for host

It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the
same line as a space-delimited list, with each hostname followed by its respective options (in
parentheses), as in:

export host1(options1) host2(options2) host3(options3)

For information on different methods for specifying hostnames, see Section 8.7.5, “Hostname Formats”.

In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted
to access it, as in the following example:

Example 8.6. The /etc/exports File

/exported/directory bob.example.com

Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no
options are specified in this example, NFS uses default settings.

The default settings are:

ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file
system. To allow hosts to make changes to the file system (that is, read and write), specify the rw
option.

sync
The NFS server will not reply to requests before changes made by previous requests are written to
disk. To enable asynchronous writes instead, specify the option async.

wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can
improve performance as it reduces the number of times the disk must be accessed by separate write
commands, thereby reducing write overhead. To disable this, specify the no_wdelay. no_wdelay is
only available if the default sync option is also specified.

root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges;
instead, the NFS server assigns them the user ID nfsnobody. This effectively "squashes" the power
of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote
server. To disable root squashing, specify no_root_squash.

To squash every remote user (including root), use all_squash. To specify the user and group IDs that
the NFS server should assign to remote users from a particular host, use the anonuid and anongid
options, respectively, as in:

export host(anonuid=uid,anongid=gid)

Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid
options allow you to create a special user and group account for remote NFS users to share.

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable
this feature, specify the no_acl option when exporting the file system.

Each default for every exported file system must be explicitly overridden. For example, if the rw option is
not specified, then the exported file system is shared as read-only. The following is a sample line from
/etc/exports which overrides two default options:

/another/exported/directory 192.168.0.3(rw,async)

In this example 192.168.0.3 can mount /another/exported/directory/ read and write and all
writes to disk are asynchronous. For more information on exporting options, see man exportfs.

Other options are available where no default value is specified. These include the ability to disable sub-
tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early
NFS client implementations). For more information on these less-used options, see man exports.
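
For illustration only (the directory and network are placeholders), an export that grants read-write access
to a subnet and disables sub-tree checking might look like:

/srv/scratch 192.168.0.0/24(rw,sync,no_subtree_check)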

IMPORTANT

The format of the /etc/exports file is very precise, particularly in regards to use of the
space character. Remember to always separate exported file systems from hosts and
hosts from one another with a space character. However, there should be no other space
characters in the file except on comment lines.

For example, the following two lines do not mean the same thing:

/home bob.example.com(rw)
/home bob.example.com (rw)

The first line allows only users from bob.example.com read and write access to the
/home directory. The second line allows users from bob.example.com to mount the
directory as read-only (the default), while the rest of the world can mount it read/write.

8.7.2. The exportfs Command


Every file system being exported to remote users with NFS, as well as the access level for those file
systems, are listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs
command launches and reads this file, passes control to rpc.mountd (if NFSv3) for the actual mounting
process, then to rpc.nfsd where the file systems are then available to remote users.

When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export
or unexport directories without restarting the NFS service. When given the proper options, the
/usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since
rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list
of exported file systems take effect immediately.

The following is a list of commonly-used options available for /usr/sbin/exportfs:

-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in
/var/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to
/etc/exports.

-a
Causes all directories to be exported or unexported, depending on what other options are passed to
/usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file
systems specified in /etc/exports.

-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with
additional file systems to be exported. These file systems must be formatted in the same way they
are specified in /etc/exports. This option is often used to test an exported file system before
adding it permanently to the list of file systems to be exported. For more information on
/etc/exports syntax, see Section 8.7.1, “The /etc/exports Configuration File”.

-i
Ignores /etc/exports; only options given from the command line are used to define exported file
systems.

-u
Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file
sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.

-v
Verbose operation, where the file systems being exported or unexported are displayed in greater
detail when the exportfs command is executed.

If no options are passed to the exportfs command, it displays a list of currently exported file systems.
For more information about the exportfs command, see man exportfs.
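
As a hedged illustration of these options in combination (the network and path are placeholders): the first
command temporarily exports a directory that is not listed in /etc/exports, the second refreshes the
export list after editing /etc/exports, and the third lists the current exports verbosely:

# exportfs -o rw,sync 192.168.0.0/24:/srv/test
# exportfs -ra
# exportfs -v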

8.7.2.1. Using exportfs with NFSv4

In Red Hat Enterprise Linux 7, no extra steps are required to configure NFSv4 exports as any filesystems
mentioned are automatically available to NFSv3 and NFSv4 clients using the same path. This was not
the case in previous versions.


To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS="-N 4" in
/etc/sysconfig/nfs.

8.7.3. Running NFS Behind a Firewall


NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause issues for
configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the
/etc/sysconfig/nfs file to set which ports the RPC services run on.

The /etc/sysconfig/nfs file does not exist by default on all systems. If /etc/sysconfig/nfs
does not exist, create it and specify the following:

RPCMOUNTDOPTS="-p port"
This adds "-p port" to the rpc.mountd command line: rpc.mountd -p port.

To specify the ports to be used by the nlockmgr service, set the port number for the nlm_tcpport and
nlm_udpport options in the /etc/modprobe.d/lockd.conf file.
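
A minimal sketch of fixing these ports (the port numbers below are arbitrary examples, not required
values): in /etc/sysconfig/nfs, set

RPCMOUNTDOPTS="-p 892"

and in /etc/modprobe.d/lockd.conf, set

options lockd nlm_tcpport=32803 nlm_udpport=32769

The fixed ports, together with TCP and UDP port 2049 for NFS and port 111 for rpcbind, can then be
opened in the firewall.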

If NFS fails to start, check /var/log/messages. Commonly, NFS fails to start if you specify a port
number that is already in use. After editing /etc/sysconfig/nfs, you need to restart the nfs-
config service for the new values to take effect in Red Hat Enterprise Linux 7.2 and prior by running:

# systemctl restart nfs-config

Then, restart the NFS server:

# systemctl restart nfs-server

Run rpcinfo -p to confirm the changes have taken effect.

NOTE

To allow NFSv4.0 callbacks to pass through firewalls, set
/proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that
port on the client.

This process is not needed for NFSv4.1 or higher, and the other ports for mountd,
statd, and lockd are not required in a pure NFSv4 environment.

8.7.3.1. Discovering NFS exports

There are two ways to discover which file systems an NFS server exports.

On any server that supports NFSv3, use the showmount command:

$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar

On any server that supports NFSv4, mount the root directory and look around.


# mount myserver:/ /mnt/
# cd /mnt/
exports
# ls exports
foo
bar

On servers that support both NFSv4 and NFSv3, both methods work and give the same results.

NOTE

On older NFS servers predating Red Hat Enterprise Linux 6, depending on how they are
configured, it is possible to export file systems to NFSv4 clients at different paths. Because
these servers do not enable NFSv4 by default, this should not be a problem.

8.7.4. Accessing RPC Quota through a Firewall


If you export a file system that uses disk quotas, you can use the quota Remote Procedure Call (RPC)
service to provide disk quota data to NFS clients.

Procedure 8.1. Making RPC Quota Accessible Behind a Firewall

1. To enable the rpc-rquotad service, use the following command:

# systemctl enable rpc-rquotad

2. To start the rpc-rquotad service, use the following command:

# systemctl start rpc-rquotad

Note that rpc-rquotad is, if enabled, started automatically after starting the nfs-server
service.

3. To make the quota RPC service accessible behind a firewall, UDP or TCP port 875 needs to be
open. The default port number is defined in the /etc/services file.

You can override the default port number by appending -p port-number to the
RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.

4. Restart rpc-rquotad for changes in the /etc/sysconfig/rpc-rquotad file to take effect:

# systemctl restart rpc-rquotad

Setting Quotas from Remote Hosts


By default, quotas can only be read by remote hosts. To allow setting quotas, append the -S option to
the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.
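
For example, a sketch that combines the port and -S settings described above (adjust to your site):

RPCRQUOTADOPTS="-p 875 -S"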

Restart rpc-rquotad for changes in the /etc/sysconfig/rpc-rquotad file to take effect:

# systemctl restart rpc-rquotad

8.7.5. Hostname Formats


The host(s) can be in the following forms:

Single machine
A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved
by the server), or an IP address.

Series of machines specified with wildcards


Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses;
however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully
qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com
includes one.example.com but does not include one.two.example.com.

IP networks
Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example
192.168.0.0/24). Another acceptable format is a.b.c.d/netmask, where a.b.c.d is the network and
netmask is the netmask (for example, 192.168.100.8/255.255.255.0).

Netgroups
Use the format @group-name, where group-name is the NIS netgroup name.

8.7.6. Enabling NFS over RDMA (NFSoRDMA)


The remote direct memory access (RDMA) service works automatically in Red Hat Enterprise Linux 7 if
there is RDMA-capable hardware present.

To enable NFS over RDMA:

1. Install the rdma and rdma-core packages.

The /etc/rdma/rdma.conf file contains a line that sets XPRTRDMA_LOAD=yes by default,
which requests the rdma service to load the NFSoRDMA client module.

2. To enable automatic loading of NFSoRDMA server modules, add SVCRDMA_LOAD=yes on a
new line in /etc/rdma/rdma.conf.

RPCNFSDARGS="--rdma=20049" in the /etc/sysconfig/nfs file specifies the port number
on which the NFSoRDMA service listens for clients. RFC 5667 specifies that servers must listen
on port 20049 when providing NFSv4 services over RDMA.

3. Restart the nfs service after editing the /etc/rdma/rdma.conf file:

# systemctl restart nfs

Note that with earlier kernel versions, a system reboot is needed after editing
/etc/rdma/rdma.conf for the changes to take effect.

8.7.7. Configuring an NFSv4-only Server


By default, the NFS server supports NFSv2, NFSv3, and NFSv4 connections in Red Hat Enterprise
Linux 7. However, you can also configure NFS to support only NFS version 4.0 and later. This minimizes
the number of open ports and running services on the system, because NFSv4 does not require the
rpcbind service to listen on the network.


When your NFS server is configured as NFSv4-only, clients attempting to mount shares using NFSv2 or
NFSv3 fail with an error like the following:

Requested NFS version or transport protocol is not supported.

Procedure 8.2. Configuring an NFSv4-only Server

To configure your NFS server to support only NFS version 4.0 and later:

1. Disable NFSv2, NFSv3, and UDP by adding the following line to the /etc/sysconfig/nfs
configuration file:

RPCNFSDARGS="-N 2 -N 3 -U"

2. Optionally, disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not
necessary in the NFSv4-only case.

The effects of disabling these options are:

Clients that attempt to mount shares from your server using NFSv2 or NFSv3 become
unresponsive.

The NFS server itself is unable to mount NFSv2 and NFSv3 file systems.

To disable these options:

Add the following to the /etc/sysconfig/nfs file:

RPCMOUNTDOPTS="-N 2 -N 3"

Disable related services:

# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket

3. Restart the NFS server:

# systemctl restart nfs

The changes take effect as soon as you start or restart the NFS server.

Verifying the NFSv4-only Configuration

You can verify that your NFS server is configured in the NFSv4-only mode by using the netstat utility.

The following is an example netstat output on an NFSv4-only server; listening for RPCBIND,
MOUNT, and NSM is also disabled. Here, nfs is the only listening NFS service:

# netstat -ltu

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:nfs             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp        0      0 localhost:smtp          0.0.0.0:*               LISTEN
tcp6       0      0 [::]:nfs                [::]:*                  LISTEN
tcp6       0      0 [::]:12432              [::]:*                  LISTEN
tcp6       0      0 [::]:12434              [::]:*                  LISTEN
tcp6       0      0 localhost:7092          [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
udp        0      0 localhost:323           0.0.0.0:*
udp        0      0 0.0.0.0:bootpc          0.0.0.0:*
udp6       0      0 localhost:323           [::]:*

In comparison, the netstat output before configuring an NFSv4-only server includes the
sunrpc and mountd services:

# netstat -ltu

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:nfs             0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:36069           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:52364           0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:sunrpc          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:mountd          0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:ssh             0.0.0.0:*               LISTEN
tcp        0      0 localhost:smtp          0.0.0.0:*               LISTEN
tcp6       0      0 [::]:34941              [::]:*                  LISTEN
tcp6       0      0 [::]:nfs                [::]:*                  LISTEN
tcp6       0      0 [::]:sunrpc             [::]:*                  LISTEN
tcp6       0      0 [::]:mountd             [::]:*                  LISTEN
tcp6       0      0 [::]:12432              [::]:*                  LISTEN
tcp6       0      0 [::]:56881              [::]:*                  LISTEN
tcp6       0      0 [::]:12434              [::]:*                  LISTEN
tcp6       0      0 localhost:7092          [::]:*                  LISTEN
tcp6       0      0 [::]:ssh                [::]:*                  LISTEN
udp        0      0 localhost:323           0.0.0.0:*
udp        0      0 0.0.0.0:37190           0.0.0.0:*
udp        0      0 0.0.0.0:876             0.0.0.0:*
udp        0      0 localhost:877           0.0.0.0:*
udp        0      0 0.0.0.0:mountd          0.0.0.0:*
udp        0      0 0.0.0.0:38588           0.0.0.0:*
udp        0      0 0.0.0.0:nfs             0.0.0.0:*
udp        0      0 0.0.0.0:bootpc          0.0.0.0:*
udp        0      0 0.0.0.0:sunrpc          0.0.0.0:*
udp6       0      0 localhost:323           [::]:*
udp6       0      0 [::]:57683              [::]:*
udp6       0      0 [::]:876                [::]:*
udp6       0      0 [::]:mountd             [::]:*
udp6       0      0 [::]:40874              [::]:*
udp6       0      0 [::]:nfs                [::]:*
udp6       0      0 [::]:sunrpc             [::]:*

8.8. SECURING NFS


NFS is suitable for transparent sharing of entire file systems with a large number of known hosts.
However, with ease-of-use comes a variety of potential security problems. To minimize NFS security
risks and protect data on the server, consider the following sections when exporting NFS file systems on
a server or mounting them on a client.

8.8.1. NFS Security with AUTH_SYS and Export Controls


Traditionally, NFS has given two options in order to control access to exported files.

First, the server restricts which hosts are allowed to mount which file systems either by IP address or by
host name.

Second, the server enforces file system permissions for users on NFS clients in the same way it does
local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX) which relies on the client
to state the UID and GID's of the user. Be aware that this means a malicious or misconfigured client can
easily get this wrong and allow a user access to files that it should not.

To limit the potential risks, administrators often allow read-only access or squash user permissions to a
common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the
way it was originally intended.

Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file
system, the system associated with a particular hostname or fully qualified domain name can be pointed
to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount
the NFS share, since no username or password information is exchanged to provide additional security
for the NFS mount.

Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the
scope of the wildcard to encompass more systems than intended.

It is also possible to restrict access to the rpcbind[1] service with TCP wrappers. Creating rules with
iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.


For more information on securing NFS and rpcbind, refer to man iptables.

8.8.2. NFS Security with AUTH_GSS

NFSv4 revolutionized NFS security by mandating the implementation of RPCSEC_GSS and the
Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are
also available for all versions of NFS. In FIPS mode, only FIPS-approved algorithms can be used.

Unlike AUTH_SYS, with the RPCSEC_GSS Kerberos mechanism, the server does not depend on the
client to correctly represent which user is accessing the file. Instead, cryptography is used to
authenticate users to the server, which prevents a malicious client from impersonating a user without
having that user's Kerberos credentials. Using the RPCSEC_GSS Kerberos mechanism is the most
straightforward way to secure mounts because after configuring Kerberos, no additional setup is
needed.

Configuring Kerberos
Before configuring an NFSv4 Kerberos-aware server, you need to install and configure a Kerberos Key
Distribution Centre (KDC). Kerberos is a network authentication system that allows clients and servers to
authenticate to each other by using symmetric encryption and a trusted third party, the KDC. Red Hat
recommends using Identity Management (IdM) for setting up Kerberos.

Procedure 8.3. Configuring an NFS Server and Client for IdM to Use RPCSEC_GSS

1. Create the nfs/hostname.domain@REALM principal on the NFS server side.

Create the host/hostname.domain@REALM principal on both the server and the client
side.

Add the corresponding keys to keytabs for the client and server.

For instructions, see the Adding and Editing Service Entries and Keytabs and Setting up a
Kerberos-aware NFS Server sections in the Red Hat Enterprise Linux 7 Linux Domain Identity,
Authentication, and Policy Guide.

2. On the server side, use the sec= option to enable the wanted security flavors. To enable all
security flavors as well as non-cryptographic mounts:

/export *(sec=sys:krb5:krb5i:krb5p)

Valid security flavors to use with the sec= option are:

sys: no cryptographic protection, the default

krb5: authentication only

krb5i: integrity protection

krb5p: privacy protection

3. On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p, depending on the setup) to
the mount options:

# mount -o sec=krb5 server:/export /mnt


For information on how to configure an NFS client, see the Setting up a Kerberos-aware NFS
Client section in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and
Policy Guide.

Although Red Hat recommends using IdM, Active Directory (AD) Kerberos servers are also supported.
For details, see the following Red Hat Knowledgebase article: How to set up NFS using Kerberos
authentication on RHEL 7 using SSSD and Active Directory.

For more information, see the exports(5) and nfs(5) manual pages, and Section 8.5, “Common NFS
Mount Options”.

For further information on the RPCSEC_GSS framework, including how gssproxy and rpc.gssd inter-
operate, see the GSSD flow description.

8.8.2.1. NFS Security with NFSv4

NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model,
because of the Microsoft Windows NT model's features and wide deployment.

Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting
file systems. The MOUNT protocol presented a security risk because of the way the protocol processed
file handles.

8.8.3. File Permissions


Once the NFS file system is mounted as either read or read and write by a remote host, the only
protection each shared file has is its permissions. If two users that share the same user ID value mount
the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on
the client system can use the su - command to access any files on the NFS share.

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat
recommends that this feature is kept enabled.

By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone
accessing the NFS share as the root user on their local machine to nobody. Root squashing is
controlled by the default option root_squash; for more information about this option, refer to
Section 8.7.1, “The /etc/exports Configuration File”. If possible, never disable root squashing.

When exporting an NFS share as read-only, consider using the all_squash option. This option makes
every user accessing the exported file system take the user ID of the nfsnobody user.

8.9. NFS AND RPCBIND

NOTE

The following section only applies to NFSv3 implementations that require the rpcbind
service for backward compatibility.

For information on how to configure an NFSv4-only server, which does not need
rpcbind, see Section 8.7.7, “Configuring an NFSv4-only Server”.

The rpcbind[1] utility maps RPC services to the ports on which they listen. RPC processes notify
rpcbind when they start, registering the ports they are listening on and the RPC program numbers they
expect to serve. The client system then contacts rpcbind on the server with a particular RPC program
number. The rpcbind service redirects the client to the proper port number so it can communicate with
the requested service.

Because RPC-based services rely on rpcbind to make all connections with incoming client requests,
rpcbind must be available before any of these services start.

The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind
affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the
NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the
precise syntax for these rules.

8.9.1. Troubleshooting NFS and rpcbind

Because rpcbind[1] provides coordination between RPC services and the port numbers used to
communicate with them, it is useful to view the status of current RPC services using rpcbind when
troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC
program number, a version number, and an IP protocol type (TCP or UDP).

To make sure the proper NFS RPC-based services are enabled for rpcbind, use the following
command:

# rpcinfo -p

Example 8.7. rpcinfo -p command output

The following is sample output from this command:

program vers proto port service


100021 1 udp 32774 nlockmgr
100021 3 udp 32774 nlockmgr
100021 4 udp 32774 nlockmgr
100021 1 tcp 34437 nlockmgr
100021 3 tcp 34437 nlockmgr
100021 4 tcp 34437 nlockmgr
100011 1 udp 819 rquotad
100011 2 udp 819 rquotad
100011 1 tcp 822 rquotad
100011 2 tcp 822 rquotad
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100005 1 udp 836 mountd
100005 1 tcp 839 mountd
100005 2 udp 836 mountd
100005 2 tcp 839 mountd
100005 3 udp 836 mountd
100005 3 tcp 839 mountd


If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from
clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output,
restarting NFS causes the service to correctly register with rpcbind and begin working.

For more information and a list of options on rpcinfo, see its man page.

8.10. NFS REFERENCES


Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in
this chapter, are available for exporting or mounting NFS shares. For more information, see the following
sources:

Installed Documentation
man mount — Contains a comprehensive look at mount options for both NFS server and client
configurations.

man fstab — Provides detail for the format of the /etc/fstab file used to mount file systems
at boot-time.

man nfs — Provides details on NFS-specific file system export and mount options.

man exports — Shows common options used in the /etc/exports file when exporting NFS
file systems.

Useful Websites
http://linux-nfs.org — The current site for developers where project status updates can be
viewed.

http://nfs.sourceforge.net/ — The old home for developers which still contains a lot of useful
information.

http://www.citi.umich.edu/projects/nfsv4/linux/ — An NFSv4 for Linux 2.6 kernel resource.

http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.111.4086 — An excellent whitepaper on


the features and enhancements of the NFS Version 4 protocol.

Related Books
Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates
— Makes an excellent reference guide for the many different NFS export and mount options
available.

NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company — Provides


comparisons of NFS to other network file systems and shows, in detail, how NFS communication
occurs.

[1] The rpcbind service replaces portmap, which was used in previous versions of Red Hat Enterprise Linux to
map RPC program numbers to IP address port number combinations. For more information, refer to Section 8.1.1,
“Required Services”.


CHAPTER 9. SERVER MESSAGE BLOCK (SMB)


The Server Message Block (SMB) protocol implements an application-layer network protocol used to
access resources on a server, such as file shares and shared printers. On Microsoft Windows, SMB is
implemented by default. If you run Red Hat Enterprise Linux, use Samba to provide SMB shares and the
cifs-utils utility to mount SMB shares from a remote server.

NOTE

In the context of SMB, you sometimes read about the Common Internet File System
(CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocols are
supported, and the kernel module and utilities involved in mounting SMB and CIFS shares
both use the name cifs.

9.1. PROVIDING SMB SHARES


See the Samba section in the Red Hat System Administrator's Guide.

9.2. MOUNTING AN SMB SHARE


On Red Hat Enterprise Linux, the cifs.ko file system module of the kernel provides support for the
SMB protocol. However, to mount and work with SMB shares, you must also install the cifs-utils
package:

# yum install cifs-utils

The cifs-utils package provides utilities to:

Mount SMB and CIFS shares

Manage NT Lan Manager (NTLM) credentials in the kernel's keyring

Set and display Access Control Lists (ACL) in a security descriptor on SMB and CIFS shares

9.2.1. Supported SMB Protocol Versions


The cifs.ko kernel module supports the following SMB protocol versions:

SMB 1

SMB 2.0

SMB 2.1

SMB 3.0

NOTE

Depending on the protocol version, not all SMB features are implemented.

9.2.1.1. UNIX Extensions Support


Samba uses the CAP_UNIX capability bit in the SMB protocol to provide the UNIX extensions feature.
These extensions are also supported by the cifs.ko kernel module. However, both Samba and the
kernel module support UNIX extensions only in the SMB 1 protocol.

To use UNIX extensions:

1. Set the server min protocol option in the [global] section in the
/etc/samba/smb.conf file to NT1. This is the default on Samba servers.

2. Mount the share using the SMB 1 protocol by providing the -o vers=1.0 option to the mount
command. For example:

mount -t cifs -o vers=1.0,username=user_name //server_name/share_name /mnt/

By default, the kernel module uses SMB 2 or the highest later protocol version supported by the
server. Passing the -o vers=1.0 option to the mount command forces the kernel module to use
the SMB 1 protocol, which is required for using UNIX extensions.

To verify if UNIX extensions are enabled, display the options of the mounted share:

# mount
...
//server/share on /mnt type cifs (...,unix,...)

If the unix entry is displayed in the list of mount options, UNIX extensions are enabled.

9.2.2. Manually Mounting an SMB Share


To manually mount an SMB share, use the mount utility with the -t cifs parameter:

# mount -t cifs -o username=user_name //server_name/share_name /mnt/
Password for user_name@//server_name/share_name: ********

In the -o options parameter, you can specify options that will be used to mount the share. For details,
see Section 9.2.6, “Frequently Used Mount Options” and the OPTIONS section in the mount.cifs(8) man
page.

Example 9.1. Mounting a Share Using an Encrypted SMB 3.0 Connection

To mount the \\server\example\ share as the DOMAIN\Administrator user over an
encrypted SMB 3.0 connection into the /mnt/ directory:

# mount -t cifs -o username=DOMAIN\Administrator,seal,vers=3.0 //server/example /mnt/
Password for user_name@//server_name/share_name: ********

9.2.3. Mounting an SMB Share Automatically When the System Boots


To mount an SMB share automatically when the system boots, add an entry for the share to the
/etc/fstab file. For example:


//server_name/share_name /mnt cifs credentials=/root/smb.cred 0 0

IMPORTANT

To enable the system to mount a share automatically, you must store the user name,
password, and domain name in a credentials file. For details, see Section 9.2.4,
“Authenticating To an SMB Share Using a Credentials File”.

In the fourth field of the /etc/fstab file, specify mount options, such as the path to the credentials file.
For details, see Section 9.2.6, “Frequently Used Mount Options” and the OPTIONS section in the
mount.cifs(8) man page.

To verify that the share mounts successfully, enter:

# mount /mnt/

9.2.4. Authenticating To an SMB Share Using a Credentials File


In certain situations, administrators want to mount a share without entering the user name and password.
To implement this, create a credentials file. For example:

Procedure 9.1. Creating a Credentials File

1. Create a file, such as ~/smb.cred, and specify the user name, password, and domain name
in that file:

username=user_name
password=password
domain=domain_name

2. Set the permissions to only allow the owner to access the file:

# chown user_name ~/smb.cred


# chmod 600 ~/smb.cred

You can now pass the credentials=file_name mount option to the mount utility or use it in the
/etc/fstab file to mount the share without being prompted for the user name and password.

9.2.5. Performing a Multi-user SMB Mount


The credentials you provide to mount a share determine the access permissions on the mount point by
default. For example, if you use the DOMAIN\example user when you mount a share, all operations on
the share will be executed as this user, regardless of which local user performs the operation.

However, in certain situations, the administrator wants to mount a share automatically when the system
boots, but users should perform actions on the share's content using their own credentials. The
multiuser mount option lets you configure this scenario.


IMPORTANT

To use multiuser, you must additionally set the sec=security_type mount option to
a security type which supports providing credentials in a non-interactive way, such as
krb5 or the ntlmssp option with a credentials file. See the section called “Accessing a
Share as a User”.

The root user mounts the share using the multiuser option and an account that has minimal access
to the contents of the share. Regular users can then provide their user name and password to the current
session's kernel keyring using the cifscreds utility. If the user accesses the content of the mounted
share, the kernel uses the credentials from the kernel keyring instead of the one initially used to mount
the share.

Mounting a Share with the multiuser Option

To mount a share automatically with the multiuser option when the system boots:

Procedure 9.2. Creating an /etc/fstab File Entry with the multiuser Option

1. Create the entry for the share in the /etc/fstab file. For example:

//server_name/share_name /mnt cifs multiuser,sec=ntlmssp,credentials=/root/smb.cred 0 0

2. Mount the share:

# mount /mnt/

If you do not want to mount the share automatically when the system boots, mount it manually by
passing -o multiuser,sec=security_type to the mount command. For details about mounting an
SMB share manually, see Section 9.2.2, “Manually Mounting an SMB Share”.

Verifying if an SMB Share is Mounted with the multiuser Option

To verify if a share is mounted with the multiuser option:

# mount
...
//server_name/share_name on /mnt type cifs (sec=ntlmssp,multiuser,...)

Accessing a Share as a User

If an SMB share is mounted with the multiuser option, users can provide their credentials for the
server to the kernel's keyring:

# cifscreds add -u SMB_user_name server_name
Password: ********

Now, when the user performs operations in the directory that contains the mounted SMB share, the
server applies the file system permissions for this user, instead of the one initially used when the share
was mounted.


NOTE

Multiple users can perform operations using their own credentials on the mounted share
at the same time.

9.2.6. Frequently Used Mount Options


When you mount an SMB share, the mount options determine:

How the connection will be established with the server. For example, which SMB protocol
version is used when connecting to the server.

How the share will be mounted into the local file system. For example, if the system overrides
the remote file and directory permissions to enable multiple local users to access the content on
the server.

To set multiple options in the fourth field of the /etc/fstab file or in the -o parameter of a mount
command, separate them with commas. For example, see Procedure 9.2, “Creating an /etc/fstab
File Entry with the multiuser Option”.

The following list gives an overview of frequently used mount options:

Table 9.1. Frequently Used Mount Options

Option Description

credentials=file_name Sets the path to the credentials file. See Section 9.2.4, “Authenticating To an
SMB Share Using a Credentials File”.

dir_mode=mode Sets the directory mode if the server does not support CIFS UNIX
extensions.

file_mode=mode Sets the file mode if the server does not support CIFS UNIX extensions.

password=password Sets the password used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.

seal Enables encryption support for connections using SMB 3.0 or a later
protocol version. Therefore, use seal together with the vers mount option
set to 3.0 or later. See Example 9.1, “Mounting a Share Using an
Encrypted SMB 3.0 Connection”.

sec=security_mode Sets the security mode, such as ntlmsspi, to enable NTLMv2 password
hashing with packet signing enabled. For a list of supported values, see the
option's description in the mount.cifs(8) man page.

If the server does not support the ntlmv2 security mode, use
sec=ntlmssp , which is the default. For security reasons, do not use the
insecure ntlm security mode.

username=user_name Sets the user name used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.


vers=SMB_protocol_version Sets the SMB protocol version used for the communication with the server.

For a complete list, see the OPTIONS section in the mount.cifs(8) man page.
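
As a reference for the credentials and password options above, a credentials file uses the key=value format described in the mount.cifs(8) man page. The following is a minimal sketch with placeholder values:

username=SMB_user_name
password=password
domain=domain_name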


CHAPTER 10. FS-CACHE


FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over
the network and cache it on local disk. This helps minimize network traffic for users accessing data from
a file system mounted over the network (for example, NFS).

The following diagram is a high-level illustration of how FS-Cache works:

Figure 10.1. FS-Cache Overview

FS-Cache is designed to be as transparent as possible to the users and administrators of a system.


Unlike cachefs on Solaris, FS-Cache allows a file system on a server to interact directly with a client's
local cache without creating an overmounted file system. With NFS, a mount option instructs the client to
mount the NFS share with FS-Cache enabled.

FS-Cache does not alter the basic operation of a file system that works over the network - it merely
provides that file system with a persistent place in which it can cache data. For instance, a client can still
mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that
won't fit into the cache (whether individually or collectively) as files can be partially cached and do not
have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the
client file system driver.


To provide caching services, FS-Cache needs a cache back end. A cache back end is a storage driver
configured to provide caching services (i.e. cachefiles). In this case, FS-Cache requires a mounted
block-based file system that supports bmap and extended attributes (e.g. ext3) as its cache back end.

FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared
file system's driver must be altered to allow interaction with FS-Cache, data storage/retrieval, and
metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file
system to support persistence: indexing keys to match file system objects to cache objects, and
coherency data to determine whether the cache objects are still valid.

NOTE

In Red Hat Enterprise Linux 7, the cachefilesd package is not installed by default and
needs to be installed manually.

10.1. PERFORMANCE GUARANTEE


FS-Cache does not guarantee increased performance; however, it ensures consistent performance by
avoiding network congestion. Using a cache back end incurs a performance penalty: for example,
cached NFS shares add disk accesses to cross-network lookups. While FS-Cache tries to be as
asynchronous as possible, there are synchronous paths (e.g. reads) where this isn't possible.

For example, using FS-Cache to cache an NFS share between two computers over an otherwise
unladen GigE network will not demonstrate any performance improvement in file access. Rather, NFS requests would be satisfied faster from server memory than from local disk.

The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to
cache NFS traffic, for instance, it may slow the client down a little, but massively reduce the network and
server loading by satisfying read requests locally without consuming network bandwidth.

10.2. SETTING UP A CACHE


Currently, Red Hat Enterprise Linux 7 only provides the cachefiles caching back end. The
cachefilesd daemon initiates and manages cachefiles. The /etc/cachefilesd.conf file
controls how cachefiles provides caching services.

The first setting to configure in a cache back end is which directory to use as a cache. To configure this,
use the following parameter:

$ dir /path/to/cache

Typically, the cache back end directory is set in /etc/cachefilesd.conf as


/var/cache/fscache, as in:

$ dir /var/cache/fscache

If you want to change the cache back end directory, the SELinux context must be the same as for
/var/cache/fscache:

# semanage fcontext -a -e /var/cache/fscache /path/to/cache


# restorecon -Rv /path/to/cache

Replace /path/to/cache with the directory name while setting up cache.


NOTE

If the given commands for setting the SELinux context do not work, use the following
commands:

# semanage permissive -a cachefilesd_t


# semanage permissive -a cachefiles_kernel_t

FS-Cache will store the cache in the file system that hosts /path/to/cache. On a laptop, it is
advisable to use the root file system (/) as the host file system, but for a desktop machine it would be
more prudent to mount a disk partition specifically for the cache.

File systems that support functionalities required by FS-Cache cache back end include the Red Hat
Enterprise Linux 7 implementations of the following file systems:

ext3 (with extended attributes enabled)

ext4

Btrfs

XFS

The host file system must support user-defined extended attributes; FS-Cache uses these attributes to
store coherency maintenance information. To enable user-defined extended attributes for ext3 file
systems (i.e. device), use:

# tune2fs -o user_xattr /dev/device

Alternatively, extended attributes for a file system can be enabled at mount time, as in:

# mount /dev/device /path/to/cache -o user_xattr

The cache back end works by maintaining a certain amount of free space on the partition hosting the
cache. It grows and shrinks the cache in response to other elements of the system using up free space,
making it safe to use on the root file system (for example, on a laptop). FS-Cache sets defaults on this
behavior, which can be configured via cache cull limits. For more information about configuring cache
cull limits, refer to Section 10.4, “Setting Cache Cull Limits”.

Once the configuration file is in place, start up the cachefilesd service:

# systemctl start cachefilesd

To configure cachefilesd to start at boot time, execute the following command as root:

# systemctl enable cachefilesd

10.3. USING THE CACHE WITH NFS


NFS will not use the cache unless explicitly instructed. To configure an NFS mount to use FS-Cache,
include the -o fsc option to the mount command:


# mount nfs-share:/ /mount/point -o fsc

All access to files under /mount/point will go through the cache, unless the file is opened for direct I/O
or writing. For more information, see Section 10.3.2, “Cache Limitations with NFS”. NFS indexes cache contents using the NFS file handle, not the file name, which means that hard-linked files share the cache correctly.
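
To have the cached NFS share mounted automatically at boot, the fsc option can also be added to the corresponding /etc/fstab entry. The following is a minimal sketch with placeholder names:

nfs-share:/ /mount/point nfs defaults,fsc 0 0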

Caching is supported in versions 2, 3, and 4 of NFS. However, each version uses different branches for
caching.

10.3.1. Cache Sharing


There are several potential issues to do with NFS cache sharing. Because the cache is persistent, blocks
of data in the cache are indexed on a sequence of four keys:

Level 1: Server details

Level 2: Some mount options; security type; FSID; uniquifier

Level 3: File Handle

Level 4: Page number in file

To avoid coherency management problems between superblocks, all NFS superblocks that wish to
cache data have unique Level 2 keys. Normally, two NFS mounts with the same source volume and options
share a superblock, and thus share the caching, even if they mount different directories within that
volume.

Example 10.1. Cache Sharing

Take the following two mount commands:

mount home0:/disk0/fred /home/fred -o fsc

mount home0:/disk0/jim /home/jim -o fsc

Here, /home/fred and /home/jim likely share the superblock as they have the same options,
especially if they come from the same volume/partition on the NFS server (home0). Now, consider
the next two subsequent mount commands:

mount home0:/disk0/fred /home/fred -o fsc,rsize=230

mount home0:/disk0/jim /home/jim -o fsc,rsize=231

In this case, /home/fred and /home/jim will not share the superblock as they have different
network access parameters, which are part of the Level 2 key. The same goes for the following mount
sequence:

mount home0:/disk0/fred /home/fred1 -o fsc,rsize=230

mount home0:/disk0/fred /home/fred2 -o fsc,rsize=231

Here, the contents of the two subtrees (/home/fred1 and /home/fred2) will be cached twice.


Another way to avoid superblock sharing is to suppress it explicitly with the nosharecache
parameter. Using the same example:

mount home0:/disk0/fred /home/fred -o nosharecache,fsc

mount home0:/disk0/jim /home/jim -o nosharecache,fsc

However, in this case only one of the superblocks is permitted to use cache since there is nothing to
distinguish the Level 2 keys of home0:/disk0/fred and home0:/disk0/jim. To address this,
add a unique identifier on at least one of the mounts, i.e. fsc=unique-identifier. For example:

mount home0:/disk0/fred /home/fred -o nosharecache,fsc

mount home0:/disk0/jim /home/jim -o nosharecache,fsc=jim

Here, the unique identifier jim is added to the Level 2 key used in the cache for /home/jim.

10.3.2. Cache Limitations with NFS


Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is
because this type of access must be direct to the server.

Opening a file from a shared file system for writing will not work on NFS version 2 and 3. The
protocols of these versions do not provide sufficient coherency management information for the
client to detect a concurrent write to the same file from another client.

Opening a file from a shared file system for either direct I/O or writing flushes the cached copy of
the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing.

Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache
directories, symlinks, device files, FIFOs and sockets.

10.4. SETTING CACHE CULL LIMITS


The cachefilesd daemon works by caching remote data from shared file systems to free space on the
disk. This could potentially consume all available free space, which could be bad if the disk also housed
the root partition. To control this, cachefilesd tries to maintain a certain amount of free space by
discarding old objects (i.e. accessed less recently) from the cache. This behavior is known as cache
culling.

Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the
underlying file system. There are six limits controlled by settings in /etc/cachefilesd.conf:

brun N% (percentage of blocks) , frun N% (percentage of files)


If the amount of free space and the number of available files in the cache rises above both these
limits, then culling is turned off.

bcull N% (percentage of blocks), fcull N% (percentage of files)


If the amount of available space or the number of files in the cache falls below either of these limits,
then culling is started.

bstop N% (percentage of blocks), fstop N% (percentage of files)


If the amount of available space or the number of available files in the cache falls below either of
these limits, then no further allocation of disk space or files is permitted until culling has raised things
above these limits again.

The default value of N for each setting is as follows:

brun/frun - 10%

bcull/fcull - 7%

bstop/fstop - 3%

When configuring these settings, the following must hold true:

0 ≤ bstop < bcull < brun < 100

0 ≤ fstop < fcull < frun < 100

These are the percentages of available space and available files and do not appear as 100 minus the
percentage displayed by the df program.
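
Putting these settings together, an /etc/cachefilesd.conf that uses the default cache directory and explicitly states the default cull limits listed above might look like the following sketch:

dir /var/cache/fscache
brun 10%
frun 10%
bcull 7%
fcull 7%
bstop 3%
fstop 3%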

IMPORTANT

Culling depends on both bxxx and fxxx pairs simultaneously; they cannot be treated
separately.

10.5. STATISTICAL INFORMATION


FS-Cache also keeps track of general statistical information. To view this information, use:

# cat /proc/fs/fscache/stats

FS-Cache statistics includes information on decision points and object counters. For more information,
see the following kernel document:

/usr/share/doc/kernel-doc-version/Documentation/filesystems/caching/fscache.txt

10.6. FS-CACHE REFERENCES


For more information on cachefilesd and how to configure it, see man cachefilesd and man
cachefilesd.conf. The following kernel documents also provide additional information:

/usr/share/doc/cachefilesd-version-number/README

/usr/share/man/man5/cachefilesd.conf.5.gz

/usr/share/man/man8/cachefilesd.8.gz

For general information about FS-Cache, including details on its design constraints, available statistics,
and capabilities, see the following kernel document: /usr/share/doc/kernel-doc-version/Documentation/filesystems/caching/fscache.txt


PART II. STORAGE ADMINISTRATION


The Storage Administration section starts with storage considerations for Red Hat Enterprise Linux 7. Instructions regarding partitions, logical volume management, and swap partitions follow. Disk quotas and RAID systems are next, followed by the mount command, volume_key, and ACLs. SSD tuning, write barriers, I/O limits, and diskless systems follow. The large chapter on online storage is next, and device mapper multipathing and virtual storage finish the part.


CHAPTER 11. STORAGE CONSIDERATIONS DURING INSTALLATION

Many storage device and file system settings can only be configured at install time. Other settings, such
as file system type, can only be modified up to a certain point without requiring a reformat. As such, it is
prudent that you plan your storage configuration accordingly before installing Red Hat
Enterprise Linux 7.

This chapter discusses several considerations when planning a storage configuration for your system.
For installation instructions (including storage configuration during installation), see the Installation Guide
provided by Red Hat.

For information on what Red Hat officially supports with regards to size and storage limits, see the article http://www.redhat.com/resourcelibrary/articles/articles-red-hat-enterprise-linux-6-technology-capabilities-and-limits.

11.1. SPECIAL CONSIDERATIONS


This section enumerates several issues and factors to consider for specific storage configurations.

Separate Partitions for /home, /opt, /usr/local


If it is likely that you will upgrade your system in the future, place /home, /opt, and /usr/local on a
separate device. This allows you to reformat the devices or file systems containing the operating system
while preserving your user and application data.

DASD and zFCP Devices on IBM System Z


On the IBM System Z platform, DASD and zFCP devices are configured via the Channel Command
Word (CCW) mechanism. CCW paths must be explicitly added to the system and then brought online.
For DASD devices, this means listing the device numbers (or device number ranges) as the DASD=
parameter at the boot command line or in a CMS configuration file.

For zFCP devices, you must list the device number, logical unit number (LUN), and world wide port name
(WWPN). Once the zFCP device is initialized, it is mapped to a CCW path. The FCP_x= lines on the boot
command line (or in a CMS configuration file) allow you to specify this information for the installer.

Encrypting Block Devices Using LUKS


Formatting a block device for encryption using LUKS/dm-crypt destroys any existing formatting on that
device. As such, you should decide which devices to encrypt (if any) before the new system's storage
configuration is activated as part of the installation process.

Stale BIOS RAID Metadata


Moving a disk from a system configured for firmware RAID without removing the RAID metadata from the
disk can prevent Anaconda from correctly detecting the disk.



WARNING

Removing/deleting RAID metadata from disk could potentially destroy any stored
data. Red Hat recommends that you back up your data before proceeding.

To delete RAID metadata from the disk, use the following command:

dmraid -r -E /device/

For more information about managing RAID devices, see man dmraid and Chapter 18, Redundant
Array of Independent Disks (RAID).

iSCSI Detection and Configuration


For plug and play detection of iSCSI drives, configure them in the firmware of an iBFT boot-capable
network interface card (NIC). CHAP authentication of iSCSI targets is supported during installation.
However, iSNS discovery is not supported during installation.

FCoE Detection and Configuration


For plug and play detection of Fibre Channel over Ethernet (FCoE) drives, configure them in the firmware
of an EDD boot-capable NIC.

DASD
Direct-access storage devices (DASD) cannot be added or configured during installation. Such devices
are specified in the CMS configuration file.

Block Devices with DIF/DIX Enabled


DIF/DIX is a hardware checksum feature provided by certain SCSI host bus adapters and block devices.
When DIF/DIX is enabled, errors occur if the block device is used as a general-purpose block device.
Buffered I/O or mmap(2)-based I/O will not work reliably, as there are no interlocks in the buffered write
path to prevent buffered data from being overwritten after the DIF/DIX checksum has been calculated.

This causes the I/O to later fail with a checksum error. This problem is common to all block device (or file
system-based) buffered I/O or mmap(2) I/O, so it is not possible to work around these errors caused by
overwrites.

As such, block devices with DIF/DIX enabled should only be used with applications that use O_DIRECT.
Such applications should use the raw block device. Alternatively, it is also safe to use the XFS file
system on a DIF/DIX enabled block device, as long as only O_DIRECT I/O is issued through the file
system. XFS is the only file system that does not fall back to buffered I/O when doing certain allocation
operations.

The responsibility for ensuring that the I/O data does not change after the DIF/DIX checksum has been
computed always lies with the application, so only applications designed for use with O_DIRECT I/O and
DIF/DIX hardware should use DIF/DIX.


CHAPTER 12. FILE SYSTEM CHECK


File systems may be checked for consistency, and optionally repaired, with file system-specific
userspace tools. These tools are often referred to as fsck tools, where fsck is a shortened version of
file system check.

NOTE

These file system checkers only guarantee metadata consistency across the file system;
they have no awareness of the actual data contained within the file system and are not
data recovery tools.

File system inconsistencies can occur for various reasons, including but not limited to hardware errors,
storage administration errors, and software bugs.

Before modern metadata-journaling file systems became common, a file system check was required any
time a system crashed or lost power. This was because a file system update could have been
interrupted, leading to an inconsistent state. As a result, a file system check is traditionally run on each
file system listed in /etc/fstab at boot-time. For journaling file systems, this is usually a very short
operation, because the file system's metadata journaling ensures consistency even after a crash.

However, there are times when a file system inconsistency or corruption may occur, even for journaling
file systems. When this happens, the file system checker must be used to repair the file system. The
following provides best practices and other useful information when performing this procedure.

IMPORTANT

It is possible to disable the file system check at boot by setting the sixth field in /etc/fstab to 0. However, Red Hat does not recommend doing so unless the machine does not boot, the file system is extremely large, or the file system is on remote storage.

12.1. BEST PRACTICES FOR FSCK


Generally, running the file system check and repair tool can be expected to automatically repair at least
some of the inconsistencies it finds. In some cases, severely damaged inodes or directories may be
discarded if they cannot be repaired. Significant changes to the file system may occur. To ensure that
unexpected or undesirable changes are not permanently made, perform the following precautionary
steps:

Dry run
Most file system checkers have a mode of operation which checks but does not repair the file system.
In this mode, the checker prints any errors that it finds and actions that it would have taken, without
actually modifying the file system.

NOTE

Later phases of consistency checking may print extra errors as the checker discovers inconsistencies that would have been fixed in earlier phases had it been running in repair mode.

Operate first on a file system image


Most file systems support the creation of a metadata image, a sparse copy of the file system which contains only metadata. Because file system checkers operate only on metadata, such an image can
be used to perform a dry run of an actual file system repair, to evaluate what changes would actually
be made. If the changes are acceptable, the repair can then be performed on the file system itself.

NOTE

Severely damaged file systems may cause problems with metadata image creation.

Save a file system image for support investigations


A pre-repair file system metadata image can often be useful for support investigations if there is a
possibility that the corruption was due to a software bug. Patterns of corruption present in the pre-
repair image may aid in root-cause analysis.

Operate only on unmounted file systems


A file system repair must be run only on unmounted file systems. The tool must have sole access to
the file system or further damage may result. Most file system tools enforce this requirement in repair
mode, although some only support check-only mode on a mounted file system. If check-only mode is
run on a mounted file system, it may find spurious errors that would not be found when run on an
unmounted file system.

Disk errors
File system check tools cannot repair hardware problems. A file system must be fully readable and
writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the
file system must first be moved to a good disk, for example with the dd(8) utility.

12.2. FILE SYSTEM-SPECIFIC INFORMATION FOR FSCK

12.2.1. ext2, ext3, and ext4


All of these file systems use the e2fsck binary to perform file system checks and repairs. The file names
fsck.ext2, fsck.ext3, and fsck.ext4 are hardlinks to this same binary. These binaries are run
automatically at boot time and their behavior differs based on the file system being checked and the state
of the file system.

A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and
for ext4 file systems without a journal.

For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the
binary exits. This is the default action as journal replay ensures a consistent file system after a crash.

If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file
system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a
full check after replaying the journal (if present).

e2fsck may ask for user input during the run if the -p option is not specified. The -p option tells
e2fsck to automatically do all repairs that may be done safely. If user intervention is required, e2fsck
indicates the unfixed problem in its output and reflects this status in the exit code.

Commonly used e2fsck run-time options include:

-n
No-modify mode. Check-only operation.

-b superblock
Specify the block number of an alternate superblock if the primary one is damaged.

-f
Force full check even if the superblock has no recorded errors.

-j journal-dev
Specify the external journal device, if any.

-p
Automatically repair or "preen" the file system with no user input.

-y
Assume an answer of "yes" to all questions.

All options for e2fsck are specified in the e2fsck(8) manual page.
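
For example, to force a full, check-only dry run on an unmounted file system, the -f and -n options can be combined (the device name is a placeholder):

# e2fsck -fn /dev/device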

The following five basic phases are performed by e2fsck while running:

1. Inode, block, and size checks.

2. Directory structure checks.

3. Directory connectivity checks.

4. Reference count checks.

5. Group summary info checks.

The e2image(8) utility can be used to create a metadata image prior to repair for diagnostic or testing
purposes. The -r option should be used for testing purposes in order to create a sparse file of the same
size as the file system itself. e2fsck can then operate directly on the resulting file. The -Q option should
be specified if the image is to be archived or provided for diagnostic purposes. This creates a more compact file
format suitable for transfer.
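
As a sketch of this workflow, using a placeholder device and image file name, a sparse metadata image can be created with -r and then checked in place of the real device:

# e2image -r /dev/device /tmp/device.e2i
# e2fsck -fn /tmp/device.e2i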

12.2.2. XFS
No repair is performed automatically at boot time. To initiate a file system check or repair, use the
xfs_repair tool.

NOTE

Although an fsck.xfs binary is present in the xfsprogs package, this is present only to
satisfy initscripts that look for an fsck.file system binary at boot time. fsck.xfs
immediately exits with an exit code of 0.

Older xfsprogs packages contain an xfs_check tool. This tool is very slow and does not
scale well for large file systems. As such, it has been deprecated in favor of xfs_repair
-n.


A clean log on a file system is required for xfs_repair to operate. If the file system was not cleanly
unmounted, it should be mounted and unmounted prior to using xfs_repair. If the log is corrupt and
cannot be replayed, the -L option may be used to zero the log.

IMPORTANT

The -L option must only be used if the log cannot be replayed. The option discards all
metadata updates in the log and results in further inconsistencies.

It is possible to run xfs_repair in a dry run, check-only mode by using the -n option. No changes will
be made to the file system when this option is specified.
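
For example, a cautious sequence on an unmounted XFS file system, using placeholder device and mount point names, is to replay the log with a mount and unmount cycle, perform a dry run, and only then repair:

# mount /dev/device /mnt
# umount /dev/device
# xfs_repair -n /dev/device
# xfs_repair /dev/device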

xfs_repair takes very few options. Commonly used options include:

-n
No modify mode. Check-only operation.

-L
Zero metadata log. Use only if log cannot be replayed with mount.

-m maxmem
Limit memory used during run to maxmem MB. 0 can be specified to obtain a rough estimate of the
minimum memory required.

-l logdev
Specify the external log device, if present.

All options for xfs_repair are specified in the xfs_repair(8) manual page.

The following eight basic phases are performed by xfs_repair while running:

1. Inode and inode blockmap (addressing) checks.

2. Inode allocation map checks.

3. Inode size checks.

4. Directory checks.

5. Pathname checks.

6. Link count checks.

7. Freemap checks.

8. Super block checks.

For more information, see the xfs_repair(8) manual page.

xfs_repair is not interactive. All operations are performed automatically with no input from the user.


If it is desired to create a metadata image prior to repair for diagnostic or testing purposes, the
xfs_metadump(8) and xfs_mdrestore(8) utilities may be used.

12.2.3. Btrfs

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

The btrfsck tool is used to check and repair btrfs file systems. This tool is still in early development
and may not detect or repair all types of file system corruption.

By default, btrfsck does not make changes to the file system; that is, it runs check-only mode by
default. If repairs are desired the --repair option must be specified.
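
For example, to check a Btrfs file system first and repair it only if problems are reported (the device name is a placeholder):

# btrfsck /dev/device
# btrfsck --repair /dev/device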

The following three basic phases are performed by btrfsck while running:

1. Extent checks.

2. File system root checks.

3. Root reference count checks.

The btrfs-image(8) utility can be used to create a metadata image prior to repair for diagnostic or
testing purposes.


CHAPTER 13. PARTITIONS


With the parted utility, you can:

View the existing partition table.

Change the size of existing partitions.

Add partitions from free space or additional hard drives.

The parted package is installed by default on Red Hat Enterprise Linux 7. To start parted, log in as root
and enter the following command:

# parted /dev/sda

Replace /dev/sda with the device name for the drive to configure.

Manipulating Partitions on Devices in Use


For a device to not be in use, none of the partitions on the device can be mounted, and no swap space
on the device can be enabled.

If you want to remove or resize a partition, the device on which that partition resides must not be in use.

It is possible to create a new partition on a device that is in use, but this is not recommended.

Modifying the Partition Table


Modifying the partition table while another partition on the same disk is in use is generally not
recommended because the kernel is not able to reread the partition table. As a consequence, changes
are not applied to a running system. In the described situation, reboot the system, or use the following
command to make the system register new or modified partitions:

# partx --update --nr partition-number disk

The easiest way to modify disks that are currently in use is:

1. Boot the system in rescue mode if the partitions on the disk are impossible to unmount, for
example in the case of a system disk.

2. When prompted to mount the file system, select Skip.

If the drive does not contain any partitions in use, that is there are no system processes that use or lock
the file system from being unmounted, you can unmount the partitions with the umount command and
turn off all the swap space on the hard drive with the swapoff command.

To see commonly used parted commands, see Table 13.1, “parted Commands”.

IMPORTANT

Do not use the parted utility to create file systems. Use the mkfs tool instead.

Table 13.1. parted Commands


Command Description

help Display list of available commands

mklabel label Create a disk label for the partition table

mkpart part-type [fs-type] start-mb end-mb Make a partition without creating a new file system

name minor-num name Name the partition for Mac and PC98 disklabels only

print Display the partition table

quit Quit parted

rescue start-mb end-mb Rescue a lost partition from start-mb to end-mb

rm minor-num Remove the partition

select device Select a different device to configure

set minor-num flag state Set the flag on a partition; state is either on or off

toggle [NUMBER [FLAG]] Toggle the state of FLAG on partition NUMBER

unit UNIT Set the default unit to UNIT

13.1. VIEWING THE PARTITION TABLE


To view the partition table:

1. Start parted.

2. Use the following command to view the partition table:

(parted) print

A table similar to the following one appears:

Example 13.1. Partition Table

Model: ATA ST3160812AS (scsi)


Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size    Type      File system  Flags
1       32.3kB  107MB  107MB   primary   ext3         boot
2       107MB   105GB  105GB   primary   ext3
3       105GB   107GB  2147MB  primary   linux-swap
4       107GB   160GB  52.9GB  extended               root
5       107GB   133GB  26.2GB  logical   ext3
6       133GB   133GB  107MB   logical   ext3
7       133GB   160GB  26.6GB  logical   lvm

Following is the description of the partition table:

Model: ATA ST3160812AS (scsi): explains the disk type, manufacturer, model number, and
interface.

Disk /dev/sda: 160GB: displays the disk label type.

In the partition table, Number is the partition number. For example, the partition with minor
number 1 corresponds to /dev/sda1. The Start and End values are in megabytes. Valid
Types are metadata, free, primary, extended, or logical. The File system is the file system
type. The Flags column lists the flags set for the partition. Available flags are boot, root, swap,
hidden, raid, lvm, or lba.

The File system in the partition table can be any of the following:

ext2

ext3

fat16

fat32

hfs

jfs

linux-swap

ntfs

reiserfs

hp-ufs

sun-ufs

xfs

If a File system of a device shows no value, this means that its file system type is unknown.


NOTE

To select a different device without having to restart parted, use the following command
and replace /dev/sda with the device you want to select:

(parted) select /dev/sda

It allows you to view or configure the partition table of a device.

13.2. CREATING A PARTITION


WARNING

Do not attempt to create a partition on a device that is in use.

Procedure 13.1. Creating a Partition

1. Before creating a partition, boot into rescue mode, or unmount any partitions on the device and
turn off any swap space on the device.

2. Start parted:

# parted /dev/sda

Replace /dev/sda with the device name on which you want to create the partition.

3. View the current partition table to determine if there is enough free space:

(parted) print

If there is not enough free space, you can resize an existing partition. For more information, see
Section 13.5, “Resizing a Partition with fdisk”.

From the partition table, determine the start and end points of the new partition and what partition
type it should be. You can only have four primary partitions, with no extended partition, on a
device. If you need more than four partitions, you can have three primary partitions, one
extended partition, and multiple logical partitions within the extended. For an overview of disk
partitions, see the appendix An Introduction to Disk Partitions in the Red Hat Enterprise Linux 7
Installation Guide.

4. To create partition:

(parted) mkpart part-type name fs-type start end

Replace part-type with primary, logical, or extended as per your requirement.

Replace name with partition-name; name is required for GPT partition tables.


Replace fs-type with any one of btrfs, ext2, ext3, ext4, fat16, fat32, hfs, hfs+, linux-swap, ntfs,
reiserfs, or xfs; fs-type is optional.

Replace start end with the size in megabytes as per your requirement.

For example, to create a primary partition with an ext3 file system from 1024 megabytes until
2048 megabytes on a hard drive, type the following command:

(parted) mkpart primary 1024 2048

NOTE

If you use the mkpartfs command instead, the file system is created after the
partition is created. However, parted does not support creating an ext3 file
system. Thus, if you wish to create an ext3 file system, use mkpart and create
the file system with the mkfs command as described later.

The changes start taking place as soon as you press Enter, so review the command before executing it.

5. View the partition table to confirm that the created partition is in the partition table with the
correct partition type, file system type, and size using the following command:

(parted) print

Also remember the minor number of the new partition so that you can label any file systems on
it.

6. Exit the parted shell:

(parted) quit

7. Use the following command after parted is closed to make sure the kernel recognizes the new
partition:

# cat /proc/partitions

The maximum number of partitions parted can create is 128. While the GUID Partition Table (GPT)
specification allows for more partitions by growing the area reserved for the partition table, common
practice used by parted is to limit it to enough area for 128 partitions.

13.2.1. Formatting and Labeling the Partition


To format and label the partition use the following procedure:

Procedure 13.2. Format and Label the Partition

1. The newly created partition does not yet have a file system. To create an ext4 file system on it, use:

# mkfs.ext4 /dev/sda6



WARNING

Formatting the partition permanently destroys any data that currently exists
on the partition.

2. Label the file system on the partition. For example, if the file system on the new partition is
/dev/sda6 and you want to label it Work, use:

# e2label /dev/sda6 "Work"

By default, the installation program uses the mount point of the partition as the label to make
sure the label is unique. You can use any label you want.

3. Create a mount point (e.g. /work) as root.

13.2.2. Add the Partition to /etc/fstab

1. As root, edit the /etc/fstab file to include the new partition using the partition's UUID.

Use the command blkid -o list for a complete list of the partition's UUID, or blkid
device for individual device details.

In /etc/fstab (an example entry is shown after this procedure):

The first column should contain UUID= followed by the file system's UUID.

The second column should contain the mount point for the new partition.

The third column should be the file system type: for example, ext4 or swap.

The fourth column lists mount options for the file system. The word defaults here means
that the partition is mounted at boot time with default options.

The fifth and sixth field specify backup and check options. Example values for a non-root
partition are 0 2.

2. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

3. Try mounting the file system to verify that the configuration works:

# mount /work
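
The following sketch shows what such an entry might look like for the /work mount point used in this chapter; the UUID shown is a placeholder for the value reported by blkid:

UUID=UUID-reported-by-blkid /work ext4 defaults 0 2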

Additional Information

If you need more information about the format of /etc/fstab, see the fstab(5) man page.


13.3. REMOVING A PARTITION


WARNING

Do not attempt to remove a partition on a device that is in use.

Procedure 13.3. Remove a Partition

1. Before removing a partition, do one of the following:

Boot into rescue mode, or

Unmount any partitions on the device and turn off any swap space on the device.

2. Start the parted utility:

# parted device

Replace device with the device on which to remove the partition: for example, /dev/sda.

3. View the current partition table to determine the minor number of the partition to remove:

(parted) print

4. Remove the partition with the command rm. For example, to remove the partition with minor
number 3:

(parted) rm 3

The changes start taking place as soon as you press Enter, so review the command before
committing to it.

5. After removing the partition, use the print command to confirm that it is removed from the
partition table:

(parted) print

6. Exit from the parted shell:

(parted) quit

7. Examine the content of the /proc/partitions file to make sure the kernel knows the partition
is removed:

# cat /proc/partitions


8. Remove the partition from the /etc/fstab file. Find the line that declares the removed
partition, and remove it from the file.

9. Regenerate mount units so that your system registers the new /etc/fstab configuration:

# systemctl daemon-reload

13.4. SETTING A PARTITION TYPE


The partition type, not to be confused with the file system type, is used by a running system only rarely.
However, the partition type matters to on-the-fly generators, such as systemd-gpt-auto-generator,
which use the partition type to, for example, automatically identify and mount devices.

You can start the fdisk utility and use the t command to set the partition type. The following example
shows how to change the partition type of the first partition to 0x83, the default on Linux:

# fdisk /dev/sdc
Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 83
Changed type of partition 'Linux LVM' to 'Linux'.

The parted utility provides some control of partition types by trying to map the partition type to 'flags',
which is not convenient for end users. The parted utility can handle only certain partition types, for
example LVM or RAID. To remove, for example, the lvm flag from the first partition with parted, use:

# parted /dev/sdc 'set 1 lvm off'

For a list of commonly used partition types and hexadecimal numbers used to represent them, see the
Partition Types table in the Partitions: Turning One Drive Into Many appendix of the Red Hat
Enterprise Linux 7 Installation Guide.

13.5. RESIZING A PARTITION WITH FDISK


The fdisk utility allows you to create and manipulate GPT, MBR, Sun, SGI, and BSD partition tables.
On disks with a GUID Partition Table (GPT), using the parted utility is recommended, as fdisk GPT
support is in an experimental phase.

Before resizing a partition, back up the data stored on the file system and test the procedure, as the only
way to change a partition size using fdisk is by deleting and recreating the partition.

IMPORTANT

The partition you are resizing must be the last partition on a particular disk.

Red Hat only supports extending and resizing LVM partitions.

Procedure 13.4. Resizing a Partition

The following procedure is provided only for reference. To resize a partition using fdisk:

1. Unmount the device:


# umount /dev/vda

2. Run fdisk disk_name. For example:

# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help):

3. Use the p option to determine the line number of the partition to be deleted.

Command (m for help): p


Disk /dev/vda: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0006d09a

Device Boot Start End Blocks Id System


/dev/vda1 * 2048 1026047 512000 83 Linux
/dev/vda2 1026048 31457279 15215616 8e Linux LVM

4. Use the d option to delete a partition. If there is more than one partition available, fdisk
prompts you to provide a number of the partition to delete:

Command (m for help): d


Partition number (1,2, default 2): 2
Partition 2 is deleted

5. Use the n option to create a partition and follow the prompts. Allow enough space for any future
resizing. The fdisk default behavior (press Enter) is to use all space on the device. You can
specify the end of the partition by sectors, or specify a human-readable size by using
+<size><suffix>, for example +500M, or +10G.

Red Hat recommends using the human-readable size specification if you do not want to use all
free space, as fdisk aligns the end of the partition with the physical sectors. If you specify the
size by providing an exact number (in sectors), fdisk does not align the end of the partition.

Command (m for help): n


Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p): *Enter*
Using default response p
Partition number (2-4, default 2): *Enter*
First sector (1026048-31457279, default 1026048): *Enter*
Using default value 1026048

Last sector, +sectors or +size{K,M,G} (1026048-31457279, default 31457279): +500M
Partition 2 of type Linux and of size 500 MiB is set

6. Set the partition type to LVM:

Command (m for help): t


Partition number (1,2, default 2): *Enter*
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

7. Write the changes with the w option when you are sure the changes are correct, as errors can
cause instability with the selected partition.

8. Run e2fsck on the device to check for consistency:

# e2fsck /dev/vda
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
ext4-1: 11/131072 files (0.0% non-contiguous), 27050/524128 blocks

9. Mount the device:

# mount /dev/vda

For more information, see the fdisk(8) manual page.

CHAPTER 14. CREATING AND MAINTAINING SNAPSHOTS WITH SNAPPER

A snapshot volume is a point in time copy of a target volume that provides a way to revert a file system
back to an earlier state. Snapper is a command-line tool to create and maintain snapshots for Btrfs and
thinly-provisioned LVM file systems.

14.1. CREATING INITIAL SNAPPER CONFIGURATION


Snapper requires discrete configuration files for each volume it operates on. You must set up the
configuration files manually. By default, only the root user is allowed to perform snapper commands.

The file system recommended by Red Hat with Snapper depends on your Red Hat Enterprise Linux
version:

In Red Hat Enterprise Linux 7.4 or earlier versions of Red Hat Enterprise Linux 7, use ext4 with
Snapper. Use the XFS file system on lvm-thin volumes only if you are monitoring the amount of
free space in the pool to prevent out-of-space problems that can lead to a failure.

In Red Hat Enterprise Linux 7.5 or later versions, use XFS with Snapper.

Note that the Btrfs tools and file system are provided as a Technology Preview, which makes them unsuitable for production systems.

Although it is possible to allow a user or group other than root to use certain Snapper commands,
Red Hat recommends that you do not add elevated permissions to otherwise unprivileged users or
groups. Such a configuration bypasses SELinux and could pose a security risk. Red Hat recommends
that you review these capabilities with your Security Team and consider using the sudo infrastructure
instead.

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Procedure 14.1. Creating a Snapper Configuration File

1. Create or choose either:

A thinly-provisioned logical volume with a Red Hat supported file system on top of it, or

A Btrfs subvolume.

2. Mount the file system.

3. Create the configuration file that defines this volume.

For LVM2:

# snapper -c config_name create-config -f "lvm(fs_type)" /mount-point

For example, to create a configuration file called lvm_config on an LVM2 subvolume with an
ext4 file system, mounted at /lvm_mount, use:

# snapper -c lvm_config create-config -f "lvm(ext4)" /lvm_mount

For Btrfs:

# snapper -c config_name create-config -f btrfs /mount-point

The -c config_name option specifies the name of the configuration file.

The create-config tells snapper to create a configuration file.

The -f file_system tells snapper what file system to use; if this is omitted snapper will
attempt to detect the file system.

The /mount-point is where the subvolume or thinly-provisioned LVM2 file system is mounted.

Alternatively, to create a configuration file called btrfs_config, on a Btrfs subvolume that is mounted at /btrfs_mount, use:

# snapper -c btrfs_config create-config -f btrfs /btrfs_mount

The configuration files are stored in the /etc/snapper/configs/ directory.

14.2. CREATING A SNAPPER SNAPSHOT


Snapper can create the following kinds of snapshots:

Pre Snapshot
A pre snapshot serves as a point of origin for a post snapshot. The two are closely tied and designed
to track file system modification between the two points. The pre snapshot must be created before
the post snapshot.

Post Snapshot
A post snapshot serves as the end point to the pre snapshot. The coupled pre and post snapshots
define a range for comparison. By default, every new snapper volume is configured to create a
background comparison after a related post snapshot is created successfully.

Single Snapshot
A single snapshot is a standalone snapshot created at a specific moment. These can be used to
track a timeline of modifications and have a general point to return to later.

14.2.1. Creating a Pre and Post Snapshot Pair

14.2.1.1. Creating a Pre Snapshot with Snapper


To create a pre snapshot, use:

# snapper -c config_name create -t pre

The -c config_name option creates a snapshot according to the specifications in the named
configuration file. If the configuration file does not yet exist, see Section 14.1, “Creating Initial Snapper
Configuration”.

The create -t option specifies what type of snapshot to create. Accepted entries are pre, post, or
single.

For example, to create a pre snapshot using the lvm_config configuration file, as created in
Section 14.1, “Creating Initial Snapper Configuration”, use:

# snapper -c lvm_config create -t pre -p
1

The -p option prints the number of the created snapshot and is optional.

14.2.1.2. Creating a Post Snapshot with Snapper

A post snapshot is the end point of the snapshot and should be created after the parent pre snapshot by
following the instructions in Section 14.2.1.1, “Creating a Pre Snapshot with Snapper”.

Procedure 14.2. Creating a Post Snapshot

1. Determine the number of the pre snapshot:

# snapper -c config_name list

For example, to display the list of snapshots created using the configuration file lvm_config,
use the following:

# snapper -c lvm_config list


Type   | # | Pre # | Date        | User | Cleanup | Description | Userdata
-------+---+-------+-------------+------+---------+-------------+---------
single | 0 |       |             | root |         | current     |
pre    | 1 |       | Mon 06<...> | root |         |             |

This output shows that the pre snapshot is number 1.

2. Create a post snapshot that is linked to a previously created pre snapshot:

# snapper -c config_file create -t post --pre-num pre_snapshot_number

The -t post option specifies the creation of the post snapshot type.

The --pre-num option specifies the corresponding pre snapshot.


For example, to create a post snapshot using the lvm_config configuration file and is linked to
pre snapshot number 1, use:

# snapper -c lvm_config create -t post --pre-num 1 -p


2

The -p option prints the number of the created snapshot and is optional.

3. The pre and post snapshots 1 and 2 are now created and paired. Verify this with the list
command:

# snapper -c lvm_config list


Type   | # | Pre # | Date        | User | Cleanup | Description | Userdata
-------+---+-------+-------------+------+---------+-------------+---------
single | 0 |       |             | root |         | current     |
pre    | 1 |       | Mon 06<...> | root |         |             |
post   | 2 | 1     | Mon 06<...> | root |         |             |

14.2.1.3. Wrapping a Command in Pre and Post Snapshots

You can also wrap a command within a pre and post snapshot, which can be useful when testing. See
Procedure 14.3, “Wrapping a Command in Pre and Post Snapshots”, which is a shortcut for the following
steps:

1. Running the snapper create pre snapshot command.

2. Running a command or a list of commands to perform actions with a possible impact on the file
system content.

3. Running the snapper create post snapshot command.

Procedure 14.3. Wrapping a Command in Pre and Post Snapshots

1. To wrap a command in pre and post snapshots:

# snapper -c lvm_config create --command "command_to_be_tracked"

For example, to track the creation of the /lvm_mount/hello_file file:

# snapper -c lvm_config create --command "echo Hello > /lvm_mount/hello_file"

2. To verify this, use the status command:

# snapper -c config_file status first_snapshot_number..second_snapshot_number

For example, to track the changes made in the first step:


# snapper -c lvm_config status 3..4


+..... /lvm_mount/hello_file

Use the list command to verify the number of the snapshot if needed.

For more information on the status command, see Section 14.3, “Tracking Changes Between
Snapper Snapshots”.

Note that there is no guarantee that the command in the given example is the only thing the snapshots
capture. Snapper also records anything that is modified by the system, not just what a user modifies.

14.2.2. Creating a Single Snapper Snapshot


Creating a single snapper snapshot is similar to creating a pre or post snapshot, only the create -t
option specifies single. The single snapshot is used to create a single snapshot in time without having it
relate to any others. However, if you are interested in a straightforward way to create snapshots of LVM2
thin volumes without the need to automatically generate comparisons or list additional information,
Red Hat recommends using the System Storage Manager instead of Snapper for this purpose, as
described in Section 16.2.6, “Snapshot”.

To create a single snapshot, use:

# snapper -c config_name create -t single

For example, the following command creates a single snapshot using the lvm_config configuration file.

# snapper -c lvm_config create -t single

Although single snapshots are not specifically designed to track changes, you can use the snapper
diff, xadiff, and status commands to compare any two snapshots. For more information on these
commands, see Section 14.3, “Tracking Changes Between Snapper Snapshots” .

14.2.3. Configuring Snapper to Take Automated Snapshots


Taking automated snapshots is one of key features of Snapper. By default, when you configure Snapper
for a volume, Snapper starts taking a snapshot of the volume every hour.

Under the default configuration, Snapper keeps:

10 hourly snapshots, and the final hourly snapshot is saved as a “daily” snapshot.

10 daily snapshots, and the final daily snapshot for a month is saved as a “monthly” snapshot.

10 monthly snapshots, and the final monthly snapshot is saved as a “yearly” snapshot.

10 yearly snapshots.

Note that, by default, Snapper keeps no more than 50 snapshots in total. However, Snapper keeps all snapshots created less than 1,800 seconds ago by default.

The default configuration is specified in the /etc/snapper/config-templates/default file. When you use the snapper create-config command to create a configuration, any unspecified values are set based on the default configuration. You can edit the configuration for any defined volume in the /etc/snapper/configs/config_name file.
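
As an illustration only, the retention policy described above corresponds to timeline settings along the following lines in the volume's configuration file. The variable names are assumptions based on the standard Snapper configuration template; verify them against /etc/snapper/config-templates/default on your system:

TIMELINE_CREATE="yes"
TIMELINE_MIN_AGE="1800"
TIMELINE_LIMIT_HOURLY="10"
TIMELINE_LIMIT_DAILY="10"
TIMELINE_LIMIT_MONTHLY="10"
TIMELINE_LIMIT_YEARLY="10"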


14.3. TRACKING CHANGES BETWEEN SNAPPER SNAPSHOTS


Use the status, diff, and xadiff commands to track the changes made to a subvolume between
snapshots:

status
The status command shows a list of files and directories that have been created, modified, or
deleted between two snapshots, that is a comprehensive list of changes between two snapshots. You
can use this command to get an overview of the changes without excessive details.

For more information, see Section 14.3.1, “Comparing Changes with the status Command”.

diff
The diff command shows a diff of modified files and directories between two snapshots as received
from the status command if there is at least one modification detected.

For more information, see Section 14.3.2, “Comparing Changes with the diff Command”.

xadiff
The xadiff command compares how the extended attributes of a file or directory have changed
between two snapshots.

For more information, see Section 14.3.3, “Comparing Changes with the xadiff Command”.

14.3.1. Comparing Changes with the status Command

The status command shows a list of files and directories that have been created, modified, or deleted
between two snapshots.

To display the status of files between two snapshots, use:

# snapper -c config_file status first_snapshot_number..second_snapshot_number

Use the list command to determine snapshot numbers if needed.

For example, the following command displays the changes made between snapshot 1 and 2, using the
configuration file lvm_config.

# snapper -c lvm_config status 1..2
tp.... /lvm_mount/dir1
-..... /lvm_mount/dir1/file_a
c.ug.. /lvm_mount/file2
+..... /lvm_mount/file3
....x. /lvm_mount/file4
cp..xa /lvm_mount/file5

Read letters and dots in the first part of the output as columns:

+..... /lvm_mount/file3
||||||
123456


Column 1 indicates any modification of the file (directory entry) type. Possible values are:

Column 1

Output Meaning

. Nothing has changed.

+ File created.

- File deleted.

c Content changed.

t The type of directory entry has changed. For example, a former symbolic link has changed to a
regular file with the same file name.

Column 2 indicates any changes in the file permissions. Possible values are:

Column 2

Output Meaning

. No permissions changed.

p Permissions changed.

Column 3 indicates any changes in the user ownership. Possible values are:

Column 3

Output Meaning

. No user ownership changed.

u User ownership has changed.

Column 4 indicates any changes in the group ownership. Possible values are:

Column 4


Output Meaning

. No group ownership changed.

g Group ownership has changed.

Column 5 indicates any changes in the extended attributes. Possible values are:

Column 5

Output Meaning

. No extended attributes changed.

x Extended attributes changed.

Column 6 indicates any changes in the access control lists (ACLs). Possible values are:

Column 6

Output Meaning

. No ACLs changed.

a ACLs modified.

14.3.2. Comparing Changes with the diff Command

The diff command shows the changes of modified files and directories between two snapshots.

# snapper -c config_name diff first_snapshot_number..second_snapshot_number

Use the list command to determine the number of the snapshot if needed.

For example, to compare the changes made in files between snapshot 1 and snapshot 2 that were
made using the lvm_config configuration file, use:

# snapper -c lvm_config diff 1..2


--- /lvm_mount/.snapshots/13/snapshot/file4 19<...>
+++ /lvm_mount/.snapshots/14/snapshot/file4 20<...>
@@ -0,0 +1 @@
+words

This output shows that file4 was modified to add "words" to the file.

14.3.3. Comparing Changes with the xadiff Command


The xadiff command compares how the extended attributes of a file or directory have changed
between two snapshots:

# snapper -c config_name xadiff first_snapshot_number..second_snapshot_number

Use the list command to determine the number of the snapshot if needed.

For example, to show the xadiff output between snapshot number 1 and snapshot number 2 that were
made using the lvm_config configuration file, use:

# snapper -c lvm_config xadiff 1..2

14.4. REVERSING CHANGES IN BETWEEN SNAPSHOTS


To reverse changes made between two existing Snapper snapshots, use the undochange command in
the following format, where 1 is the first snapshot and 2 is the second snapshot:

snapper -c config_name undochange 1..2
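For example, to reverse the changes recorded between snapshots 1 and 2 of the lvm_config
configuration used earlier in this chapter:

# snapper -c lvm_config undochange 1..2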

IMPORTANT

Using the undochange command does not revert the Snapper volume back to its original
state and does not provide data consistency. Any file modification that occurs outside of
the specified range, for example after snapshot 2, will remain unchanged after reverting
back, for example to the state of snapshot 1. For example, if undochange is run to undo
the creation of a user, any files owned by that user can still remain.

There is also no mechanism to ensure file consistency as a snapshot is made, so any


inconsistencies that already exist can be transferred back to the snapshot when the
undochange command is used.

Do not use the Snapper undochange command with the root file system, as doing so is
likely to lead to a failure.

The following diagram demonstrates how the undochange command works:


Figure 14.1. Snapper Status over Time

The diagram shows the point in time at which snapshot_1 is created, then file_a is created, and then
file_b is deleted. Snapshot_2 is then created, after which file_a is edited and file_c is created. This is
now the current state of the system: the current system has an edited version of file_a, no file_b, and a
newly created file_c.

When the undochange command is called, Snapper generates a list of modified files between the first
listed snapshot and the second. In the diagram, if you use the snapper -c SnapperExample
undochange 1..2 command, Snapper creates a list of modified files (that is, file_a is created;
file_b is deleted) and applies them to the current system. Therefore:

the current system will not have file_a, as it did not yet exist when snapshot_1 was
created.

file_b will exist, copied from snapshot_1 into the current system.

file_c will exist, as its creation was outside the specified time.

Be aware that if file_b and file_c conflict, the system can become corrupted.

You can also use the snapper -c SnapperExample undochange 2..1 command. In this case,
the current system replaces the edited version of file_a with one copied from snapshot_1, which
undoes edits of that file made after snapshot_2 was created.

Using the mount and unmount Commands to Reverse Changes


The undochange command is not always the best way to revert modifications. With the status and
diff commands, you can make a qualified decision, and use the mount and unmount commands
instead of the undochange command. The mount and unmount commands are only useful if you want to mount snapshots
and browse their content independently of the Snapper workflow.

If needed, the mount command activates the respective LVM Snapper snapshot before mounting. Use the
mount and unmount commands if you are, for example, interested in mounting snapshots and
extracting older versions of several files manually. To revert files manually, copy them from a mounted
snapshot to the current file system. The current file system, snapshot 0, is the live file system created in
Procedure 14.1, “Creating a Snapper Configuration File”. Copy the files to the subtree of the original
/mount-point.
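The following is a minimal sketch of such a manual revert, assuming the lvm_config configuration and
the file names from Figure 14.1; the snapshot path follows the
/mount-point/.snapshots/snapshot_number/snapshot layout shown in the diff output above, and the
subcommand for unmounting may be spelled umount rather than unmount depending on your Snapper
version:

# snapper -c lvm_config mount 1
# cp /lvm_mount/.snapshots/1/snapshot/file_b /lvm_mount/file_b
# snapper -c lvm_config umount 1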

Use the mount and unmount commands for explicit client-side requests. The
/etc/snapper/configs/config_name file contains the ALLOW_USERS= and ALLOW_GROUPS=
variables where you can add users and groups. Then, snapperd allows you to perform mount
operations for the added users and groups.

14.5. DELETING A SNAPPER SNAPSHOT


To delete a snapshot:

# snapper -c config_name delete snapshot_number

You can use the list command to verify that the snapshot was successfully deleted.


CHAPTER 15. SWAP SPACE


Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs
more memory resources and the RAM is full, inactive pages in memory are moved to the swap space.
While swap space can help machines with a small amount of RAM, it should not be considered a
replacement for more RAM. Swap space is located on hard drives, which have a slower access time than
physical memory. Swap space can be a dedicated swap partition (recommended), a swap file, or a
combination of swap partitions and swap files. Note that Btrfs does not support swap space.

In years past, the recommended amount of swap space increased linearly with the amount of RAM in the
system. However, modern systems often include hundreds of gigabytes of RAM. As a consequence,
recommended swap space is considered a function of system memory workload, not system memory.

Table 15.1, “Recommended System Swap Space” illustrates the recommended size of a swap partition
depending on the amount of RAM in your system and whether you want sufficient memory for your
system to hibernate. The recommended swap partition size is established automatically during
installation. To allow for hibernation, however, you need to edit the swap space in the custom partitioning
stage.

Recommendations in Table 15.1, “Recommended System Swap Space” are especially important on
systems with low memory (1 GB and less). Failure to allocate sufficient swap space on these systems
can cause issues such as instability or even render the installed system unbootable.

Table 15.1. Recommended System Swap Space

Amount of RAM in the system Recommended swap space Recommended swap space if
allowing for hibernation

⩽ 2 GB 2 times the amount of RAM 3 times the amount of RAM

> 2 GB – 8 GB Equal to the amount of RAM 2 times the amount of RAM

> 8 GB – 64 GB At least 4 GB 1.5 times the amount of RAM

> 64 GB At least 4 GB Hibernation not recommended

At the border between each range listed in Table 15.1, “Recommended System Swap Space”, for
example a system with 2 GB, 8 GB, or 64 GB of system RAM, discretion can be exercised with regard to
chosen swap space and hibernation support. If your system resources allow for it, increasing the swap
space may lead to better performance. A swap space of at least 100 GB is recommended for systems
with over 140 logical processors or over 3 TB of RAM.

Note that distributing swap space over multiple storage devices also improves swap space performance,
particularly on systems with fast drives, controllers, and interfaces.


IMPORTANT

File systems and LVM2 volumes assigned as swap space should not be in use when
being modified. Any attempts to modify swap fail if a system process or the kernel is using
swap space. Use the free and cat /proc/swaps commands to verify how much and
where swap is in use.

You should modify swap space while the system is booted in rescue mode, see Booting
Your Computer in Rescue Mode in the Red Hat Enterprise Linux 7 Installation Guide.
When prompted to mount the file system, select Skip.

15.1. ADDING SWAP SPACE


Sometimes it is necessary to add more swap space after installation. For example, you may upgrade the
amount of RAM in your system from 1 GB to 2 GB, but there is only 2 GB of swap space. It might be
advantageous to increase the amount of swap space to 4 GB if you perform memory-intense operations
or run applications that require a large amount of memory.

You have three options: create a new swap partition, create a new swap file, or extend swap on an
existing LVM2 logical volume. It is recommended that you extend an existing logical volume.

15.1.1. Extending Swap on an LVM2 Logical Volume


By default, Red Hat Enterprise Linux 7 uses all available space during installation. If this is the case with
your system, then you must first add a new physical volume to the volume group used by the swap
space.

After adding additional storage to the swap space's volume group, it is now possible to extend it. To do
so, perform the following procedure (assuming /dev/VolGroup00/LogVol01 is the volume you want
to extend by 2 GB):

Procedure 15.1. Extending Swap on an LVM2 Logical Volume

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol01

2. Resize the LVM2 logical volume by 2 GB:

# lvresize /dev/VolGroup00/LogVol01 -L +2G

3. Format the new swap space:

# mkswap /dev/VolGroup00/LogVol01

4. Enable the extended logical volume:

# swapon -v /dev/VolGroup00/LogVol01

5. To test if the swap logical volume was successfully extended and activated, inspect active swap
space:


$ cat /proc/swaps
$ free -h

15.1.2. Creating an LVM2 Logical Volume for Swap


To add a swap volume group 2 GB in size, assuming /dev/VolGroup00/LogVol02 is the swap
volume you want to add:

1. Create the LVM2 logical volume of size 2 GB:

# lvcreate VolGroup00 -n LogVol02 -L 2G

2. Format the new swap space:

# mkswap /dev/VolGroup00/LogVol02

3. Add the following entry to the /etc/fstab file:

/dev/VolGroup00/LogVol02 swap swap defaults 0 0

4. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

5. Activate swap on the logical volume:

# swapon -v /dev/VolGroup00/LogVol02

6. To test if the swap logical volume was successfully created and activated, inspect active swap
space:

$ cat /proc/swaps
$ free -h

15.1.3. Creating a Swap File


To add a swap file:

Procedure 15.2. Add a Swap File

1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the
number of blocks. For example, the block count of a 64 MB swap file is 65536.

2. Create an empty file:

# dd if=/dev/zero of=/swapfile bs=1024 count=65536

Replace count with the value equal to the desired number of blocks.

3. Set up the swap file with the command:


# mkswap /swapfile

4. Change the security of the swap file so it is not world readable.

# chmod 0600 /swapfile

5. To enable the swap file at boot time, edit /etc/fstab as root to include the following entry:

/swapfile swap swap defaults 0 0

The next time the system boots, it activates the new swap file.

6. Regenerate mount units so that your system registers the new /etc/fstab configuration:

# systemctl daemon-reload

7. To activate the swap file immediately:

# swapon /swapfile

8. To test if the new swap file was successfully created and activated, inspect active swap space:

$ cat /proc/swaps
$ free -h

15.2. REMOVING SWAP SPACE


Sometimes it can be prudent to reduce swap space after installation. For example, you have
downgraded the amount of RAM in your system from 1 GB to 512 MB, but there is 2 GB of swap space
still assigned. It might be advantageous to reduce the amount of swap space to 1 GB, since the larger 2
GB could be wasting disk space.

You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or
reduce swap space on an existing LVM2 logical volume.

15.2.1. Reducing Swap on an LVM2 Logical Volume


To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you
want to reduce):

Procedure 15.3. Reducing an LVM2 Swap Logical Volume

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol01

2. Reduce the LVM2 logical volume by 512 MB:

# lvreduce /dev/VolGroup00/LogVol01 -L -512M

3. Format the new swap space:


# mkswap /dev/VolGroup00/LogVol01

4. Activate swap on the logical volume:

# swapon -v /dev/VolGroup00/LogVol01

5. To test if the swap logical volume was successfully reduced, inspect active swap space:

$ cat /proc/swaps
$ free -h

15.2.2. Removing an LVM2 Logical Volume for Swap


To remove a swap volume group (assuming /dev/VolGroup00/LogVol02 is the swap volume you
want to remove):

Procedure 15.4. Remove a Swap Volume Group

1. Disable swapping for the associated logical volume:

# swapoff -v /dev/VolGroup00/LogVol02

2. Remove the LVM2 logical volume:

# lvremove /dev/VolGroup00/LogVol02

3. Remove the following associated entry from the /etc/fstab file:

/dev/VolGroup00/LogVol02 swap swap defaults 0 0

4. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

5. To test if the logical volume was successfully removed, inspect active swap space:

$ cat /proc/swaps
$ free -h

15.2.3. Removing a Swap File


To remove a swap file:

Procedure 15.5. Remove a Swap File

1. At a shell prompt, execute the following command to disable the swap file (where /swapfile is
the swap file):

# swapoff -v /swapfile


2. Remove its entry from the /etc/fstab file.

3. Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

4. Remove the actual file:

# rm /swapfile

15.3. MOVING SWAP SPACE


To move swap space from one location to another:

1. Remove the existing swap space, as described in Section 15.2, “Removing Swap Space”.

2. Add new swap space, as described in Section 15.1, “Adding Swap Space”.


CHAPTER 16. SYSTEM STORAGE MANAGER (SSM)


System Storage Manager (SSM) provides a command line interface to manage storage in various
technologies. Storage systems are becoming increasingly complicated through the use of Device
Mappers (DM), Logical Volume Managers (LVM), and Multiple Devices (MD). This creates a system that
is not user friendly and makes it easier for errors and problems to arise. SSM alleviates this by creating a
unified user interface. This interface allows users to run complicated systems in a simple manner. For
example, to create and mount a new file system without SSM, there are five commands that must be
used. With SSM only one is needed.
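As an illustration, creating and mounting a new XFS file system on a single disk without SSM might
involve a sequence such as the following (a sketch only; the device name, pool name, and mount point
/mnt/test are hypothetical):

# pvcreate /dev/vdb
# vgcreate lvm_pool /dev/vdb
# lvcreate -L 1G -n lvol001 lvm_pool
# mkfs.xfs /dev/lvm_pool/lvol001
# mount /dev/lvm_pool/lvol001 /mnt/test

With SSM, the equivalent can be achieved with a single command, as described in Section 16.2.3,
“Creating a New Pool, Logical Volume, and File System”:

# ssm create --fs xfs -s 1G /dev/vdb /mnt/test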

This chapter explains how SSM interacts with various back ends and some common use cases.

16.1. SSM BACK ENDS


SSM uses a core abstraction layer in ssmlib/main.py that conforms to the device, pool, and
volume abstraction, ignoring the specifics of the underlying technology. Back ends can be registered in
ssmlib/main.py to handle specific storage technology methods, such as creating, snapshotting, or
removing volumes and pools.

There are already several back ends registered in SSM. The following sections provide basic information
on them as well as definitions on how they handle pools, volumes, snapshots, and devices.

16.1.1. Btrfs Back End

NOTE

Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.

For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.

Btrfs, a file system with many advanced features, is used as a volume management back end in SSM.
Pools, volumes, and snapshots can be created with the Btrfs back end.

16.1.1.1. Btrfs Pool

The Btrfs file system itself is the pool. It can be extended by adding more devices or shrunk by removing
devices. SSM creates a Btrfs file system when a Btrfs pool is created. This means that every new Btrfs
pool has one volume of the same name as the pool which cannot be removed without removing the
entire pool. The default Btrfs pool name is btrfs_pool.

The name of the pool is used as the file system label. If there is already an existing Btrfs file system in
the system without a label, the Btrfs pool will generate a name for internal use in the format of
btrfs_device_base_name.
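For example, a Btrfs pool spanning two devices could be created by selecting the Btrfs back end
explicitly, along the following lines (an illustrative sketch; the device names are hypothetical):

# ssm -b btrfs create /dev/vdb /dev/vdc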

16.1.1.2. Btrfs Volume

Volumes created after the first volume in a pool are the same as sub-volumes. SSM temporarily mounts
the Btrfs file system if it is unmounted in order to create a sub-volume.


The name of a volume is used as the subvolume path in the Btrfs file system. For example, a subvolume
displays as /dev/lvm_pool/lvol001. Every object in this path must exist in order for the volume to
be created. Volumes can also be referenced by their mount point.

16.1.1.3. Btrfs Snapshot

Snapshots can be taken of any Btrfs volume in the system with SSM. Be aware that Btrfs does not
distinguish between subvolumes and snapshots. While this means that SSM cannot recognize the Btrfs
snapshot destination, it will try to recognize special name formats. If the name specified when creating
the snapshot does not match the specific pattern, the snapshot is not recognized and is instead listed as a
regular Btrfs volume.

16.1.1.4. Btrfs Device

Btrfs does not require any special device to be created on.

16.1.2. LVM Back End


Pools, volumes, and snapshots can be created with LVM. The following definitions are from an LVM
point of view.

16.1.2.1. LVM Pool

An LVM pool is the same as an LVM volume group. This means that devices are grouped into the pool, and
new logical volumes can be created out of the LVM pool. The default LVM pool name is lvm_pool.

16.1.2.2. LVM Volume

An LVM volume is the same as an ordinary logical volume.

16.1.2.3. LVM Snapshot

When a snapshot is created from an LVM volume, a new snapshot volume is created, which can then
be handled just like any other LVM volume. Unlike Btrfs, LVM is able to distinguish snapshots from
regular volumes, so there is no need for a snapshot name to match a particular pattern.

16.1.2.4. LVM Device

SSM makes the requirement that an LVM back end be created on a physical device transparent to the user.

16.1.3. Crypt Back End


The crypt back end in SSM uses cryptsetup and the dm-crypt target to manage encrypted volumes.
The crypt back end can be used as a regular back end for creating encrypted volumes on top of regular
block devices (or on other volumes such as LVM or MD volumes), or to create encrypted LVM volumes
in a single step.
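As a sketch of the latter use case, an encrypted LVM volume could be created in one step if your
version of SSM supports the --encrypt (-e) option of the ssm create command (an assumption; the
device name and size are hypothetical):

# ssm create --fs xfs -s 1G -e luks /dev/vdb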

Only volumes can be created with a crypt back end; pooling is not supported and it does not require
special devices.

The following sections define volumes and snapshots from the crypt point of view.

16.1.3.1. Crypt Volume


Crypt volumes are created by dm-crypt and represent the data on the original encrypted device in an
unencrypted form. They do not support RAID or any device concatenation.

Two modes, or extensions, are supported: luks and plain. Luks is used by default. For more information
on the extensions, see man cryptsetup.

16.1.3.2. Crypt Snapshot

While the crypt back end does not support snapshotting, if the encrypted volume is created on top of an
LVM volume, the volume itself can be snapshotted. The snapshot can then be opened by using
cryptsetup.

16.1.4. Multiple Devices (MD) Back End


The MD back end is currently limited to gathering information about MD volumes in the system.

16.2. COMMON SSM TASKS


The following sections describe common SSM tasks.

16.2.1. Installing SSM


To install SSM use the following command:

# yum install system-storage-manager

There are several back ends that are enabled only if the supporting packages are installed:

The LVM back end requires the lvm2 package.

The Btrfs back end requires the btrfs-progs package.

The Crypt back end requires the device-mapper and cryptsetup packages.

16.2.2. Displaying Information about All Detected Devices


Displaying information about all detected devices, pools, volumes, and snapshots is done with the list
command. The ssm list command with no options displays the following output:

# ssm list
----------------------------------------------------------
Device Free Used Total Pool Mount point
----------------------------------------------------------
/dev/sda 2.00 GB PARTITIONED
/dev/sda1 47.83 MB /test
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
----------------------------------------------------------
------------------------------------------------
Pool Type Devices Free Used Total
------------------------------------------------
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB


------------------------------------------------
----------------------------------------------------------------------
-----------
Volume Pool Volume size FS FS size Free Type
Mount point
----------------------------------------------------------------------
-----------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear /
/dev/rhel/swap rhel 1000.00 MB linear
/dev/sda1 47.83 MB xfs 44.50 MB 44.41 MB part
/test
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part
/boot
----------------------------------------------------------------------
-----------

This display can be further narrowed down by using arguments to specify what should be displayed. The
list of available options can be found with the ssm list --help command.

NOTE

Depending on the argument given, SSM may not display everything.

Running the devices or dev argument omits some devices. CD-ROMs and
DM/MD devices, for example, are intentionally hidden, as they are listed as
volumes.

Some back ends do not support snapshots and cannot distinguish between a
snapshot and a regular volume. Running the snapshot argument on one of
these back ends causes SSM to attempt to recognize the volume name in order to
identify a snapshot. If the SSM regular expression does not match the snapshot
pattern, then the snapshot is not recognized.

With the exception of the main Btrfs volume (the file system itself), any
unmounted Btrfs volumes are not shown.

16.2.3. Creating a New Pool, Logical Volume, and File System


In this section, a new pool with a default name is created from the devices /dev/vdb and
/dev/vdc, along with a 1 GB logical volume and an XFS file system.

The command to create this scenario is ssm create --fs xfs -s 1G /dev/vdb /dev/vdc. The
following options are used:

The --fs option specifies the required file system type. Currently supported file system types are:

ext3

ext4

xfs

btrfs


The -s specifies the size of the logical volume. The following suffixes are supported to define
units:

K or k for kilobytes

M or m for megabytes

G or g for gigabytes

T or t for terabytes

P or p for petabytes

E or e for exabytes

The two listed devices, /dev/vdb and /dev/vdc, are the two devices used to create the pool.

# ssm create --fs xfs -s 1G /dev/vdb /dev/vdc


Physical volume "/dev/vdb" successfully created
Physical volume "/dev/vdc" successfully created
Volume group "lvm_pool" successfully created
Logical volume "lvol001" created

There are two other options for the ssm create command that may be useful. The first is the -p pool
option. This specifies the pool that the volume is to be created in. If it does not yet exist, then SSM
creates it. This was omitted in the given example, which caused SSM to use the default name
lvm_pool. However, to use a specific name to fit in with any existing naming conventions, use the -p
option.

The second useful option is the -n name option. This names the newly created logical volume. As
with the -p option, this is needed in order to use a specific name to fit in with any existing naming conventions.

An example of these two options being used follows:

# ssm create --fs xfs -p new_pool -n XFS_Volume /dev/vdd


Volume group "new_pool" successfully created
Logical volume "XFS_Volume" created

SSM has now created two physical volumes, a pool, and a logical volume with the ease of only one
command.

16.2.4. Checking a File System's Consistency


The ssm check command checks the file system consistency on the volume. It is possible to specify
multiple volumes to check. If there is no file system on the volume, then the volume is skipped.

To check all devices in the volume lvol001, run the command ssm check
/dev/lvm_pool/lvol001.

# ssm check /dev/lvm_pool/lvol001


Checking xfs file system on '/dev/mapper/lvm_pool-lvol001'.
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...


- found root inode chunk


Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- agno = 4
- agno = 5
- agno = 6
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

16.2.5. Increasing a Volume's Size


The ssm resize command changes the size of the specified volume and file system. If there is no file
system then only the volume itself will be resized.

For this example, there is currently one 900 MB logical volume called lvol001 on /dev/vdb.

# ssm list
-----------------------------------------------------------------
Device Free Used Total Pool Mount point
-----------------------------------------------------------------
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
/dev/vdb 120.00 MB 900.00 MB 1.00 GB lvm_pool
/dev/vdc 1.00 GB
-----------------------------------------------------------------
---------------------------------------------------------
Pool Type Devices Free Used Total
---------------------------------------------------------
lvm_pool lvm 1 120.00 MB 900.00 MB 1020.00 MB
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
---------------------------------------------------------
----------------------------------------------------------------------
----------------------


Volume Pool Volume size FS FS size Free


Type Mount point
----------------------------------------------------------------------
----------------------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB
linear /
/dev/rhel/swap rhel 1000.00 MB
linear
/dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB
linear
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB
part /boot
----------------------------------------------------------------------
----------------------

The logical volume needs to be increased by another 500MB. To do so we will need to add an extra
device to the pool:

~]# ssm resize -s +500M /dev/lvm_pool/lvol001 /dev/vdc


Physical volume "/dev/vdc" successfully created
Volume group "lvm_pool" successfully extended
Phase 1 - find and verify superblock...
Phase 2 - using internal log
- scan filesystem freespace and inode maps...
- found root inode chunk
Phase 3 - for each AG...
- scan (but don't clear) agi unlinked lists...
- process known inodes and perform inode discovery...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
- process newly discovered inodes...
Phase 4 - check for duplicate blocks...
- setting up duplicate extent list...
- check for inodes claiming duplicate blocks...
- agno = 0
- agno = 1
- agno = 2
- agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
- traversing filesystem ...
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
Extending logical volume lvol001 to 1.37 GiB
Logical volume lvol001 successfully resized
meta-data=/dev/mapper/lvm_pool-lvol001 isize=256 agcount=4,
agsize=57600 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=230400, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0


log =internal bsize=4096 blocks=853, version=2


= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 230400 to 358400

SSM runs a check on the device and then extends the volume by the specified amount. This can be
verified with the ssm list command.

# ssm list
------------------------------------------------------------------
Device Free Used Total Pool Mount point
------------------------------------------------------------------
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
/dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool
/dev/vdc 640.00 MB 380.00 MB 1.00 GB lvm_pool
------------------------------------------------------------------
------------------------------------------------------
Pool Type Devices Free Used Total
------------------------------------------------------
lvm_pool lvm 2 640.00 MB 1.37 GB 1.99 GB
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
------------------------------------------------------
----------------------------------------------------------------------
------------------------
Volume Pool Volume size FS FS size
Free Type Mount point
----------------------------------------------------------------------
------------------------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB
linear /
/dev/rhel/swap rhel 1000.00 MB
linear
/dev/lvm_pool/lvol001 lvm_pool 1.37 GB xfs 1.36 GB 1.36 GB
linear
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB
part /boot
----------------------------------------------------------------------
------------------------


NOTE

Decreasing a volume's size is only possible for LVM volumes; it is not supported with other volume
types. To decrease a volume, use a - instead of a +. For example, to decrease the size of an
LVM volume by 50M, the command would be:

# ssm resize -s-50M /dev/lvm_pool/lvol002


Rounding size to boundary between physical extents: 972.00 MiB
WARNING: Reducing active logical volume to 972.00 MiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol002? [y/n]: y
Reducing logical volume lvol002 to 972.00 MiB
Logical volume lvol002 successfully resized

Without either the + or -, the value is taken as absolute.

16.2.6. Snapshot
To take a snapshot of an existing volume, use the ssm snapshot command.

NOTE

This operation fails if the back end that the volume belongs to does not support
snapshotting.

To create a snapshot of the lvol001, use the following command:

# ssm snapshot /dev/lvm_pool/lvol001


Logical volume "snap20150519T130900" created

To verify this, use the ssm list command, and note the extra snapshot section.

# ssm list
----------------------------------------------------------------
Device Free Used Total Pool Mount point
----------------------------------------------------------------
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
/dev/vdb 0.00 KB 1020.00 MB 1.00 GB lvm_pool
/dev/vdc 1.00 GB
----------------------------------------------------------------
--------------------------------------------------------
Pool Type Devices Free Used Total
--------------------------------------------------------
lvm_pool lvm 1 0.00 KB 1020.00 MB 1020.00 MB
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
--------------------------------------------------------
----------------------------------------------------------------------
------------------------
Volume Pool Volume size FS FS size
Free Type Mount point
----------------------------------------------------------------------


------------------------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB
linear /
/dev/rhel/swap rhel 1000.00 MB
linear
/dev/lvm_pool/lvol001 lvm_pool 900.00 MB xfs 896.67 MB 896.54 MB
linear
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB
part /boot
----------------------------------------------------------------------
------------------------
----------------------------------------------------------------------
------------
Snapshot Origin Pool Volume size
Size Type
----------------------------------------------------------------------
------------
/dev/lvm_pool/snap20150519T130900 lvol001 lvm_pool 120.00 MB 0.00 KB
linear
----------------------------------------------------------------------
------------

16.2.7. Removing an Item


The ssm remove command is used to remove an item: either a device, a pool, or a volume.

NOTE

If a device is being used by a pool when you remove it, the removal fails. This can be forced using the
-f argument.

If a volume is mounted when you remove it, the removal fails. Unlike with devices, it cannot be forced
with the -f argument.

To remove the lvm_pool and everything within it use the following command:

# ssm remove lvm_pool


Do you really want to remove volume group "lvm_pool" containing 2 logical
volumes? [y/n]: y
Do you really want to remove active logical volume snap20150519T130900?
[y/n]: y
Logical volume "snap20150519T130900" successfully removed
Do you really want to remove active logical volume lvol001? [y/n]: y
Logical volume "lvol001" successfully removed
Volume group "lvm_pool" successfully removed

16.3. SSM RESOURCES


For more information on SSM, see the following resources:

The ssm man page provides good descriptions and examples, as well as details on all of the
commands and options that are too specific to be documented here.


Local documentation for SSM is stored in the doc/ directory.

The SSM wiki can be accessed at http://storagemanager.sourceforge.net/index.html.

You can subscribe to the mailing list at
https://lists.sourceforge.net/lists/listinfo/storagemanager-devel, and the mailing list archives are
available at http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel. The
mailing list is where developers communicate. There is currently no user mailing list, so feel free to post
questions there as well.


CHAPTER 17. DISK QUOTAS


Disk space can be restricted by implementing disk quotas which alert a system administrator before a
user consumes too much disk space or a partition becomes full.

Disk quotas can be configured for individual users as well as user groups. This makes it possible to
manage the space allocated for user-specific files (such as email) separately from the space allocated to
the projects a user works on (assuming the projects are given their own groups).

In addition, quotas can be set not just to control the number of disk blocks consumed but to control the
number of inodes (data structures that contain information about files in UNIX file systems). Because
inodes are used to contain file-related information, this allows control over the number of files that can be
created.

The quota RPM must be installed to implement disk quotas.

NOTE

This chapter is for all file systems; however, some file systems have their own quota
management tools. See the corresponding description for the applicable file systems.

For XFS file systems, see Section 3.3, “XFS Quota Management”.

Btrfs does not have disk quotas, so it is not covered.

17.1. CONFIGURING DISK QUOTAS


To implement disk quotas, use the following steps:

1. Enable quotas per file system by modifying the /etc/fstab file.

2. Remount the file system(s).

3. Create the quota database files and generate the disk usage table.

4. Assign quota policies.

Each of these steps is discussed in detail in the following sections.

17.1.1. Enabling Quotas

Procedure 17.1. Enabling Quotas

1. Log in as root.

2. Edit the /etc/fstab file.

3. Add either the usrquota or grpquota or both options to the file systems that require quotas.

Example 17.1. Edit /etc/fstab

For example, to use the text editor vim type the following:

# vim /etc/fstab


Example 17.2. Add Quotas

/dev/VolGroup00/LogVol00 /        ext3   defaults                    1 1
LABEL=/boot              /boot    ext3   defaults                    1 2
none                     /dev/pts devpts gid=5,mode=620              0 0
none                     /dev/shm tmpfs  defaults                    0 0
none                     /proc    proc   defaults                    0 0
none                     /sys     sysfs  defaults                    0 0
/dev/VolGroup00/LogVol02 /home    ext3   defaults,usrquota,grpquota  1 2
/dev/VolGroup00/LogVol01 swap     swap   defaults                    0 0
. . .

In this example, the /home file system has both user and group quotas enabled.

NOTE

The following examples assume that a separate /home partition was created during the
installation of Red Hat Enterprise Linux. The root (/) partition can be used for setting
quota policies in the /etc/fstab file.

17.1.2. Remounting the File Systems


After adding either the usrquota or grpquota or both options, remount each file system whose fstab
entry has been modified. If the file system is not in use by any process, use one of the following
methods:

Run the umount command followed by the mount command to remount the file system. See the
man page for both umount and mount for the specific syntax for mounting and unmounting
various file system types.

Run the mount -o remount file-system command (where file-system is the name of
the file system) to remount the file system. For example, to remount the /home file system, run
the mount -o remount /home command.

If the file system is currently in use, the easiest method for remounting the file system is to reboot the
system.

17.1.3. Creating the Quota Database Files


After each quota-enabled file system is remounted, run the quotacheck command.

The quotacheck command examines quota-enabled file systems and builds a table of the current disk
usage per file system. The table is then used to update the operating system's copy of disk usage. In
addition, the file system's disk quota files are updated.

NOTE

The quotacheck command has no effect on XFS as the table of disk usage is completed
automatically at mount time. See the man page xfs_quota(8) for more information.


Procedure 17.2. Creating the Quota Database Files

1. Create the quota files on the file system using the following command:

# quotacheck -cug /file system

2. Generate the table of current disk usage per file system using the following command:

# quotacheck -avug

Following are the options used to create quota files:

c
Specifies that the quota files should be created for each file system with quotas enabled.

u
Checks for user quotas.

g
Checks for group quotas. If only -g is specified, only the group quota file is created.

If neither the -u nor the -g option is specified, only the user quota file is created.

The following options are used to generate the table of current disk usage:

a
Check all quota-enabled, locally-mounted file systems

v
Display verbose status information as the quota check proceeds

u
Check user disk quota information

g
Check group disk quota information

After quotacheck has finished running, the quota files corresponding to the enabled quotas (either user
or group or both) are populated with data for each quota-enabled locally-mounted file system such as
/home.

17.1.4. Assigning Quotas per User


The last step is assigning the disk quotas with the edquota command.

Prerequisite

The user must exist prior to setting the user quota.

Procedure 17.3. Assigning Quotas per User


1. To assign the quota for a user, use the following command:

# edquota username

Replace username with the user to which you want to assign the quotas.

2. To verify that the quota for the user has been set, use the following command:

# quota username

Example 17.3. Assigning Quotas to a user

For example, if a quota is enabled in /etc/fstab for the /home partition
(/dev/VolGroup00/LogVol02 in the following example) and the command edquota testuser
is executed, the following is shown in the editor configured as the default for the system:

Disk quotas for user testuser (uid 501):
  Filesystem                blocks  soft  hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440436     0     0   37418     0     0

NOTE

The text editor defined by the EDITOR environment variable is used by edquota. To
change the editor, set the EDITOR environment variable in your ~/.bash_profile file
to the full path of the editor of your choice.

The first column is the name of the file system that has a quota enabled for it. The second column shows
how many blocks the user is currently using. The next two columns are used to set soft and hard block
limits for the user on the file system. The inodes column shows how many inodes the user is currently
using. The last two columns are used to set the soft and hard inode limits for the user on the file system.

The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once
this limit is reached, no further disk space can be used.

The soft block limit defines the maximum amount of disk space that can be used. However, unlike the
hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace
period. The grace period can be expressed in seconds, minutes, hours, days, weeks, or months.

If any of the values are set to 0, that limit is not set. In the text editor, change the desired limits.

Example 17.4. Change Desired Limits

For example:

Disk quotas for user testuser (uid 501):
  Filesystem                blocks    soft    hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440436  500000  550000   37418     0     0

To verify that the quota for the user has been set, use the command:

# quota testuser
Disk quotas for user testuser (uid 501):
  Filesystem  blocks  quota  limit  grace  files  quota  limit  grace
  /dev/sdb     1000*   1000   1000             0      0      0

17.1.5. Assigning Quotas per Group


Quotas can also be assigned on a per-group basis.

Prerequisite

The group must exist prior to setting the group quota.

Procedure 17.4. Assigning Quotas per Group

1. To set a group quota, use the following command:

# edquota -g groupname

2. To verify that the group quota is set, use the following command:

# quota -g groupname

Example 17.5. Assigning quotas to group

For example, to set a group quota for the devel group, use the command:

# edquota -g devel

This command displays the existing quota for the group in the text editor:

Disk quotas for group devel (gid 505):
  Filesystem                blocks  soft  hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440400     0     0   37418     0     0

Modify the limits, then save the file.

To verify that the group quota has been set, use the command:

# quota -g devel


17.1.6. Setting the Grace Period for Soft Limits


If a given quota has soft limits, you can edit the grace period (i.e. the amount of time a soft limit can be
exceeded) with the following command:

# edquota -t

This command works on quotas for inodes or blocks, for either users or groups.
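When the command is run, the default text editor opens with content along the following lines
(illustrative output; the exact layout depends on the version of the quota tools):

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem                 Block grace period  Inode grace period
  /dev/VolGroup00/LogVol02        7days               7days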

IMPORTANT

While other edquota commands operate on quotas for a particular user or group, the -t
option operates on every file system with quotas enabled.

17.2. MANAGING DISK QUOTAS


If quotas are implemented, they need some maintenance, mostly in the form of watching to see whether the
quotas are exceeded and making sure the quotas are accurate.

If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has
a few choices to make depending on what type of users they are and how much disk space impacts their
work. The administrator can either help the user determine how to use less disk space or increase the
user's disk quota.

17.2.1. Enabling and Disabling


It is possible to disable quotas without setting them to 0. To turn all user and group quotas off, use the
following command:

# quotaoff -vaug

If neither the -u nor the -g option is specified, only the user quotas are disabled. If only -g is specified,
only group quotas are disabled. The -v switch causes verbose status information to display as the
command executes.

To enable user and group quotas again, use the following command:

# quotaon

To enable user and group quotas for all file systems, use the following command:

# quotaon -vaug

If neither the -u nor the -g option is specified, only the user quotas are enabled. If only -g is specified,
only group quotas are enabled.

To enable quotas for a specific file system, such as /home, use the following command:

# quotaon -vug /home


NOTE

The quotaon command is not always needed for XFS because it is performed
automatically at mount time. Refer to the man page quotaon(8) for more information.

17.2.2. Reporting on Disk Quotas


Creating a disk usage report entails running the repquota utility.

Example 17.6. Output of the repquota Command

For example, the command repquota /home produces this output:

*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02
Block grace time: 7days; Inode grace time: 7days
                          Block limits                    File limits
User            used    soft    hard  grace      used  soft  hard  grace
------------------------------------------------------------------------
root      --      36       0       0                4     0     0
kristin   --     540       0       0              125     0     0
testuser  --  440400  500000  550000            37418     0     0

To view the disk usage report for all (option -a) quota-enabled file systems, use the command:

# repquota -a

While the report is easy to read, a few points should be explained. The -- displayed after each user is a
quick way to determine whether the block or inode limits have been exceeded. If either soft limit is
exceeded, a + appears in place of the corresponding -; the first - represents the block limit, and the
second represents the inode limit.

The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time
specification equal to the amount of time remaining on the grace period. If the grace period has expired,
none appears in its place.

17.2.3. Keeping Quotas Accurate


When a file system fails to unmount cleanly, for example due to a system crash, it is necessary to run
the following command:

# quotacheck

However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe
methods for periodically running quotacheck include:

Ensuring quotacheck runs on next reboot

NOTE

This method works best for (busy) multiuser systems which are periodically rebooted.


Save a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory or schedule one
using the following command:

# crontab -e

The script, or the scheduled cron job, should contain the touch /forcequotacheck command. This creates an
empty forcequotacheck file in the root directory, which the system init script looks for at boot time.
If it is found, the init script runs quotacheck. Afterward, the init script removes the
/forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that
quotacheck is run during the next reboot.
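A minimal sketch of such a script, saved for example as /etc/cron.weekly/forcequotacheck (the file
name is arbitrary), could look like this:

#!/bin/sh
# Request a quotacheck on the next reboot by creating the flag file
# that the init scripts look for at boot time.
touch /forcequotacheck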

For more information about cron, see man cron.

Running quotacheck in single user mode


An alternative way to safely run quotacheck is to boot the system into single-user mode to prevent
the possibility of data corruption in quota files and run the following commands:

# quotaoff -vug /file_system

# quotacheck -vug /file_system

# quotaon -vug /file_system

Running quotacheck on a running system


If necessary, it is possible to run quotacheck on a machine during a time when no users are logged
in, and thus have no open files on the file system being checked. Run the command quotacheck -vug
file_system; this command will fail if quotacheck cannot remount the given file_system as
read-only. Note that, following the check, the file system will be remounted read-write.


WARNING

Running quotacheck on a live file system mounted read-write is not recommended due to the
possibility of quota file corruption.

See man cron for more information about configuring cron.

17.3. DISK QUOTA REFERENCES


For more information on disk quotas, refer to the man pages of the following commands:

quotacheck

edquota

repquota


quota

quotaon

quotaoff


CHAPTER 18. REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID)

The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to
accomplish performance or redundancy goals not attainable with one large and expensive drive. This
array of drives appears to the computer as a single logical storage unit or drive.

RAID allows information to be spread across several disks. RAID uses techniques such as disk striping
(RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve
redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk
crashes.

RAID distributes data across each drive in the array by breaking it down into consistently-sized chunks
(commonly 256 KB or 512 KB, although other values are acceptable). Each chunk is then written to a hard
drive in the RAID array according to the RAID level employed. When the data is read, the process is
reversed, giving the illusion that the multiple drives in the array are actually one large drive.

System Administrators and others who manage large amounts of data would benefit from using RAID
technology. Primary reasons to deploy RAID include:

Enhances speed

Increases storage capacity using a single virtual disk

Minimizes data loss from disk failure

18.1. RAID TYPES


There are three possible RAID approaches: Firmware RAID, Hardware RAID, and Software RAID.

Firmware RAID
Firmware RAID, also known as ATARAID, is a type of software RAID where the RAID sets can be
configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the
BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats
to mark the RAID set members. The Intel Matrix RAID is a good example of a firmware RAID system.

Hardware RAID
The hardware-based array manages the RAID subsystem independently from the host. It presents a
single disk per RAID array to the host.

A Hardware RAID device may be internal or external to the system, with internal devices commonly
consisting of a specialized controller card that handles the RAID tasks transparently to the operating
system and with external devices commonly connecting to the system via SCSI, Fibre Channel, iSCSI,
InfiniBand, or other high speed network interconnect and presenting logical volumes to the system.

RAID controller cards function like a SCSI controller to the operating system, and handle all the actual
drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI
controller) and then adds them to the RAID controller's configuration. The operating system will not be
able to tell the difference.

Software RAID


Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the
cheapest possible solution, as expensive disk controller cards or hot-swap chassis [2] are not required.
Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs,
Software RAID also generally outperforms Hardware RAID.

The Linux kernel contains a multi-disk (MD) driver that allows the RAID solution to be completely
hardware independent. The performance of a software-based array depends on the server CPU
performance and load.

Key features of the Linux software RAID stack:

Multithreaded design

Portability of arrays between Linux machines without reconstruction

Backgrounded array reconstruction using idle system resources

Hot-swappable drive support

Automatic CPU detection to take advantage of certain CPU features such as streaming SIMD
support

Automatic correction of bad sectors on disks in an array

Regular consistency checks of RAID data to ensure the health of the array

Proactive monitoring of arrays with email alerts sent to a designated email address on important
events

Write-intent bitmaps which drastically increase the speed of resync events by allowing the kernel
to know precisely which portions of a disk need to be resynced instead of having to resync the
entire array

Resync checkpointing so that if you reboot your computer during a resync, at startup the resync
will pick up where it left off and not start all over again

The ability to change parameters of the array after installation. For example, you can grow a 4-
disk RAID5 array to a 5-disk RAID5 array when you have a new disk to add. This grow operation
is done live and does not require you to reinstall on the new array.
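As a hedged illustration of this last point, growing a hypothetical 4-disk RAID5 array /dev/md0 with a
new disk /dev/vde could look like this with the mdadm tool (the array and device names are examples,
not taken from this guide):

# mdadm /dev/md0 --add /dev/vde
# mdadm --grow /dev/md0 --raid-devices=5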

18.2. RAID LEVELS AND LINEAR SUPPORT


RAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. These RAID types are
defined as follows:

Level 0
RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This
means the data being written to the array is broken down into strips and written across the member
disks of the array, allowing high I/O performance at low inherent cost but providing no redundancy.

Many RAID level 0 implementations will only stripe the data across the member devices up to the size
of the smallest device in the array. This means that if you have multiple devices with slightly different
sizes, each device will get treated as though it is the same size as the smallest drive. Therefore, the
common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a
Hardware RAID or the capacity of smallest member partition in a Software RAID multiplied by the
number of disks or partitions in the array.


Level 1
RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides
redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on
each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1
operates with two or more disks, and provides very good data reliability and improves performance
for read-intensive applications but at a relatively high cost. [3]

The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in
a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the
highest possible among all RAID types, with the array being able to operate with only a single disk
present.

Level 4

Level 4 uses parity [4] concentrated on a single disk drive to protect data. Because the dedicated
parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is
seldom used without accompanying technologies such as write-back caching, or in specific
circumstances where the system administrator is intentionally designing the software RAID device
with this bottleneck in mind (such as an array that will have little to no write transactions once the
array is populated with data). RAID level 4 is so rarely used that it is not available as an option in
Anaconda. However, it could be created manually by the user if truly needed.

The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member
partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array will
always be asymmetrical, meaning reads will outperform writes. This is because writes consume extra
CPU and main memory bandwidth when generating parity, and then also consume extra bus
bandwidth when writing the actual data to disks because you are writing not only the data, but also the
parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a
result, reads generate less traffic to the drives and across the busses of the computer for the same
amount of data transfer under normal operating conditions.

Level 5
This is the most common type of RAID. By distributing parity across all of an array's member disk
drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance
bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is
usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have
a sufficiently large number of member devices in a software RAID5 array such that the combined
aggregate data transfer speed across all devices is high enough, then this bottleneck can start to
come into play.

As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes.
The storage capacity of RAID level 5 is calculated the same way as with level 4.

Level 6
This is a common level of RAID when data redundancy and preservation, and not performance, are
the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a
complex parity scheme to be able to recover from the loss of any two drives in the array. This
complex parity scheme creates a significantly higher CPU burden on software RAID devices and also
imposes an increased burden during write transactions. As such, level 6 is considerably more
asymmetrical in performance than levels 4 and 5.

The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you
must subtract 2 devices (instead of 1) from the device count for the extra parity storage space.
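For instance, with assumed member sizes, a level 6 array built from six 1 TB devices provides (6 − 2) × 1 TB = 4 TB of usable capacity, whereas the same devices in a level 5 array would provide (6 − 1) × 1 TB = 5 TB.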


Level 10
This RAID level attempts to combine the performance advantages of level 0 with the redundancy of
level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices.
With level 10, it is possible to create a 3-drive array configured to store only 2 copies of each piece of
data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead
of only equal to the smallest device (like it would be with a 3-device, level 1 array).

The number of options available when creating level 10 arrays as well as the complexity of selecting
the right options for a specific use case make it impractical to create during installation. It is possible
to create one manually using the command line mdadm tool. For more information on the options and
their respective performance trade-offs, see man md.

Linear RAID
Linear RAID is a grouping of drives to create a larger virtual drive. In linear RAID, the chunks are
allocated sequentially from one member drive, going to the next drive only when the first is completely
filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be
split between member drives. Linear RAID also offers no redundancy and decreases reliability; if any one
member drive fails, the entire array cannot be used. The capacity is the total of all member disks.

18.3. LINUX RAID SUBSYSTEMS


RAID in Linux is composed of the following subsystems:

Linux Hardware RAID Controller Drivers


Hardware RAID controllers have no specific RAID subsystem in Linux. Because they use special RAID
chipsets, hardware RAID controllers come with their own drivers; these drivers allow the system to detect
the RAID sets as regular disks.

mdraid
The mdraid subsystem was designed as a software RAID solution for Linux; it is also the preferred
solution for software RAID under Linux. This subsystem uses its own metadata format, generally
referred to as native mdraid metadata.

mdraid also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 7
uses mdraid with external metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid sets are
configured and controlled through the mdadm utility.

dmraid
Device-mapper RAID or dmraid refers to device-mapper kernel code that offers the mechanism to piece
disks together into a RAID set. This same kernel code does not provide any RAID configuration
mechanism.

dmraid is configured entirely in user-space, making it easy to support various on-disk metadata formats.
As such, dmraid is used on a wide variety of firmware RAID implementations. dmraid also supports
Intel firmware RAID, although Red Hat Enterprise Linux 7 uses mdraid to access Intel firmware RAID
sets.

18.4. RAID SUPPORT IN THE ANACONDA INSTALLER


The Anaconda installer automatically detects any hardware and firmware RAID sets on a system,
making them available for installation. Anaconda also supports software RAID using mdraid, and can
recognize existing mdraid sets.

Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow
partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, create
a partition on it spanning the entire disk, and use that partition as the RAID set member.

When the root file system uses a RAID set, Anaconda adds special kernel command-line options to the
bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root
file system.

For instructions on configuring RAID during installation, see the Red Hat Enterprise Linux 7 Installation
Guide.

18.5. CONVERTING ROOT DISK TO RAID1 AFTER INSTALLATION


If you need to convert a non-raided root disk to a RAID1 mirror after installing Red Hat
Enterprise Linux 7, see the instructions in the following Red Hat Knowledgebase article: How do I
convert my root disk to RAID1 after installation of Red Hat Enterprise Linux 7?

On the PowerPC (PPC) architecture, take the following additional steps:

1. Copy the contents of the PowerPC Reference Platform (PReP) boot partition from /dev/sda1
to /dev/sdb1:

# dd if=/dev/sda1 of=/dev/sdb1

2. Update the Prep and boot flag on the first partition on both disks:

# parted /dev/sda set 1 prep on
# parted /dev/sda set 1 boot on
# parted /dev/sdb set 1 prep on
# parted /dev/sdb set 1 boot on

NOTE

Running the grub2-install /dev/sda command does not work on a PowerPC machine and returns an error, but the system boots as expected.

18.6. CONFIGURING RAID SETS


Most RAID sets are configured during creation, typically through the firmware menu or from the installer.
In some cases, you may need to create or modify RAID sets after installing the system, preferably
without having to reboot the machine and enter the firmware menu to do so.

Some hardware RAID controllers allow you to configure RAID sets on the fly or even define completely
new sets after adding extra disks. This requires the use of driver-specific utilities, as there is no standard
API for this. For more information, see your hardware RAID controller's driver documentation.

mdadm


The mdadm command-line tool is used to manage software RAID in Linux, i.e. mdraid. For information
on the different mdadm modes and options, see man mdadm. The man page also contains useful
examples for common operations like creating, monitoring, and assembling software RAID arrays.
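
As a brief, hedged sketch (the device names and RAID level below are assumptions chosen for the example), creating and inspecting a two-disk mirror might look like this:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # member partitions are assumed names
# mdadm --detail /dev/md0
# cat /proc/mdstat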

dmraid
As the name suggests, dmraid is used to manage device-mapper RAID sets. The dmraid tool finds
ATARAID devices using multiple metadata format handlers, each supporting various formats. For a
complete list of supported formats, run dmraid -l.

As mentioned earlier in Section 18.3, “Linux RAID Subsystems”, the dmraid tool cannot configure RAID
sets after creation. For more information about using dmraid, see man dmraid.

18.7. CREATING ADVANCED RAID DEVICES


In some cases, you may wish to install the operating system on an array that can't be created after the
installation completes. Usually, this means setting up the /boot or root file system arrays on a complex
RAID device; in such cases, you may need to use array options that are not supported by Anaconda. To
work around this, perform the following procedure:

Procedure 18.1. Creating Advanced RAID Devices

1. Insert the install disk.

2. During the initial boot up, select Rescue Mode instead of Install or Upgrade. When the system
fully boots into Rescue mode, the user will be presented with a command line terminal.

3. From this terminal, use parted to create RAID partitions on the target hard drives. Then, use
mdadm to manually create RAID arrays from those partitions using any and all settings and options
available. For more information on how to do these, see Chapter 13, Partitions, man parted,
and man mdadm. A brief command sketch is shown after this procedure.

4. Once the arrays are created, you can optionally create file systems on the arrays as well.

5. Reboot the computer and this time select Install or Upgrade to install as normal. As Anaconda
searches the disks in the system, it will find the pre-existing RAID devices.

6. When asked about how to use the disks in the system, select Custom Layout and click Next. In
the device listing, the pre-existing MD RAID devices will be listed.

7. Select a RAID device, click Edit and configure its mount point and (optionally) the type of file
system it should use (if you did not create one earlier) then click Done. Anaconda will perform
the install to this pre-existing RAID device, preserving the custom options you selected when you
created it in Rescue Mode.
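
As a minimal, hedged sketch of step 3 (the device names, RAID level, and file system below are assumptions, not requirements), the commands run from the Rescue Mode terminal might look like this:

# parted /dev/sda mklabel gpt                  # /dev/sda and /dev/sdb are assumed target disks
# parted /dev/sda mkpart primary 1MiB 100%
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary 1MiB 100%
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mkfs.xfs /dev/md0                            # optional, per step 4

Adjust the RAID level, member devices, and mdadm options to match the layout you actually need.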

NOTE

The limited Rescue Mode of the installer does not include man pages. Both the man
mdadm and man md contain useful information for creating custom RAID arrays, and may
be needed throughout the workaround. As such, it can be helpful to either have access to
a machine with these man pages present, or to print them out prior to booting into Rescue
Mode and creating your custom arrays.


[2] A hot-swap chassis allows you to remove a hard drive without having to power-down your system.

[3] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array, which
provides data reliability but in a much less space-efficient manner than parity-based RAID levels such as level 5.
However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume
considerably more CPU power in order to generate the parity while RAID level 1 simply writes the same data more
than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the
parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are
consistently taxed with operations other than RAID activities.

[4] Parity information is calculated based on the contents of the rest of the member disks in the array. This
information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then
be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has
been replaced.


CHAPTER 19. USING THE MOUNT COMMAND


On Linux, UNIX, and similar operating systems, file systems on different partitions and removable
devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount
point) in the directory tree, and then detached again. To attach or detach a file system, use the mount or
umount command respectively. This chapter describes the basic use of these commands, as well as
some advanced topics, such as moving a mount point or creating shared subtrees.

19.1. LISTING CURRENTLY MOUNTED FILE SYSTEMS


To display all currently attached file systems, use the following command with no additional arguments:

$ mount

This command displays the list of known mount points. Each line provides important information about
the device name, the file system type, the directory in which it is mounted, and relevant mount options in
the following form:

device on directory type type (options)

The findmnt utility, which lists mounted file systems in a tree-like form, is also available, starting with
Red Hat Enterprise Linux 6.1. To display all currently attached file systems, run the findmnt
command with no additional arguments:

$ findmnt

19.1.1. Specifying the File System Type


By default, the output of the mount command includes various virtual file systems such as sysfs and
tmpfs. To display only the devices with a certain file system type, provide the -t option:

$ mount -t type

Similarly, to display only the devices with a certain file system using the findmnt command:

$ findmnt -t type

For a list of common file system types, see Table 19.1, “Common File System Types”. For an example
usage, see Example 19.1, “Listing Currently Mounted ext4 File Systems”.

Example 19.1. Listing Currently Mounted ext4 File Systems

Usually, both / and /boot partitions are formatted to use ext4. To display only the mount points that
use this file system, use the following command:

$ mount -t ext4
/dev/sda2 on / type ext4 (rw)
/dev/sda1 on /boot type ext4 (rw)

To list such mount points using the findmnt command, type:


$ findmnt -t ext4
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2 ext4 rw,relatime,seclabel,barrier=1,data=ordered
/boot /dev/sda1 ext4 rw,relatime,seclabel,barrier=1,data=ordered

19.2. MOUNTING A FILE SYSTEM


To attach a certain file system, use the mount command in the following form:

$ mount [option…] device directory

The device can be identified by:

a full path to a block device: for example, /dev/sda3

a universally unique identifier (UUID): for example, UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb

a volume label: for example, LABEL=home
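
For instance, assuming that /mnt/data is an existing, empty directory (an assumption for this example) and reusing the identifiers from the list above, either of the following forms attaches the same file system:

# mount UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb /mnt/data   # /mnt/data is an assumed mount point
# mount LABEL=home /mnt/data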

Note that while a file system is mounted, the original content of the directory is not accessible.

IMPORTANT

Linux does not prevent a user from mounting a file system to a directory with a file system
already attached to it. To determine whether a particular directory serves as a mount
point, run the findmnt utility with the directory as its argument and verify the exit code:

findmnt directory; echo $?

If no file system is attached to the directory, the given command returns 1.

When you run the mount command without all of the required information, that is, without the device name, the
target directory, or the file system type, the mount utility reads the contents of the /etc/fstab file to check whether
the given file system is listed. The /etc/fstab file contains a list of device names and the directories in
which the selected file systems are to be mounted, as well as the file system type and mount options.
Therefore, when mounting a file system that is specified in /etc/fstab, you can choose one of the
following options:

mount [option…] directory


mount [option…] device
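
For example, with a hypothetical /etc/fstab entry such as the following (the UUID and mount point are assumptions for the example):

# hypothetical entry; the UUID and /home mount point are examples only
UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb  /home  ext4  defaults  1 2

the file system can be attached simply by running mount /home, and mount fills in the remaining details from the entry.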

Note that permissions are required to mount the file systems unless the command is run as root (see
Section 19.2.2, “Specifying the Mount Options”).


NOTE

To determine the UUID and—if the device uses it—the label of a particular device, use the
blkid command in the following form:

blkid device

For example, to display information about /dev/sda3:

# blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="34795a28-ca6d-4fd8-a347-
73671d0c19cb" TYPE="ext3"

19.2.1. Specifying the File System Type


In most cases, mount detects the file system automatically. However, there are certain file systems, such
as NFS (Network File System) or CIFS (Common Internet File System), that are not recognized, and
need to be specified manually. To specify the file system type, use the mount command in the following
form:

$ mount -t type device directory
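
For example, to attach an NFS export (the server name, export path, and mount point below are assumptions), the type must be given explicitly:

# mount -t nfs server.example.com:/export/home /mnt/nfs   # server and paths are assumed names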

Table 19.1, “Common File System Types” provides a list of common file system types that can be used
with the mount command. For a complete list of all available file system types, see the section called
“Manual Page Documentation”.

Table 19.1. Common File System Types

Type Description

ext2 The ext2 file system.

ext3 The ext3 file system.

ext4 The ext4 file system.

btrfs The btrfs file system.

xfs The xfs file system.

iso9660 The ISO 9660 file system. It is commonly used by optical media, typically CDs.

jfs The JFS file system created by IBM.

nfs The NFS file system. It is commonly used to access files over the network.

nfs4 The NFSv4 file system. It is commonly used to access files over the network.

ntfs The NTFS file system. It is commonly used on machines that are running the Windows
operating system.

udf The UDF file system. It is commonly used by optical media, typically DVDs.

vfat The FAT file system. It is commonly used on machines that are running the Windows
operating system, and on certain digital media such as USB flash drives or floppy disks.

See Example 19.2, “Mounting a USB Flash Drive” for an example usage.

Example 19.2. Mounting a USB Flash Drive

Older USB flash drives often use the FAT file system. Assuming that such a drive uses the /dev/sdc1
device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the
following at a shell prompt as root:

~]# mount -t vfat /dev/sdc1 /media/flashdisk

19.2.2. Specifying the Mount Options


To specify additional mount options, use the command in the following form:

mount -o options device directory

When supplying multiple options, do not insert a space after a comma, or mount will incorrectly interpret
the values following the spaces as additional parameters.

Table 19.2, “Common Mount Options” provides a list of common mount options. For a complete list of all
available options, consult the relevant manual page as referred to in the section called “Manual Page
Documentation”.

Table 19.2. Common Mount Options

Option Description

async Allows the asynchronous input/output operations on the file system.

auto Allows the file system to be mounted automatically using the mount -a command.

defaults Provides an alias for async,auto,dev,exec,nouser,rw,suid.

exec Allows the execution of binary files on the particular file system.

loop Mounts an image as a loop device.

noauto Default behavior disallows the automatic mount of the file system using the mount -a
command.

noexec Disallows the execution of binary files on the particular file system.

nouser Disallows an ordinary user (that is, other than root) to mount and unmount the file
system.

remount Remounts the file system in case it is already mounted.

ro Mounts the file system for reading only.

rw Mounts the file system for both reading and writing.

user Allows an ordinary user (that is, other than root) to mount and unmount the file
system.

See Example 19.3, “Mounting an ISO Image” for an example usage.

Example 19.3. Mounting an ISO Image

An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that
the ISO image of the Fedora 14 installation disc is present in the current working directory and that
the /media/cdrom/ directory exists, mount the image to this directory by running the following
command:

# mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom

Note that ISO 9660 is by design a read-only file system.

19.2.3. Sharing Mounts


Occasionally, certain system administration tasks require access to the same file system from more than
one place in the directory tree (for example, when preparing a chroot environment). This is possible, and
Linux allows you to mount the same file system to as many directories as necessary. Additionally, the
mount command implements the --bind option that provides a means for duplicating certain mounts.
Its usage is as follows:

$ mount --bind old_directory new_directory

Although this command allows a user to access the file system from both places, it does not apply on the
file systems that are mounted within the original directory. To include these mounts as well, use the
following command:

$ mount --rbind old_directory new_directory


Additionally, to provide as much flexibility as possible, Red Hat Enterprise Linux 7 implements the
functionality known as shared subtrees. This feature allows the use of the following four mount types:

Shared Mount
A shared mount allows the creation of an exact replica of a given mount point. When a mount point is
marked as a shared mount, any mount within the original mount point is reflected in it, and vice
versa. To change the type of a mount point to a shared mount, type the following at a shell prompt:

$ mount --make-shared mount_point

Alternatively, to change the mount type for the selected mount point and all mount points under it:

$ mount --make-rshared mount_point

See Example 19.4, “Creating a Shared Mount Point” for an example usage.

Example 19.4. Creating a Shared Mount Point

There are two places where other file systems are commonly mounted: the /media/ directory for
removable media, and the /mnt/ directory for temporarily mounted file systems. By using a
shared mount, you can make these two directories share the same content. To do so, as root,
mark the /media/ directory as shared:

# mount --bind /media /media


# mount --make-shared /media

Create its duplicate in /mnt/ by using the following command:

# mount --bind /media /mnt

It is now possible to verify that a mount within /media/ also appears in /mnt/. For example, if
the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the
following commands:

# mount /dev/cdrom /media/cdrom


# ls /media/cdrom
EFI GPL isolinux LiveOS
# ls /mnt/cdrom
EFI GPL isolinux LiveOS

Similarly, it is possible to verify that any file system mounted in the /mnt/ directory is reflected in
/media/. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, type:

# mount /dev/sdc1 /mnt/flashdisk


# ls /media/flashdisk
en-US publican.cfg
# ls /mnt/flashdisk
en-US publican.cfg

Slave Mount


A slave mount allows the creation of a limited duplicate of a given mount point. When a mount point
is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount
within a slave mount is reflected in its original. To change the type of a mount point to a slave mount,
type the following at a shell prompt:

mount --make-slave mount_point

Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it by typing:

mount --make-rslave mount_point

See Example 19.5, “Creating a Slave Mount Point” for an example usage.

Example 19.5. Creating a Slave Mount Point

This example shows how to get the content of the /media/ directory to appear in /mnt/ as well,
but without any mounts in the /mnt/ directory being reflected in /media/. As root, first mark the
/media/ directory as shared:

~]# mount --bind /media /media


~]# mount --make-shared /media

Then create its duplicate in /mnt/, but mark it as "slave":

~]# mount --bind /media /mnt


~]# mount --make-slave /mnt

Now verify that a mount within /media/ also appears in /mnt/. For example, if the CD-ROM
drive contains non-empty media and the /media/cdrom/ directory exists, run the following
commands:

~]# mount /dev/cdrom /media/cdrom


~]# ls /media/cdrom
EFI GPL isolinux LiveOS
~]# ls /mnt/cdrom
EFI GPL isolinux LiveOS

Also verify that file systems mounted in the /mnt/ directory are not reflected in /media/. For
instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the
/mnt/flashdisk/ directory is present, type:

~]# mount /dev/sdc1 /mnt/flashdisk


~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US publican.cfg

Private Mount
A private mount is the default type of mount, and unlike a shared or slave mount, it does not receive
or forward any propagation events. To explicitly mark a mount point as a private mount, type the
following at a shell prompt:


mount --make-private mount_point

Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it:

mount --make-rprivate mount_point

See Example 19.6, “Creating a Private Mount Point” for an example usage.

Example 19.6. Creating a Private Mount Point

Taking into account the scenario in Example 19.4, “Creating a Shared Mount Point”, assume that
a shared mount point has been previously created by using the following commands as root:

~]# mount --bind /media /media


~]# mount --make-shared /media
~]# mount --bind /media /mnt

To mark the /mnt/ directory as private, type:

~]# mount --make-private /mnt

It is now possible to verify that none of the mounts within /media/ appears in /mnt/. For
example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory
exists, run the following commands:

~]# mount /dev/cdrom /media/cdrom


~]# ls /media/cdrom
EFI GPL isolinux LiveOS
~]# ls /mnt/cdrom
~]#

It is also possible to verify that file systems mounted in the /mnt/ directory are not reflected in
/media/. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, type:

~]# mount /dev/sdc1 /mnt/flashdisk


~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US publican.cfg

Unbindable Mount
In order to prevent a given mount point from being duplicated whatsoever, an unbindable mount is
used. To change the type of a mount point to an unbindable mount, type the following at a shell
prompt:

mount --make-unbindable mount_point

Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it:


mount --make-runbindable mount_point

See Example 19.7, “Creating an Unbindable Mount Point” for an example usage.

Example 19.7. Creating an Unbindable Mount Point

To prevent the /media/ directory from being shared, as root:

# mount --bind /media /media


# mount --make-unbindable /media

This way, any subsequent attempt to make a duplicate of this mount fails with an error:

# mount --bind /media /mnt


mount: wrong fs type, bad option, bad superblock on /media,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

19.2.4. Moving a Mount Point


To change the directory in which a file system is mounted, use the following command:

# mount --move old_directory new_directory

See Example 19.8, “Moving an Existing NFS Mount Point” for an example usage.

Example 19.8. Moving an Existing NFS Mount Point

An NFS storage contains user directories and is already mounted in /mnt/userdirs/. As root,
move this mount point to /home by using the following command:

# mount --move /mnt/userdirs /home

To verify the mount point has been moved, list the content of both directories:

# ls /mnt/userdirs
# ls /home
jill joe

19.2.5. Setting Read-only Permissions for root

Sometimes, you need to mount the root file system with read-only permissions. Example use cases
include enhancing security or ensuring data integrity after an unexpected system power-off.

19.2.5.1. Configuring root to Mount with Read-only Permissions on Boot

1. In the /etc/sysconfig/readonly-root file, change READONLY to yes:


# Set to 'yes' to mount the file systems as read-only.
READONLY=yes
[output truncated]

2. Change defaults to ro in the root entry (/) in the /etc/fstab file:

/dev/mapper/luks-c376919e... / ext4 ro,x-systemd.device-timeout=0 1 1

3. Add ro to the GRUB_CMDLINE_LINUX directive in the /etc/default/grub file and ensure that it does not contain rw:

GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root
rd.lvm.lv=rhel/swap rhgb quiet ro"

4. Recreate the GRUB2 configuration file:

# grub2-mkconfig -o /boot/grub2/grub.cfg

5. If you need to add files and directories to be mounted with write permissions in the tmpfs file
system, create a text file in the /etc/rwtab.d/ directory and put the configuration there. For
example, to mount /etc/example/file with write permissions, add this line to the
/etc/rwtab.d/example file:

files /etc/example/file

IMPORTANT

Changes made to files and directories in tmpfs do not persist across boots.

See Section 19.2.5.3, “Files and Directories That Retain Write Permissions” for more information
on this step.

6. Reboot the system.

19.2.5.2. Remounting root Instantly

If root (/) was mounted with read-only permissions on system boot, you can remount it with write
permissions:

# mount -o remount,rw /

This can be particularly useful when / is incorrectly mounted with read-only permissions.

To remount / with read-only permissions again, run:

# mount -o remount,ro /


NOTE

This command mounts the whole / with read-only permissions. A better approach is to
retain write permissions for certain files and directories by copying them into RAM, as
described in Section 19.2.5.1, “Configuring root to Mount with Read-only Permissions on
Boot”.

19.2.5.3. Files and Directories That Retain Write Permissions

For the system to function properly, some files and directories need to retain write permissions. With root
in read-only mode, they are mounted in RAM in the tmpfs temporary file system. The default set of
such files and directories is read from the /etc/rwtab file, which contains:

dirs /var/cache/man
dirs /var/gdm
[output truncated]
empty /tmp
empty /var/cache/foomatic
[output truncated]
files /etc/adjtime
files /etc/ntp.conf
[output truncated]

Entries in the /etc/rwtab file follow this format:

how the file or directory is copied to tmpfs        path to the file or directory

A file or directory can be copied to tmpfs in the following three ways:

empty path: An empty path is copied to tmpfs. Example: empty /tmp

dirs path: A directory tree is copied to tmpfs, empty. Example: dirs /var/run

files path: A file or a directory tree is copied to tmpfs intact. Example: files
/etc/resolv.conf

The same format applies when adding custom paths to /etc/rwtab.d/.

19.3. UNMOUNTING A FILE SYSTEM


To detach a previously mounted file system, use either of the following variants of the umount
command:

$ umount directory
$ umount device

Note that unless this is performed while logged in as root, the correct permissions must be available to
unmount the file system. For more information, see Section 19.2.2, “Specifying the Mount Options”. See
Example 19.9, “Unmounting a CD” for an example usage.


IMPORTANT

When a file system is in use (for example, when a process is reading a file on this file
system, or when it is used by the kernel), running the umount command fails with an
error. To determine which processes are accessing the file system, use the fuser
command in the following form:

$ fuser -m directory

For example, to list the processes that are accessing a file system mounted to the
/media/cdrom/ directory:

$ fuser -m /media/cdrom
/media/cdrom: 1793 2013 2022 2435 10532c 10672c
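
If the file system must be detached regardless, one possible follow-up (use it with care, as it terminates the listed processes) is to add the -k option to fuser:

# fuser -km /media/cdrom   # kills the processes accessing the mount point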

Example 19.9. Unmounting a CD

To unmount a CD that was previously mounted to the /media/cdrom/ directory, use the following
command:

$ umount /media/cdrom

19.4. MOUNT COMMAND REFERENCES


The following resources provide an in-depth documentation on the subject.

Manual Page Documentation


man 8 mount: The manual page for the mount command that provides a full documentation on
its usage.

man 8 umount: The manual page for the umount command that provides a full documentation
on its usage.

man 8 findmnt: The manual page for the findmnt command that provides a full
documentation on its usage.

man 5 fstab: The manual page providing a thorough description of the /etc/fstab file
format.

Useful Websites
Shared subtrees — An LWN article covering the concept of shared subtrees.


CHAPTER 20. THE VOLUME_KEY FUNCTION


The volume_key function provides two tools, libvolume_key and volume_key. libvolume_key is a library
for manipulating storage volume encryption keys and storing them separately from volumes.
volume_key is an associated command line tool used to extract keys and passphrases in order to
restore access to an encrypted hard drive.

This is useful when the primary user forgets their keys and passwords, after an employee leaves
abruptly, or in order to extract data after a hardware or software failure corrupts the header of the
encrypted volume. In a corporate setting, the IT help desk can use volume_key to back up the
encryption keys before handing over the computer to the end user.

Currently, volume_key only supports the LUKS volume encryption format.

NOTE

volume_key is not included in a standard install of Red Hat Enterprise Linux 7 server.
For information on installing it, refer to
http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases.

20.1. VOLUME_KEY COMMANDS


The format for volume_key is:

volume_key [OPTION]... OPERAND

The operands and mode of operation for volume_key are determined by specifying one of the following
options:

--save
This command expects the operand volume [packet]. If a packet is provided then volume_key will
extract the keys and passphrases from it. If packet is not provided, then volume_key will extract the
keys and passphrases from the volume, prompting the user where necessary. These keys and
passphrases will then be stored in one or more output packets.

--restore
This command expects the operands volume packet. It then opens the volume and uses the keys and
passphrases in the packet to make the volume accessible again, prompting the user where
necessary, such as allowing the user to enter a new passphrase, for example.

--setup-volume
This command expects the operands volume packet name. It then opens the volume and uses the
keys and passphrases in the packet to set up the volume for use of the decrypted data as name.

Name is the name of a dm-crypt volume. This operation makes the decrypted volume available as
/dev/mapper/name.

This operation does not permanently alter the volume by adding a new passphrase, for example. The
user can access and modify the decrypted volume, modifying volume in the process.

--reencrypt, --secrets, and --dump


These three commands perform similar functions with varying output methods. They each require the
operand packet, and each opens the packet, decrypting it where necessary. --reencrypt then
stores the information in one or more new output packets. --secrets outputs the keys and
passphrases contained in the packet. --dump outputs the content of the packet, though the keys and
passphrases are not output by default. This can be changed by appending --with-secrets to the
command. It is also possible to only dump the unencrypted parts of the packet, if any, by using the --
unencrypted command. This does not require any passphrase or private key access.

Each of these can be appended with the following options:

-o, --output packet


This command writes the default key or passphrase to the packet. The default key or passphrase
depends on the volume format. Ensure it is one that is unlikely to expire, and will allow --restore to
restore access to the volume.

--output-format format
This command uses the specified format for all output packets. Currently, format can be one of the
following:

asymmetric: uses CMS to encrypt the whole packet, and requires a certificate

asymmetric_wrap_secret_only: wraps only the secret, or keys and passphrases, and requires a certificate

passphrase: uses GPG to encrypt the whole packet, and requires a passphrase

--create-random-passphrase packet
This command generates a random alphanumeric passphrase, adds it to the volume (without
affecting other passphrases), and then stores this random passphrase into the packet.

20.2. USING VOLUME_KEY AS AN INDIVIDUAL USER


As an individual user, volume_key can be used to save encryption keys by using the following
procedure.

NOTE

For all examples in this file, /path/to/volume is a LUKS device, not the plaintext
device contained within. blkid -s type /path/to/volume should report
type="crypto_LUKS".

Procedure 20.1. Using volume_key Stand-alone

1. Run:

volume_key --save /path/to/volume -o escrow-packet

A prompt will then appear requiring an escrow packet passphrase to protect the key.

2. Save the generated escrow-packet file, ensuring that the passphrase is not forgotten.


If the volume passphrase is forgotten, use the saved escrow packet to restore access to the data.

Procedure 20.2. Restore Access to Data with Escrow Packet

1. Boot the system in an environment where volume_key can be run and the escrow packet is
available (a rescue mode, for example).

2. Run:

volume_key --restore /path/to/volume escrow-packet

A prompt will appear for the escrow packet passphrase that was used when creating the escrow
packet, and for the new passphrase for the volume.

3. Mount the volume using the chosen passphrase.

To free up the passphrase slot in the LUKS header of the encrypted volume, remove the old, forgotten
passphrase by using the command cryptsetup luksKillSlot.

20.3. USING VOLUME_KEY IN A LARGER ORGANIZATION


In a larger organization, using a single password known by every system administrator and keeping
track of a separate password for each system is impractical and a security risk. To counter this,
volume_key can use asymmetric cryptography to minimize the number of people who know the
password required to access encrypted data on any computer.

This section will cover the procedures required for preparation before saving encryption keys, how to
save encryption keys, restoring access to a volume, and setting up emergency passphrases.

20.3.1. Preparation for Saving Encryption Keys


In order to begin saving encryption keys, some preparation is required.

Procedure 20.3. Preparation

1. Create an X509 certificate/private key pair.

2. Designate trusted users who are trusted not to compromise the private key. These users will be
able to decrypt the escrow packets.

3. Choose which systems will be used to decrypt the escrow packets. On these systems, set up an
NSS database that contains the private key.

If the private key was not created in an NSS database, follow these steps:

Store the certificate and private key in a PKCS#12 file.

Run:

certutil -d /the/nss/directory -N

At this point it is possible to choose an NSS database password. Each NSS database can
have a different password so the designated users do not need to share a single password if
a separate NSS database is used by each user.


Run:

pk12util -d /the/nss/directory -i the-pkcs12-file

4. Distribute the certificate to anyone installing systems or saving keys on existing systems.

5. For the saved packets, prepare storage that allows them to be looked up by machine and
volume. For example, this can be a simple directory with one subdirectory per machine, or a
database used for other system management tasks as well.

20.3.2. Saving Encryption Keys


After completing the required preparation (see Section 20.3.1, “Preparation for Saving Encryption Keys”)
it is now possible to save the encryption keys using the following procedure.

NOTE

For all examples in this file, /path/to/volume is a LUKS device, not the plaintext
device contained within; blkid -s type /path/to/volume should report
type="crypto_LUKS".

Procedure 20.4. Saving Encryption Keys

1. Run:

volume_key --save /path/to/volume -c /path/to/cert escrow-packet

2. Save the generated escrow-packet file in the prepared storage, associating it with the system
and the volume.

These steps can be performed manually, or scripted as part of system installation.

20.3.3. Restoring Access to a Volume


After the encryption keys have been saved (see Section 20.3.1, “Preparation for Saving Encryption Keys”
and Section 20.3.2, “Saving Encryption Keys”), access can be restored to a volume where needed.

Procedure 20.5. Restoring Access to a Volume

1. Get the escrow packet for the volume from the packet storage and send it to one of the
designated users for decryption.

2. The designated user runs:

volume_key --reencrypt -d /the/nss/directory escrow-packet-in -o escrow-packet-out

After providing the NSS database password, the designated user chooses a passphrase for
encrypting escrow-packet-out. This passphrase can be different every time and only
protects the encryption keys while they are moved from the designated user to the target
system.

3. Obtain the escrow-packet-out file and the passphrase from the designated user.


4. Boot the target system in an environment that can run volume_key and have the escrow-
packet-out file available, such as in a rescue mode.

5. Run:

volume_key --restore /path/to/volume escrow-packet-out

A prompt will appear for the packet passphrase chosen by the designated user, and for a new
passphrase for the volume.

6. Mount the volume using the chosen volume passphrase.

It is possible to remove the old passphrase that was forgotten by using cryptsetup luksKillSlot,
for example, to free up the passphrase slot in the LUKS header of the encrypted volume. This is done
with the command cryptsetup luksKillSlot device key-slot. For more information and
examples see cryptsetup --help.

20.3.4. Setting up Emergency Passphrases


In some circumstances (such as traveling for business) it is impractical for system administrators to work
directly with the affected systems, but users still need access to their data. In this case, volume_key can
work with passphrases as well as encryption keys.

During the system installation, run:

volume_key --save /path/to/volume -c /path/to/cert --create-random-passphrase passphrase-packet

This generates a random passphrase, adds it to the specified volume, and stores it to passphrase-
packet. It is also possible to combine the --create-random-passphrase and -o options to
generate both packets at the same time.

If a user forgets the password, the designated user runs:

volume_key --secrets -d /your/nss/directory passphrase-packet

This shows the random passphrase. Give this passphrase to the end user.

20.4. VOLUME_KEY REFERENCES


More information on volume_key can be found:

in the readme file located at /usr/share/doc/volume_key-*/README

on volume_key's manpage using man volume_key

online at http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases


CHAPTER 21. SOLID-STATE DISK DEPLOYMENT GUIDELINES


Solid-state disks (SSD) are storage devices that use NAND flash chips to persistently store data. This
sets them apart from previous generations of disks, which store data in rotating, magnetic platters. In an
SSD, the access time for data across the full Logical Block Address (LBA) range is constant; whereas
with older disks that use rotating media, access patterns that span large address ranges incur seek
costs. As such, SSD devices have better latency and throughput.

Performance degrades as the number of used blocks approaches the disk capacity. The degree of
performance impact varies greatly by vendor. However, all devices experience some degradation.

To address the degradation issue, the host system (for example, the Linux kernel) may use discard
requests to inform the storage that a given range of blocks is no longer in use. An SSD can use this
information to free up space internally, using the free blocks for wear-leveling. Discards will only be
issued if the storage advertises support in terms of its storage protocol (be it ATA or SCSI). Discard
requests are issued to the storage using the negotiated discard command specific to the storage protocol
(TRIM command for ATA, and WRITE SAME with UNMAP set, or UNMAP command for SCSI).

Enabling discard support is most useful when the following points are true:

Free space is still available on the file system.

Most logical blocks on the underlying storage device have already been written to.

For more information about TRIM, see Data Set Management T13 Specifications.

For more information about UNMAP, see the section 4.7.3.4 of the SCSI Block Commands 3 T10
Specification.

NOTE

Not all solid-state devices on the market have discard support. To determine whether your
solid-state device has discard support, check
/sys/block/sda/queue/discard_granularity, which reports the size of the
device's internal allocation unit.
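
For example, assuming the device is sda, the value can be read directly; a value of 0 typically means that the device does not support discard:

# cat /sys/block/sda/queue/discard_granularity   # sda is an assumed device name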

Deployment Considerations
Because of the internal layout and operation of SSDs, it is best to partition devices on an internal erase
block boundary. Partitioning utilities in Red Hat Enterprise Linux 7 choose sane defaults if the SSD
exports topology information. However, if the device does not export topology information, Red Hat
recommends that the first partition be created at a 1MB boundary.
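
For example, assuming a new SSD at /dev/sdb that reports no topology information, a single partition aligned at 1MiB could be created as follows:

# parted /dev/sdb mklabel gpt                 # /dev/sdb is an assumed device name
# parted /dev/sdb mkpart primary 1MiB 100%    # start at 1MiB to stay aligned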

SSDs implement various types of TRIM mechanism, depending on the vendor's choice. Early generations of
disks improved performance at the cost of possible data leakage after a read command that follows a TRIM.

Following are the types of TRIM mechanism:

Non-deterministic TRIM

Deterministic TRIM (DRAT)

Deterministic Read Zero after TRIM (RZAT)

The first two types of TRIM mechanism can cause data leakage because a read command to an LBA after a
TRIM may return either different or the same data. RZAT returns zeros after the read command, and Red Hat
recommends this TRIM mechanism to avoid data leakage. This concern applies only to SSDs; choose a disk
that supports the RZAT mechanism.

The type of TRIM mechanism used depends on the hardware implementation. To find the type of TRIM
mechanism on an ATA device, use the hdparm command, as in the following example:

# hdparm -I /dev/sda | grep TRIM


Data Set Management TRIM supported (limit 8 block)
Deterministic read data after TRIM

For more information, see man hdparm.

The Logical Volume Manager (LVM), the device-mapper (DM) targets that LVM uses, and MD (software RAID)
support discards. The only DM targets that do not support discards are dm-snapshot, dm-crypt, and
dm-raid45. Discard support for dm-mirror was added in Red Hat Enterprise Linux 6.1, and as of
Red Hat Enterprise Linux 7.0, MD supports discards.

Using RAID level 5 over SSDs results in low performance if the SSDs do not handle discard correctly. You
can enable discard in the raid456.conf file or in the GRUB2 configuration; for instructions, see the
following procedures.

Procedure 21.1. Setting discard in raid456.conf

The devices_handle_discard_safely module parameter is set in the raid456 module. To enable discard in the raid456.conf file:

1. Verify that your hardware supports discards:

# cat /sys/block/disk-name/queue/discard_zeroes_data

If the returned value is 1, discards are supported. If the command returns 0, the RAID code has
to zero the disk out, which takes more time.

2. Create the /etc/modprobe.d/raid456.conf file, and include the following line:

options raid456 devices_handle_discard_safely=Y

3. Use the dracut -f command to rebuild the initial ramdisk (initrd).

4. Reboot the system for the changes to take effect.

Procedure 21.2. Setting discard in the GRUB2 Configuration

The devices_handle_discard_safely module parameter is set in the raid456 module. To enable discard in the GRUB2 configuration:

1. Verify that your hardware supports discards:

# cat /sys/block/disk-name/queue/discard_zeroes_data

If the returned value is 1, discards are supported. If the command returns 0, the RAID code has
to zero the disk out, which takes more time.


2. Add the following line to the /etc/default/grub file:

raid456.devices_handle_discard_safely=Y

3. The location of the GRUB2 configuration file is different on systems with the BIOS firmware and
on systems with UEFI. Use one of the following commands to recreate the GRUB2 configuration
file.

On a system with the BIOS firmware, use:

# grub2-mkconfig -o /boot/grub2/grub.cfg

On a system with the UEFI firmware, use:

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

4. Reboot the system for the changes to take effect.

NOTE

In Red Hat Enterprise Linux 7, discard is fully supported by the ext4 and XFS file systems
only.

In Red Hat Enterprise Linux 6.3 and earlier, only the ext4 file system fully supports discard. Starting with
Red Hat Enterprise Linux 6.4, both ext4 and XFS file systems fully support discard. To enable discard
commands on a device, use the discard option of the mount command. For example, to mount
/dev/sda2 to /mnt with discard enabled, use:

# mount -t ext4 -o discard /dev/sda2 /mnt

By default, ext4 does not issue the discard command to, primarily, avoid problems on devices which
might not properly implement discard. The Linux swap code issues discard commands to discard-
enabled devices, and there is no option to control this behavior.

Performance Tuning Considerations


For information on performance tuning considerations regarding solid-state disks, see the Solid-State
Disks section in the Red Hat Enterprise Linux 7 Performance Tuning Guide.


CHAPTER 22. WRITE BARRIERS


A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and
ordered on persistent storage, even when storage devices with volatile write caches lose power. File
systems with write barriers enabled also ensure that data transmitted via fsync() is persistent across a
power loss.

Enabling write barriers incurs a substantial performance penalty for some applications. Specifically,
applications that use fsync() heavily or create and delete many small files will likely run much slower.

22.1. IMPORTANCE OF WRITE BARRIERS


File systems safely update metadata, ensuring consistency. Journalled file systems bundle metadata
updates into transactions and send them to persistent storage in the following manner:

1. The file system sends the body of the transaction to the storage device.

2. The file system sends a commit block.

3. If the transaction and its corresponding commit block are written to disk, the file system assumes
that the transaction will survive any power failure.

However, file system integrity during power failure becomes more complex for storage devices with extra
caches. Storage target devices like local S-ATA or SAS drives may have write caches ranging from
32MB to 64MB in size (with modern drives). Hardware RAID controllers often contain internal write
caches. Further, high end arrays, like those from NetApp, IBM, Hitachi and EMC (among others), also
have large caches.

Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses
power, it loses its data as well. Worse, as the cache de-stages to persistent storage, it may change the
original metadata ordering. When this occurs, the commit block may be present on disk without having
the complete, associated transaction in place. As a result, the journal may replay these uninitialized
transaction blocks into the file system during post-power-loss recovery; this will cause data inconsistency
and corruption.

How Write Barriers Work


Write barriers are implemented in the Linux kernel via storage write cache flushes before and after the
I/O, which is order-critical. After the transaction is written, the storage cache is flushed, the commit block
is written, and the cache is flushed again. This ensures that:

The disk contains all the data.

No re-ordering has occurred.

With barriers enabled, an fsync() call also issues a storage cache flush. This guarantees that file data
is persistent on disk even if power loss occurs shortly after fsync() returns.

22.2. ENABLING AND DISABLING WRITE BARRIERS


To mitigate the risk of data corruption during power loss, some storage devices use battery-backed write
caches. Generally, high-end arrays and some hardware controllers use battery-backed write caches.
However, because the cache's volatility is not visible to the kernel, Red Hat Enterprise Linux 7 enables
write barriers by default on all supported journaling file systems.


NOTE

Write caches are designed to increase I/O performance. However, enabling write barriers
means constantly flushing these caches, which can significantly reduce performance.

For devices with non-volatile, battery-backed write caches and those with write-caching disabled, you
can safely disable write barriers at mount time using the -o nobarrier option for mount. However,
some devices do not support write barriers; such devices log an error message to
/var/log/messages. For more information, see Table 22.1, “Write Barrier Error Messages per File
System”.
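
For example (the device and mount point below are assumptions for the example), barriers can be disabled on a single mount like this:

# mount -o nobarrier /dev/sda2 /mnt   # device and mount point are assumed names

The same option can be added to the corresponding /etc/fstab entry to make the setting persistent.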

Table 22.1. Write Barrier Error Messages per File System

File System Error Message

ext3/ext4       JBD: barrier-based sync failed on device - disabling barriers

XFS             Filesystem device - Disabling barriers, trial barrier write failed

btrfs           btrfs: disabling barriers on dev device

22.3. WRITE BARRIER CONSIDERATIONS


Some system configurations do not need write barriers to protect data. In most cases, other methods are
preferable to write barriers, since enabling write barriers causes a significant performance penalty.

Disabling Write Caches


Alternatively, one way to avoid data integrity issues is to ensure that no write caches lose data on power
failure. When possible, the best way to achieve this is to simply disable the write cache. On a simple server
or desktop with one or more SATA drives (attached to a local SATA controller, such as an Intel AHCI part),
you can disable the write cache on the target SATA drives with the following command:

# hdparm -W0 /device/

Battery-Backed Write Caches


Write barriers are also unnecessary whenever the system uses hardware RAID controllers with battery-
backed write cache. If the system is equipped with such controllers and if its component drives have write
caches disabled, the controller acts as a write-through cache; this informs the kernel that the write cache
data survives a power loss.

Most controllers use vendor-specific tools to query and manipulate target drives. For example, the LSI
Megaraid SAS controller uses a battery-backed write cache; this type of controller requires the
MegaCli64 tool to manage target drives. To show the state of all back-end drives for LSI Megaraid
SAS, use:


# MegaCli64 -LDGetProp -DskCache -LAll -aALL

To disable the write cache of all back-end drives for LSI Megaraid SAS, use:

# MegaCli64 -LDSetProp -DisDskCache -Lall -aALL

NOTE

Hardware RAID cards recharge their batteries while the system is operational. If a system
is powered off for an extended period of time, the batteries will lose their charge, leaving
stored data vulnerable during a power failure.

High-End Arrays
High-end arrays have various ways of protecting data in the event of a power failure. As such, there is no
need to verify the state of the internal drives in external RAID storage.

NFS
NFS clients do not need to enable write barriers, since data integrity is handled by the NFS server side.
As such, NFS servers should be configured to ensure data persistence throughout a power loss (whether
through write barriers or other means).


CHAPTER 23. STORAGE I/O ALIGNMENT AND SIZE


Recent enhancements to the SCSI and ATA standards allow storage devices to indicate their preferred
(and in some cases, required) I/O alignment and I/O size. This information is particularly useful with
newer disk drives that increase the physical sector size from 512 bytes to 4k bytes. This information may
also be beneficial for RAID devices, where the chunk size and stripe size may impact performance.

The Linux I/O stack has been enhanced to process vendor-provided I/O alignment and I/O size
information, allowing storage management tools (parted, lvm, mkfs.*, and the like) to optimize data
placement and access. If a legacy device does not export I/O alignment and size data, then storage
management tools in Red Hat Enterprise Linux 7 will conservatively align I/O on a 4k (or larger power of
2) boundary. This will ensure that 4k-sector devices operate correctly even if they do not indicate any
required/preferred I/O alignment and size.

For information on determining the information that the operating system obtained from the device, see
the Section 23.2, “Userspace Access”. This data is subsequently used by the storage management tools
to determine data placement.

The I/O scheduler has changed in Red Hat Enterprise Linux 7. The default I/O scheduler is now Deadline,
except for SATA drives, for which CFQ remains the default. On faster storage, Deadline outperforms CFQ,
and using it yields a performance increase without the need for special tuning.

If the default is not right for some disks (for example, SAS rotational disks), change the I/O scheduler to
CFQ. Which scheduler is best depends on the workload.

23.1. PARAMETERS FOR STORAGE ACCESS


The operating system uses the following information to determine I/O alignment and size:

physical_block_size
Smallest internal unit on which the device can operate

logical_block_size
Used externally to address a location on the device

alignment_offset
The number of bytes that the beginning of the Linux block device (partition/MD/LVM device) is offset
from the underlying physical alignment

minimum_io_size
The device’s preferred minimum unit for random I/O

optimal_io_size
The device’s preferred unit for streaming I/O

For example, certain 4K sector devices may use a 4K physical_block_size internally but expose a
more granular 512-byte logical_block_size to Linux. This discrepancy introduces potential for
misaligned I/O. To address this, the Red Hat Enterprise Linux 7 I/O stack will attempt to start all data
areas on a naturally-aligned boundary (physical_block_size) by making sure it accounts for any
alignment_offset if the beginning of the block device is offset from the underlying physical alignment.

Storage vendors can also supply I/O hints about the preferred minimum unit for random I/O
(minimum_io_size) and streaming I/O (optimal_io_size) of a device. For example,
minimum_io_size and optimal_io_size may correspond to a RAID device's chunk size and stripe
size respectively.
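
For example, on a hypothetical striped RAID device with a 64 KiB chunk across three data disks, the hints exported in sysfs might look like this (device name and values are illustrative):

# cat /sys/block/sdb/queue/minimum_io_size
65536
# cat /sys/block/sdb/queue/optimal_io_size
196608

Here minimum_io_size reports the chunk size (64 KiB) and optimal_io_size reports the full stripe width (3 x 64 KiB).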

23.2. USERSPACE ACCESS


Always take care to use properly aligned and sized I/O. This is especially important for Direct I/O access.
Direct I/O should be aligned on a logical_block_size boundary, and in multiples of the
logical_block_size.

With native 4K devices (that is, where logical_block_size is 4K) it is now critical that applications perform
direct I/O in multiples of the device's logical_block_size. This means that applications that perform
512-byte aligned I/O rather than 4K-aligned I/O will fail with native 4K devices.

To avoid this, an application should consult the I/O parameters of a device to ensure it is using the
proper I/O alignment and size. As mentioned earlier, I/O parameters are exposed through both the
sysfs and block device ioctl interfaces.

For more information, see man libblkid. This man page is provided by the libblkid-devel
package.
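
The I/O parameters can also be checked quickly from the shell with the blockdev utility from util-linux, for example (the device name and output values are illustrative):

# blockdev --getss --getpbsz --getiomin --getioopt /dev/sda
512
4096
4096
0

The values correspond to logical_block_size, physical_block_size, minimum_io_size, and optimal_io_size respectively.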

sysfs Interface
/sys/block/disk/alignment_offset

or

/sys/block/disk/partition/alignment_offset

NOTE

The file location depends on whether the disk is a physical disk (be that a local
disk, local RAID, or a multipath LUN) or a virtual disk. The first file location is
applicable to physical disks while the second file location is applicable to virtual
disks. The reason for this is because virtio-blk will always report an alignment
value for the partition. Physical disks may or may not report an alignment value.

/sys/block/disk/queue/physical_block_size

/sys/block/disk/queue/logical_block_size

/sys/block/disk/queue/minimum_io_size

/sys/block/disk/queue/optimal_io_size

The kernel will still export these sysfs attributes for "legacy" devices that do not provide I/O parameters
information, for example:

Example 23.1. sysfs Interface

alignment_offset: 0
physical_block_size: 512
logical_block_size: 512
minimum_io_size: 512
optimal_io_size: 0

Block Device ioctls


BLKALIGNOFF: alignment_offset

BLKPBSZGET: physical_block_size

BLKSSZGET: logical_block_size

BLKIOMIN: minimum_io_size

BLKIOOPT: optimal_io_size

23.3. I/O STANDARDS


This section describes I/O standards used by ATA and SCSI devices.

ATA
ATA devices must report appropriate information via the IDENTIFY DEVICE command. ATA devices
only report I/O parameters for physical_block_size, logical_block_size, and
alignment_offset. The additional I/O hints are outside the scope of the ATA Command Set.

SCSI
I/O parameters support in Red Hat Enterprise Linux 7 requires at least version 3 of the SCSI Primary
Commands (SPC-3) protocol. The kernel will only send an extended inquiry (which gains access to the
BLOCK LIMITS VPD page) and READ CAPACITY(16) command to devices which claim compliance
with SPC-3.

The READ CAPACITY(16) command provides the block sizes and alignment offset:

LOGICAL BLOCK LENGTH IN BYTES is used to derive:

/sys/block/disk/queue/logical_block_size

LOGICAL BLOCKS PER PHYSICAL BLOCK EXPONENT is used (together with the logical block length) to derive:

/sys/block/disk/queue/physical_block_size

LOWEST ALIGNED LOGICAL BLOCK ADDRESS is used to derive:

/sys/block/disk/alignment_offset

/sys/block/disk/partition/alignment_offset

The BLOCK LIMITS VPD page (0xb0) provides the I/O hints. Its OPTIMAL TRANSFER LENGTH
GRANULARITY and OPTIMAL TRANSFER LENGTH fields are used to derive:

/sys/block/disk/queue/minimum_io_size

/sys/block/disk/queue/optimal_io_size


The sg3_utils package provides the sg_inq utility, which can be used to access the BLOCK LIMITS
VPD page. To do so, run:

# sg_inq -p 0xb0 disk

23.4. STACKING I/O PARAMETERS


All layers of the Linux I/O stack have been engineered to propagate the various I/O parameters up the
stack. When a layer consumes an attribute or aggregates many devices, the layer must expose
appropriate I/O parameters so that upper-layer devices or tools have an accurate view of the storage
as it has been transformed. Some practical examples are:

Only one layer in the I/O stack should adjust for a non-zero alignment_offset; once a layer
adjusts accordingly, it will export a device with an alignment_offset of zero.

A striped Device Mapper (DM) device created with LVM must export a minimum_io_size and
optimal_io_size relative to the stripe count (number of disks) and user-provided chunk size.
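
As a sketch of the striped LVM case above (the volume group name, logical volume name, and sizes are hypothetical), a two-way striped logical volume with a 64 KiB stripe size would be expected to report a 64 KiB minimum_io_size and a 128 KiB optimal_io_size:

# lvcreate -i 2 -I 64k -L 10G -n striped_lv my_vg
# lsblk -t /dev/my_vg/striped_lv

The MIN-IO and OPT-IO columns of lsblk -t show the hints that the resulting DM device exports.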

In Red Hat Enterprise Linux 7, Device Mapper and Software RAID (MD) device drivers can be used to
arbitrarily combine devices with different I/O parameters. The kernel's block layer will attempt to
reasonably combine the I/O parameters of the individual devices. The kernel will not prevent combining
heterogeneous devices; however, be aware of the risks associated with doing so.

For instance, a 512-byte device and a 4K device may be combined into a single logical DM device, which
would have a logical_block_size of 4K. File systems layered on such a hybrid device assume that
4K will be written atomically, but in reality it will span 8 logical block addresses when issued to the 512-
byte device. Using a 4K logical_block_size for the higher-level DM device increases potential for a
partial write to the 512-byte device if there is a system crash.

If combining the I/O parameters of multiple devices results in a conflict, the block layer may issue a
warning that the device is susceptible to partial writes and/or is misaligned.

23.5. LOGICAL VOLUME MANAGER


LVM provides userspace tools that are used to manage the kernel's DM devices. LVM will shift the start
of the data area (that a given DM device will use) to account for a non-zero alignment_offset
associated with any device managed by LVM. This means logical volumes will be properly aligned
(alignment_offset=0).

By default, LVM will adjust for any alignment_offset, but this behavior can be disabled by setting
data_alignment_offset_detection to 0 in /etc/lvm/lvm.conf. Disabling this is not
recommended.

LVM will also detect the I/O hints for a device. The start of a device's data area will be a multiple of the
minimum_io_size or optimal_io_size exposed in sysfs. LVM will use the minimum_io_size if
optimal_io_size is undefined (i.e. 0).

By default, LVM will automatically determine these I/O hints, but this behavior can be disabled by setting
data_alignment_detection to 0 in /etc/lvm/lvm.conf. Disabling this is not recommended.
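
To confirm where LVM placed the start of the data area on a physical volume, the pe_start field can be displayed (the device name is illustrative):

# pvs -o +pe_start /dev/sda2

A pe_start value that is a multiple of the device's preferred I/O size indicates that the data area is aligned.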

23.6. PARTITION AND FILE SYSTEM TOOLS


This section describes how different partition and file system management tools interact with a device's
I/O parameters.

util-linux-ng's libblkid and fdisk


The libblkid library provided with the util-linux-ng package includes a programmatic API to
access a device's I/O parameters. libblkid allows applications, especially those that use Direct I/O, to
properly size their I/O requests. The fdisk utility from util-linux-ng uses libblkid to determine
the I/O parameters of a device for optimal placement of all partitions. The fdisk utility will align all
partitions on a 1MB boundary.

parted and libparted


The libparted library from parted also uses the I/O parameters API of libblkid. Anaconda, the
Red Hat Enterprise Linux 7 installer, uses libparted, which means that all partitions created by either
the installer or parted will be properly aligned. For all partitions created on a device that does not
appear to provide I/O parameters, the default alignment will be 1MB.
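
To check whether an existing partition is aligned according to the device's reported parameters, parted's align-check command can be used (the device and partition number are illustrative):

# parted /dev/sda align-check optimal 1
1 aligned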

The heuristics parted uses are as follows:

Always use the reported alignment_offset as the offset for the start of the first primary
partition.

If optimal_io_size is defined (i.e. not 0), align all partitions on an optimal_io_size
boundary.

If optimal_io_size is undefined (i.e. 0), alignment_offset is 0, and minimum_io_size
is a power of 2, use a 1MB default alignment.

This is the catch-all for "legacy" devices which don't appear to provide I/O hints. As such, by
default all partitions will be aligned on a 1MB boundary.

NOTE

Red Hat Enterprise Linux 7 cannot distinguish between devices that don't provide
I/O hints and those that do so with alignment_offset=0 and
optimal_io_size=0. Such a device might be a single SAS 4K device; as such,
at worst 1MB of space is lost at the start of the disk.

File System Tools


The different mkfs.filesystem utilities have also been enhanced to consume a device's I/O
parameters. These utilities will not allow a file system to be formatted to use a block size smaller than the
logical_block_size of the underlying storage device.

Except for mkfs.gfs2, all other mkfs.filesystem utilities also use the I/O hints to lay out on-disk data
structures and data areas relative to the minimum_io_size and optimal_io_size of the underlying
storage device. This allows file systems to be optimally formatted for various RAID (striped) layouts.
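
For example, if a RAID device does not export I/O hints, equivalent geometry can be passed to mkfs.xfs manually (a sketch; the stripe unit, stripe width, and device are illustrative):

# mkfs.xfs -d su=64k,sw=4 /dev/sdb1

Here su is the RAID chunk size and sw is the number of data disks, which together describe the full stripe.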


CHAPTER 24. SETTING UP A REMOTE DISKLESS SYSTEM


To set up a basic remote diskless system booted over PXE, you need the following packages:

tftp-server

xinetd

dhcp

syslinux

dracut-network

NOTE

After installing the dracut-network package, add the following line to /etc/dracut.conf:

add_dracutmodules+="nfs"

Remote diskless system booting requires both a tftp service (provided by tftp-server) and a DHCP
service (provided by dhcp). The tftp service is used to retrieve kernel image and initrd over the
network via the PXE loader.

NOTE

SELinux is only supported over NFSv4.2. To use SELinux, NFS must be explicitly
enabled in /etc/sysconfig/nfs by adding the line:

RPCNFSDARGS="-V 4.2"

Then, in /var/lib/tftpboot/pxelinux.cfg/default, change
root=nfs:server-ip:/exported/root/directory to
root=nfs:server-ip:/exported/root/directory,vers=4.2.

Finally, reboot the NFS server.

The following sections outline the necessary procedures for deploying remote diskless systems in a
network environment.

IMPORTANT

Some RPM packages have started using file capabilities (such as setcap and getcap).
However, NFS does not currently support these, so attempting to install or update any
packages that use file capabilities will fail.

24.1. CONFIGURING A TFTP SERVICE FOR DISKLESS CLIENTS


Prerequisites
Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System


Procedure
To configure tftp, perform the following steps:

Procedure 24.1. To Configure tftp

1. Enable PXE booting over the network:

# systemctl enable --now tftp

2. The tftp root directory (chroot) is located in /var/lib/tftpboot. Copy
/usr/share/syslinux/pxelinux.0 to /var/lib/tftpboot/:

# cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/

3. Create a pxelinux.cfg directory inside the tftp root directory:

# mkdir -p /var/lib/tftpboot/pxelinux.cfg/

4. Configure firewall rules to allow tftp traffic.

As tftp supports TCP wrappers, you can configure host access to tftp in the
/etc/hosts.allow configuration file. For more information on configuring TCP wrappers and
the /etc/hosts.allow configuration file, see the Red Hat Enterprise Linux 7 Security Guide.
The hosts_access(5) man page also provides information about /etc/hosts.allow.
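
With firewalld, for example, the tftp service can be allowed as follows (a sketch, assuming the default firewalld service definition for tftp is present):

# firewall-cmd --permanent --add-service=tftp
# firewall-cmd --reload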

Next Steps
After configuring tftp for diskless clients, configure DHCP, NFS, and the exported file system
accordingly. For instructions on configuring the DHCP, NFS, and the exported file system, see
Section 24.2, “Configuring DHCP for Diskless Clients” and Section 24.3, “Configuring an Exported File
System for Diskless Clients”.

24.2. CONFIGURING DHCP FOR DISKLESS CLIENTS


Prerequisites
Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System

Configure the tftp service. See Section 24.1, “Configuring a tftp Service for Diskless Clients”.

Procedure
1. After configuring a tftp server, you need to set up a DHCP service on the same host machine.
For instructions on setting up a DHCP server, see the Configuring a DHCP Server.

2. Enable PXE booting on the DHCP server by adding the following configuration to
/etc/dhcp/dhcp.conf:

allow booting;
allow bootp;
class "pxeclients" {
    match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
    next-server server-ip;
    filename "pxelinux.0";
}

Replace server-ip with the IP address of the host machine on which the tftp and DHCP
services reside.

NOTE

When libvirt virtual machines are used as the diskless client, libvirt
provides the DHCP service and the standalone DHCP server is not used. In this
situation, network booting must be enabled with the bootp file='filename'
option in the libvirt network configuration, virsh net-edit.
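
In that case, the network definition edited with virsh net-edit might contain a line similar to the following inside its <dhcp> element (the file name and server IP are illustrative):

<bootp file='pxelinux.0' server='192.168.122.1'/>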

Next Steps
Now that tftp and DHCP are configured, configure NFS and the exported file system. For instructions,
see the Section 24.3, “Configuring an Exported File System for Diskless Clients”.

24.3. CONFIGURING AN EXPORTED FILE SYSTEM FOR DISKLESS CLIENTS

Prerequisites
Install the necessary packages. See Chapter 24, Setting up a Remote Diskless System

Configure the tftp service. See Section 24.1, “Configuring a tftp Service for Diskless Clients”.

Configure DHCP. See Section 24.2, “Configuring DHCP for Diskless Clients”.

Procedure
1. The root directory of the exported file system (used by diskless clients in the network) is shared
via NFS. Configure the NFS service to export the root directory by adding it to /etc/exports.
For instructions on how to do so, see the Section 8.7.1, “The /etc/exports Configuration
File”.

2. To accommodate completely diskless clients, the root directory should contain a complete
Red Hat Enterprise Linux installation. You can either clone an existing installation or install a
new base system:

To synchronize with a running system, use the rsync utility:

# rsync -a -e ssh --exclude='/proc/*' --exclude='/sys/*' \
  hostname.com:/ exported-root-directory

Replace hostname.com with the hostname of the running system with which to
synchronize via rsync.

Replace exported-root-directory with the path to the exported file system.

To install Red Hat Enterprise Linux to the exported location, use the yum utility with the --
installroot option:


# yum install @Base kernel dracut-network nfs-utils \
  --installroot=exported-root-directory --releasever=/

The file system to be exported still needs to be configured further before it can be used by diskless
clients. To do this, perform the following procedure:

Procedure 24.2. Configure File System

1. Select the kernel that diskless clients should use (vmlinuz-kernel-version) and copy it to
the tftp boot directory:

# cp /boot/vmlinuz-kernel-version /var/lib/tftpboot/

2. Create the initrd (that is, initramfs-kernel-version.img) with network support:

# dracut initramfs-kernel-version.img kernel-version

3. Change the initrd's file permissions to 644 using the following command:

# chmod 644 initramfs-kernel-version.img


WARNING

If the initrd's file permissions are not changed, the pxelinux.0 boot loader
will fail with a "file not found" error.

4. Copy the resulting initramfs-kernel-version.img into the tftp boot directory as well.

5. Edit the default boot configuration to use the initrd and kernel in the /var/lib/tftpboot/
directory. This configuration should instruct the diskless client's root to mount the exported file
system (/exported/root/directory) as read-write. Add the following configuration in the
/var/lib/tftpboot/pxelinux.cfg/default file:

default rhel7

label rhel7
  kernel vmlinuz-kernel-version
  append initrd=initramfs-kernel-version.img root=nfs:server-ip:/exported/root/directory rw

Replace server-ip with the IP address of the host machine on which the tftp and DHCP
services reside.

The NFS share is now ready for exporting to diskless clients. These clients can boot over the network via
PXE.


CHAPTER 25. ONLINE STORAGE MANAGEMENT


It is often desirable to add, remove or re-size storage devices while the operating system is running, and
without rebooting. This chapter outlines the procedures that may be used to reconfigure storage devices
on Red Hat Enterprise Linux 7 host systems while the system is running. It covers iSCSI and Fibre
Channel storage interconnects; other interconnect types may be added in the future.

This chapter focuses on adding, removing, modifying, and monitoring storage devices. It does not
discuss the Fibre Channel or iSCSI protocols in detail. For more information about these protocols, refer
to other documentation.

This chapter makes reference to various sysfs objects. Red Hat advises that the sysfs object names
and directory structure are subject to change in major Red Hat Enterprise Linux releases. This is
because the upstream Linux kernel does not provide a stable internal API. For guidelines on how to
reference sysfs objects in a transportable way, refer to the document /usr/share/doc/kernel-
doc-version/Documentation/sysfs-rules.txt in the kernel source tree.


WARNING

Online storage reconfiguration must be done carefully. System failures or
interruptions during the process can lead to unexpected results. Red Hat advises
that you reduce system load to the maximum extent possible during the change
operations. This will reduce the chance of I/O errors, out-of-memory errors, or
similar errors occurring in the midst of a configuration change. The following
sections provide more specific guidelines regarding this.

In addition, Red Hat recommends that you back up all data before reconfiguring
online storage.

25.1. TARGET SETUP


Red Hat Enterprise Linux 7 uses the targetcli shell as a front end for viewing, editing, and saving the
configuration of the Linux-IO Target without the need to manipulate the kernel target's configuration files
directly. The targetcli tool is a command-line interface that allows an administrator to export local
storage resources, which are backed by either files, volumes, local SCSI devices, or RAM disks, to
remote systems. The targetcli tool has a tree-based layout, includes built-in tab completion, and
provides full auto-complete support and inline documentation.

The hierarchy of targetcli does not always match the kernel interface exactly because targetcli is
simplified where possible.

IMPORTANT

To ensure that the changes made in targetcli are persistent, start and enable the
target service:

# systemctl start target


# systemctl enable target


25.1.1. Installing and Running targetcli


To install targetcli, use:

# yum install targetcli

Start the target service:

# systemctl start target

Configure target to start at boot time:

# systemctl enable target

Open port 3260 in the firewall and reload the firewall configuration:

# firewall-cmd --permanent --add-port=3260/tcp
Success
# firewall-cmd --reload
Success

Use the targetcli command, and then use the ls command for the layout of the tree interface:

# targetcli
:
/> ls
o- /........................................[...]
o- backstores.............................[...]
| o- block.................[Storage Objects: 0]
| o- fileio................[Storage Objects: 0]
| o- pscsi.................[Storage Objects: 0]
| o- ramdisk...............[Storage Objects: 0]
o- iscsi...........................[Targets: 0]
o- loopback........................[Targets: 0]

NOTE

In Red Hat Enterprise Linux 7.0, using the targetcli command from Bash, for example,
targetcli iscsi/ create, does not work and does not return an error. Starting with
Red Hat Enterprise Linux 7.1, an error status code is provided to make using targetcli
with shell scripts more useful.

25.1.2. Creating a Backstore


Backstores enable support for different methods of storing an exported LUN's data on the local machine.
Creating a storage object defines the resources the backstore uses.


NOTE

In Red Hat Enterprise Linux 6, the term 'backing-store' is used to refer to the mappings
created. However, to avoid confusion between the various ways 'backstores' can be used,
in Red Hat Enterprise Linux 7 the term 'storage objects' refers to the mappings created
and 'backstores' is used to describe the different types of backing devices.

The backstore devices that LIO supports are:

FILEIO (Linux file-backed storage)


FILEIO storage objects can support either write_back or write_thru operation. The
write_back option enables the local file system cache, which improves performance but increases the risk
of data loss. It is recommended to use write_back=false to disable write_back in favor of
write_thru.

To create a fileio storage object, run the command /backstores/fileio create file_name
file_location file_size write_back=false. For example:

/> /backstores/fileio create file1 /tmp/disk1.img 200M write_back=false
Created fileio file1 with size 209715200

BLOCK (Linux BLOCK devices)


The block driver allows any block device that appears in /sys/block/ to be used with
LIO. This includes physical devices (for example, HDDs, SSDs, CDs, DVDs) and logical devices (for
example, software or hardware RAID volumes, or LVM volumes).

NOTE

BLOCK backstores usually provide the best performance.

To create a BLOCK backstore using any block device, use the following command:

# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x39dc48fb.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): *Enter*
Using default response p
Partition number (1-4, default 1): *Enter*
First sector (2048-2097151, default 2048): *Enter*
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
+250M
Partition 1 of type Linux and of size 250 MiB is set


Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.

/> /backstores/block create name=block_backend dev=/dev/vdb


Generating a wwn serial.
Created block storage object block_backend using /dev/vdb.

NOTE

You can also create a BLOCK backstore on a logical volume.
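
For example (a sketch; the volume group and logical volume names are hypothetical):

/> /backstores/block create name=block_lv dev=/dev/my_vg/my_lv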

PSCSI (Linux pass-through SCSI devices)


Any storage object that supports direct pass-through of SCSI commands without SCSI emulation,
and with an underlying SCSI device that appears with lsscsi in /proc/scsi/scsi (such as a SAS
hard drive) can be configured as a backstore. SCSI-3 and higher is supported with this subsystem.


WARNING

PSCSI should only be used by advanced users. Advanced SCSI commands
such as those for Asymmetric Logical Unit Assignment (ALUA) or Persistent
Reservations (for example, those used by VMware ESX and vSphere) are
usually not implemented in the device firmware and can cause malfunctions or
crashes. When in doubt, use BLOCK for production setups instead.

To create a PSCSI backstore for a physical SCSI device, a TYPE_ROM device using /dev/sr0 in
this example, use:

/> backstores/pscsi/ create name=pscsi_backend dev=/dev/sr0


Generating a wwn serial.
Created pscsi storage object pscsi_backend using /dev/sr0

Memory Copy RAM disk (Linux RAMDISK_MCP)


Memory Copy RAM disks (ramdisk) provide RAM disks with full SCSI emulation and separate
memory mappings using memory copy for initiators. This provides multi-session capability and is
particularly useful for fast, volatile mass storage for production purposes.

To create a 1GB RAM disk backstore, use the following command:

/> backstores/ramdisk/ create name=rd_backend size=1GB


Generating a wwn serial.
Created rd_mcp ramdisk rd_backend with size 1GB.


25.1.3. Creating an iSCSI Target


To create an iSCSI target:

Procedure 25.1. Creating an iSCSI target

1. Run targetcli.

2. Move into the iSCSI configuration path:

/> iscsi/

NOTE

The cd command is also accepted to change directories, as is simply entering
the path to move into.

3. Create an iSCSI target using a default target name.

/iscsi> create
Created target
iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff
Created TPG1

Or create an iSCSI target using a specified name.

/iscsi > create iqn.2006-04.com.example:444


Created target iqn.2006-04.com.example:444
Created TPG1

4. Verify that the newly created target is visible when targets are listed with ls.

/iscsi > ls
o- iscsi.......................................[1 Target]
o- iqn.2006-04.com.example:444................[1 TPG]
o- tpg1...........................[enabled, auth]
o- acls...............................[0 ACL]
o- luns...............................[0 LUN]
o- portals.........................[0 Portal]

NOTE

As of Red Hat Enterprise Linux 7.1, whenever a target is created, a default portal is also
created.

25.1.4. Configuring an iSCSI Portal


To configure an iSCSI portal, an iSCSI target must first be created and associated with a TPG. For
instructions on how to do this, refer to Section 25.1.3, “Creating an iSCSI Target”.


NOTE

As of Red Hat Enterprise Linux 7.1 when an iSCSI target is created, a default portal is
created as well. This portal is set to listen on all IP addresses with the default port number
(that is, 0.0.0.0:3260). To remove this and add only specified portals, use /iscsi/iqn-
name/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260 then create a
new portal with the required information.

Procedure 25.2. Creating an iSCSI Portal

1. Move into the TPG.

/iscsi> iqn.2006-04.example:444/tpg1/

2. There are two ways to create a portal: create a default portal, or create a portal specifying what
IP address to listen to.

Creating a default portal uses the default iSCSI port 3260 and allows the target to listen on all IP
addresses on that port.

/iscsi/iqn.20...mple:444/tpg1> portals/ create


Using default IP port 3260
Binding to INADDR_Any (0.0.0.0)
Created network portal 0.0.0.0:3260

To create a portal specifying what IP address to listen to, use the following command.

/iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137


Using default IP port 3260
Created network portal 192.168.122.137:3260

3. Verify that the newly created portal is visible with the ls command.

/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
o- acls ......................................[0 ACL]
o- luns ......................................[0 LUN]
o- portals ................................[1 Portal]
o- 192.168.122.137:3260......................[OK]

25.1.5. Configuring LUNs


To configure LUNs, first create storage objects. See Section 25.1.2, “Creating a Backstore” for more
information.

Procedure 25.3. Configuring LUNs

1. Create LUNs of already created storage objects.

/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/ramdisk/rd_backend
Created LUN 0.
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/block/block_backend
Created LUN 1.
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1
Created LUN 2.

2. Show the changes.

/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
o- acls ......................................[0 ACL]
o- luns .....................................[3 LUNs]
| o- lun0.........................[ramdisk/ramdisk1]
| o- lun1.................[block/block1 (/dev/vdb1)]
| o- lun2...................[fileio/file1 (/foo.img)]
o- portals ................................[1 Portal]
o- 192.168.122.137:3260......................[OK]

NOTE

Be aware that the default LUN name starts at 0, as opposed to 1 as was the case
when using tgtd in Red Hat Enterprise Linux 6.

IMPORTANT

By default, LUNs are created with read-write permissions. In the event that a new LUN is
added after ACLs have been created, that LUN will be automatically mapped to all
available ACLs. This can pose a security risk. Use the following procedure to create a
LUN as read-only.

Procedure 25.4. Create a Read-only LUN

1. To create a LUN with read-only permissions, first use the following command:

/> set global auto_add_mapped_luns=false


Parameter auto_add_mapped_luns is now 'false'.

This prevents the automatic mapping of LUNs to existing ACLs, allowing the manual mapping of LUNs.

2. Next, manually create the LUN with the command
iscsi/target_iqn_name/tpg1/acls/initiator_iqn_name/ create
mapped_lun=next_sequential_LUN_number tpg_lun_or_backstore=backstore
write_protect=1. For example:

/> iscsi/iqn.2015-06.com.redhat:target/tpg1/acls/iqn.2015-06.com.redhat:initiator/ create mapped_lun=1 tpg_lun_or_backstore=/backstores/block/block2 write_protect=1
Created LUN 1.
Created Mapped LUN 1.
/> ls
o- / ...................................................... [...]
o- backstores ........................................... [...]
<snip>
o- iscsi ......................................... [Targets: 1]
| o- iqn.2015-06.com.redhat:target .................. [TPGs: 1]
| o- tpg1 ............................ [no-gen-acls, no-auth]
| o- acls ....................................... [ACLs: 2]
| | o- iqn.2015-06.com.redhat:initiator .. [Mapped LUNs: 2]
| | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)]
| | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)]
| o- luns ....................................... [LUNs: 2]
| | o- lun0 ...................... [block/disk1 (/dev/vdb)]
| | o- lun1 ...................... [block/disk2 (/dev/vdc)]
<snip>

The mapped_lun1 line now has (ro) at the end (unlike mapped_lun0's (rw)), indicating that it is read-
only.

25.1.6. Configuring ACLs


Create an ACL for each initiator that will be connecting. This enforces authentication when that initiator
connects, ensuring that only the mapped LUNs are exposed to each initiator. Usually each initiator has
exclusive access to a LUN. Both targets and initiators have unique identifying names. The initiator's
unique name must be known to configure ACLs. For open-iscsi initiators, this can be found in
/etc/iscsi/initiatorname.iscsi.

Procedure 25.5. Configuring ACLs

1. Move into the acls directory.

/iscsi/iqn.20...mple:444/tpg1> acls/

2. Create an ACL. Either use the initiator name found in /etc/iscsi/initiatorname.iscsi


on the initiator, or if using a name that is easier to remember, refer to Section 25.2, “Creating an
iSCSI Initiator” to ensure ACL matches the initiator. For example:

/iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888
Created Node ACL for iqn.2006-04.com.example.foo:888
Created mapped LUN 2.
Created mapped LUN 1.
Created mapped LUN 0.

NOTE

The given example's behavior depends on the setting used. In this case, the
global setting auto_add_mapped_luns is used. This automatically maps LUNs
to any created ACL.

You can set user-created ACLs within the TPG node on the target server:

/iscsi/iqn.20...scsi:444/tpg1> set attribute generate_node_acls=1

3. Show the changes.


/iscsi/iqn.20...444/tpg1/acls> ls
o- acls .................................................[1 ACL]
o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth]
o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)]
o- mapped_lun1 .................[lun1 block/block1 (rw)]
o- mapped_lun2 .................[lun2 fileio/file1 (rw)]

25.1.7. Configuring Fibre Channel over Ethernet (FCoE) Target


In addition to mounting LUNs over FCoE, as described in Section 25.4, “Configuring a Fibre Channel
over Ethernet Interface”, exporting LUNs to other machines over FCoE is also supported with the aid of
targetcli.

IMPORTANT

Before proceeding, refer to Section 25.4, “Configuring a Fibre Channel over Ethernet
Interface” and verify that basic FCoE setup is completed, and that fcoeadm -i displays
configured FCoE interfaces.

Procedure 25.6. Configure FCoE target

1. Setting up an FCoE target requires the installation of the targetcli package, along with its
dependencies. Refer to Section 25.1, “Target Setup” for more information on targetcli basics
and set up.

2. Create an FCoE target instance on an FCoE interface.

/> tcm_fc/ create 00:11:22:33:44:55:66:77

If FCoE interfaces are present on the system, tab-completing after create will list available
interfaces. If not, ensure fcoeadm -i shows active interfaces.

3. Map a backstore to the target instance.

Example 25.1. Example of Mapping a Backstore to the Target Instance

/> tcm_fc/00:11:22:33:44:55:66:77

/> luns/ create /backstores/fileio/example2

4. Allow access to the LUN from an FCoE initiator.

/> acls/ create 00:99:88:77:66:55:44:33

The LUN should now be accessible to that initiator.

5. To make the changes persistent across reboots, use the saveconfig command and type yes
when prompted. If this is not done the configuration will be lost after rebooting.

6. Exit targetcli by typing exit or entering ctrl+D.
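
For example, the end of a typical targetcli session might look like this (the output shown is illustrative of targetcli behavior):

/> saveconfig
Configuration saved to /etc/target/saveconfig.json
/> exit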


25.1.8. Removing Objects with targetcli

To remove a backstore, use the following command:

/> /backstores/backstore-type/ delete backstore-name

To remove parts of an iSCSI target, such as an ACL, use the following command:

/> /iscsi/iqn-name/tpg/acls/ delete iqn-name

To remove the entire target, including all ACLs, LUNs, and portals, use the following command:

/> /iscsi delete iqn-name
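
For example, to delete the target created earlier in this chapter:

/> /iscsi delete iqn.2006-04.com.example:444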

25.1.9. targetcli References

For more information on targetcli, refer to the following resources:

man targetcli
The targetcli man page. It includes an example walk through.

The Linux SCSI Target Wiki


http://linux-iscsi.org/wiki/Targetcli

Screencast by Andy Grover


https://www.youtube.com/watch?v=BkBGTBadOO8

NOTE

This was uploaded on February 28, 2012. As such, the service name has changed
from targetcli to target.

25.2. CREATING AN ISCSI INITIATOR


After creating a target with targetcli as in Section 25.1, “Target Setup”, use the iscsiadm utility to
set up an initiator.

In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default: the service starts after
running the iscsiadm command.

Procedure 25.7. Creating an iSCSI Initiator

1. Install iscsi-initiator-utils:

# yum install iscsi-initiator-utils -y

2. If the ACL was given a custom name in Section 25.1.6, “Configuring ACLs”, modify the
/etc/iscsi/initiatorname.iscsi file accordingly. For example:


# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2006-04.com.example.node1

# vi /etc/iscsi/initiatorname.iscsi

3. Discover the target:

# iscsiadm -m discovery -t st -p target-ip-address


10.64.24.179:3260,1 iqn.2006-04.com.example:3260

4. Log in to the target with the target IQN you discovered in step 3:

# iscsiadm -m node -T iqn.2006-04.com.example:3260 -l


Logging in to [iface: default, target: iqn.2006-04.com.example:3260,
portal: 10.64.24.179,3260] (multiple)
Login to [iface: default, target: iqn.2006-04.com.example:3260,
portal: 10.64.24.179,3260] successful.

This procedure can be followed for any number of initiators connected to the same LUN so long
as their specific initiator names are added to the ACL as described in Section 25.1.6,
“Configuring ACLs”.

5. Find the iSCSI disk name and create a file system on this iSCSI disk:

# grep "Attached SCSI" /var/log/messages

# mkfs.ext4 /dev/disk_name

Replace disk_name with the iSCSI disk name displayed in /var/log/messages.

6. Mount the file system:

# mkdir /mount/point
# mount /dev/disk_name /mount/point

Replace /mount/point with the mount point of the partition.

7. Edit the /etc/fstab to mount the file system automatically when the system boots:

# vim /etc/fstab
/dev/disk_name /mount/point ext4 _netdev 0 0

Replace disk_name with the iSCSI disk name.

8. Log off from the target:

# iscsiadm -m node -T iqn.2006-04.com.example:3260 -u

25.3. FIBRE CHANNEL


This section discusses the Fibre Channel API, native Red Hat Enterprise Linux 7 Fibre Channel drivers,
and the Fibre Channel capabilities of these drivers.


25.3.1. Fibre Channel API


Following is a list of /sys/class/ directories that contain files used to provide the userspace API. In
each item, host numbers are designated by H, bus numbers are B, targets are T, logical unit numbers
(LUNs) are L, and remote port numbers are R.

IMPORTANT

If your system is using multipath software, Red Hat recommends that you consult your
hardware vendor before changing any of the values described in this section.

Transport: /sys/class/fc_transport/targetH:B:T/

port_id — 24-bit port ID/address

node_name — 64-bit node name

port_name — 64-bit port name

Remote Port: /sys/class/fc_remote_ports/rport-H:B-R/

port_id

node_name

port_name

dev_loss_tmo: controls when the scsi device gets removed from the system. After
dev_loss_tmo triggers, the scsi device is removed.

In multipath.conf, you can set dev_loss_tmo to infinity, which sets its value to
2,147,483,647 seconds, or 68 years, and is the maximum dev_loss_tmo value.

In Red Hat Enterprise Linux 7, if you do not set the fast_io_fail_tmo option,
dev_loss_tmo is capped to 600 seconds. By default, fast_io_fail_tmo is set to 5
seconds in Red Hat Enterprise Linux 7 if the multipathd service is running; otherwise, it is
set to off.

fast_io_fail_tmo: specifies the number of seconds to wait before it marks a link as
"bad". Once a link is marked bad, existing running I/O or any new I/O on its corresponding
path fails.

If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue is
unblocked.

If fast_io_fail_tmo is set to any value except off, dev_loss_tmo is uncapped. If
fast_io_fail_tmo is set to off, no I/O fails until the device is removed from the system.
If fast_io_fail_tmo is set to a number, I/O fails immediately when the
fast_io_fail_tmo timeout triggers.
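
These attributes can be inspected and changed through sysfs at runtime, for example (the remote port name rport-5:0-0 and the value are illustrative):

# cat /sys/class/fc_remote_ports/rport-5:0-0/dev_loss_tmo
# echo 30 > /sys/class/fc_remote_ports/rport-5:0-0/dev_loss_tmo

Settings made this way do not persist across reboots; when DM Multipath is in use, persistent values are normally configured in /etc/multipath.conf instead.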

Host: /sys/class/fc_host/hostH/

port_id


issue_lip: instructs the driver to rediscover remote ports.
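
For example, to ask the driver on a hypothetical host3 to rediscover remote ports:

# echo 1 > /sys/class/fc_host/host3/issue_lip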

25.3.2. Native Fibre Channel Drivers and Capabilities


Red Hat Enterprise Linux 7 ships with the following native Fibre Channel drivers:

lpfc

qla2xxx

zfcp

bfa

IMPORTANT

The qla2xxx driver runs in initiator mode by default. To use qla2xxx with Linux-IO, enable
Fibre Channel target mode with the corresponding qlini_mode module parameter.

First, make sure that the firmware package for your qla device, such as ql2200-firmware
or similar, is installed.

To enable target mode, add the following parameter to the qla2xxx module configuration
file, /usr/lib/modprobe.d/qla2xxx.conf:

options qla2xxx qlini_mode=disabled

Then, use the dracut -f command to rebuild the initial ramdisk (initrd), and reboot
the system for the changes to take effect.

Table 25.1, “Fibre Channel API Capabilities” describes the different Fibre Channel API capabilities of
each native Red Hat Enterprise Linux 7 driver. X denotes support for the capability.

Table 25.1. Fibre Channel API Capabilities

                               lpfc     qla2xxx   zfcp    bfa
Transport port_id              X        X         X       X
Transport node_name            X        X         X       X
Transport port_name            X        X         X       X
Remote Port dev_loss_tmo       X        X         X       X
Remote Port fast_io_fail_tmo   X        X [a]     X [b]   X
Host port_id                   X        X         X       X
Host issue_lip                 X        X                 X

[a] Supported as of Red Hat Enterprise Linux 5.4

[b] Supported as of Red Hat Enterprise Linux 6.0

25.4. CONFIGURING A FIBRE CHANNEL OVER ETHERNET INTERFACE


Setting up and deploying a Fibre Channel over Ethernet (FCoE) interface requires two packages:

fcoe-utils

lldpad

Once these packages are installed, perform the following procedure to enable FCoE over a virtual LAN
(VLAN):

Procedure 25.8. Configuring an Ethernet Interface to Use FCoE

1. To configure a new VLAN, make a copy of an existing network script, for example
/etc/fcoe/cfg-eth0, and change the name to the Ethernet device that supports FCoE. This
provides you with a default file to configure. Given that the FCoE device is ethX, run:

# cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ethX

Modify the contents of cfg-ethX as needed. Notably, set DCB_REQUIRED to no for networking
interfaces that implement a hardware Data Center Bridging Exchange (DCBX) protocol client.

2. If you want the device to automatically load during boot time, set ONBOOT=yes in the
corresponding /etc/sysconfig/network-scripts/ifcfg-ethX file. For example, if the
FCoE device is eth2, edit /etc/sysconfig/network-scripts/ifcfg-eth2 accordingly.

3. Start the data center bridging daemon (dcbd) by running:

# systemctl start lldpad

4. For networking interfaces that implement a hardware DCBX client, skip this step.

For interfaces that require a software DCBX client, enable data center bridging on the Ethernet
interface by running:

# dcbtool sc ethX dcb on


Then, enable FCoE on the Ethernet interface by running:

# dcbtool sc ethX app:fcoe e:1

Note that these commands only work if the dcbd settings for the Ethernet interface were not
changed.

5. Load the FCoE device now using:

# ip link set dev ethX up

6. Start FCoE using:

# systemctl start fcoe

The FCoE device appears soon if all other settings on the fabric are correct. To view configured
FCoE devices, run:

# fcoeadm -i

After correctly configuring the Ethernet interface to use FCoE, Red Hat recommends that you set FCoE
and the lldpad service to run at startup. To do so, use the systemctl utility:

# systemctl enable lldpad

# systemctl enable fcoe

NOTE

Running the # systemctl stop fcoe command stops the daemon, but does not reset
the configuration of FCoE interfaces. To do so, run the # systemctl -s SIGHUP
kill fcoe command.

As of Red Hat Enterprise Linux 7, Network Manager has the ability to query and set the DCB settings of
a DCB capable Ethernet interface.

25.5. CONFIGURING AN FCOE INTERFACE TO AUTOMATICALLY MOUNT AT BOOT

NOTE

The instructions in this section are available in /usr/share/doc/fcoe-utils-version/README as of
Red Hat Enterprise Linux 6.1. Refer to that document for any possible changes throughout minor
releases.

You can mount newly discovered disks via udev rules, autofs, and other similar methods. Sometimes,
however, a specific service might require the FCoE disk to be mounted at boot-time. In such cases, the
FCoE disk should be mounted as soon as the fcoe service runs and before the initiation of any service
that requires the FCoE disk.


To configure an FCoE disk to automatically mount at boot, add proper FCoE mounting code to the
startup script for the fcoe service. The fcoe startup script is
/lib/systemd/system/fcoe.service.

The FCoE mounting code is different per system configuration, whether you are using a simple formatted
FCoE disk, LVM, or multipathed device node.

Example 25.2. FCoE Mounting Code

The following is sample FCoE mounting code for mounting file systems specified via wildcards in
/etc/fstab:

mount_fcoe_disks_from_fstab()
{
    local timeout=20
    local done=1
    local fcoe_disks=($(egrep 'by-path\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1))

    test -z $fcoe_disks && return 0

    echo -n "Waiting for fcoe disks . "
    while [ $timeout -gt 0 ]; do
        for disk in ${fcoe_disks[*]}; do
            if ! test -b $disk; then
                done=0
                break
            fi
        done

        test $done -eq 1 && break;
        sleep 1
        echo -n ". "
        done=1
        let timeout--
    done

    if test $timeout -eq 0; then
        echo "timeout!"
    else
        echo "done!"
    fi

    # mount any newly discovered disk
    mount -a 2>/dev/null
}

The mount_fcoe_disks_from_fstab function should be invoked after the fcoe service script starts
the fcoemon daemon. This will mount FCoE disks specified by the following paths in /etc/fstab:

/dev/disk/by-path/fc-0xXX:0xXX /mnt/fcoe-disk1 ext3 defaults,_netdev 0 0
/dev/disk/by-path/fc-0xYY:0xYY /mnt/fcoe-disk2 ext3 defaults,_netdev 0 0


Entries with fc- and _netdev sub-strings enable the mount_fcoe_disks_from_fstab function to
identify FCoE disk mount entries. For more information on /etc/fstab entries, refer to man 5 fstab.

NOTE

The fcoe service does not implement a timeout for FCoE disk discovery. As such, the
FCoE mounting code should implement its own timeout period.

25.6. ISCSI
This section describes the iSCSI API and the iscsiadm utility. Before using the iscsiadm utility, install
the iscsi-initiator-utils package first by running yum install iscsi-initiator-utils.

In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default. If root is not on an iSCSI
device or there are no nodes marked with node.startup = automatic then the iSCSI service will
not start until an iscsiadm command is run that requires iscsid or the iscsi kernel modules to be started.
For example, running the discovery command iscsiadm -m discovery -t st -p ip:port will
cause iscsiadm to start the iSCSI service.

To force the iscsid daemon to run and iSCSI kernel modules to load, run systemctl start
iscsid.service.

25.6.1. iSCSI API


To get information about running sessions, run:

# iscsiadm -m session -P 3

This command displays the session/device state, session ID (sid), some negotiated parameters, and the
SCSI devices accessible through the session.

For shorter output (for example, to display only the sid-to-node mapping), run:

# iscsiadm -m session -P 0

or

# iscsiadm -m session

These commands print the list of running sessions with the format:

driver [sid] target_ip:port,target_portal_group_tag proper_target_name

Example 25.3. Output of the iscsiadm -m session Command

For example:

# iscsiadm -m session

tcp [2] 10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311


tcp [3] 10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311


For more information about the iSCSI API, refer to
/usr/share/doc/iscsi-initiator-utils-version/README.

25.7. PERSISTENT NAMING


Red Hat Enterprise Linux provides a number of ways to identify storage devices. It is important to use the
correct option to identify each device in order to avoid inadvertently accessing the wrong
device, particularly when installing to or reformatting drives.

25.7.1. Major and Minor Numbers of Storage Devices


Storage devices managed by the sd driver are identified internally by a collection of major device
numbers and their associated minor numbers. The major device numbers used for this purpose are not
in a contiguous range. Each storage device is represented by a major number and a range of minor
numbers, which are used to identify either the entire device or a partition within the device. There is a
direct association between the major and minor numbers allocated to a device and numbers in the form
of sd<letter(s)>[number(s)]. Whenever the sd driver detects a new device, an available major
number and minor number range is allocated. Whenever a device is removed from the operating system,
the major number and minor number range is freed for later reuse.

The major and minor number range and associated sd names are allocated for each device when it is
detected. This means that the association between the major and minor number range and associated
sd names can change if the order of device detection changes. Although this is unusual with some
hardware configurations (for example, with an internal SCSI controller and disks that have their SCSI
target ID assigned by their physical location within a chassis), it can nevertheless occur. Examples of
situations where this can happen are as follows:

A disk may fail to power up or respond to the SCSI controller. This will result in it not being
detected by the normal device probe. The disk will not be accessible to the system and
subsequent devices will have their major and minor number range, including the associated sd
names shifted down. For example, if a disk normally referred to as sdb is not detected, a disk
that is normally referred to as sdc would instead appear as sdb.

A SCSI controller (host bus adapter, or HBA) may fail to initialize, causing all disks connected to
that HBA to not be detected. Any disks connected to subsequently probed HBAs would be
assigned different major and minor number ranges, and different associated sd names.

The order of driver initialization could change if different types of HBAs are present in the
system. This would cause the disks connected to those HBAs to be detected in a different order.
This can also occur if HBAs are moved to different PCI slots on the system.

Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be
inaccessible at the time the storage devices are probed, due to a storage array or intervening
switch being powered off, for example. This could occur when a system reboots after a power
failure, if the storage array takes longer to come online than the system take to boot. Although
some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to
WWPN mapping, this will not cause the major and minor number ranges, and the associated sd
names to be reserved, it will only provide consistent SCSI target ID numbers.

These reasons make it undesirable to use the major and minor number range or the associated sd
names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong
device will be mounted and data corruption could result.


Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is
used (such as when errors are reported by a device). This is because the Linux kernel uses sd names
(and also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device.

25.7.2. World Wide Identifier (WWID)


The World Wide Identifier (WWID) can be used in reliably identifying devices. It is a persistent, system-
independent ID that the SCSI Standard requires from all SCSI devices. The WWID identifier is
guaranteed to be unique for every storage device, and independent of the path that is used to access the
device.

This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital
Product Data (page 0x83) or Unit Serial Number (page 0x80). The mappings from these WWIDs to the
current /dev/sd names can be seen in the symlinks maintained in the /dev/disk/by-id/ directory.

Example 25.4. WWID

For example, a device with a page 0x83 identifier would have:

scsi-3600508b400105e210000900000490000 -> ../../sda

Or, a device with a page 0x80 identifier would have:

scsi-SSEAGATE_ST373453LW_3HW1RHM6 -> ../../sda

Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device
name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name
to reference the data on the disk, even if the path to the device changes, and even when accessing the
device from different systems.

If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM
Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as
/dev/mapper/3600508b400105df70000e00000ac0000.

The command multipath -l shows the mapping to the non-persistent identifiers:
Host:Channel:Target:LUN, /dev/sd name, and the major:minor number.

3600508b400105df70000e00000ac0000 dm-2 vendor,product


[size=20G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
\_ 5:0:1:1 sdc 8:32 [active][undef]
\_ 6:0:1:1 sdg 8:96 [active][undef]
\_ round-robin 0 [prio=0][enabled]
\_ 5:0:0:1 sdb 8:16 [active][undef]
\_ 6:0:0:1 sdf 8:80 [active][undef]

DM Multipath automatically maintains the proper mapping of each WWID-based device name to its
corresponding /dev/sd name on the system. These names are persistent across path changes, and
they are consistent when accessing the device from different systems.


When the user_friendly_names feature (of DM Multipath) is used, the WWID is mapped to a name
of the form /dev/mapper/mpathn. By default, this mapping is maintained in the file
/etc/multipath/bindings. These mpathn names are persistent as long as that file is maintained.
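
For reference, user_friendly_names is typically enabled in the defaults section of /etc/multipath.conf (a sketch):

defaults {
        user_friendly_names yes
}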

IMPORTANT

If you use user_friendly_names, then additional steps are required to obtain
consistent names in a cluster. Refer to the Consistent Multipath Device Names in a
Cluster section in the DM Multipath book.

In addition to these persistent names provided by the system, you can also use udev rules to implement
persistent names of your own, mapped to the WWID of the storage.

25.7.3. Device Names Managed by the udev Mechanism in /dev/disk/by-*

The udev mechanism consists of three major components:

The kernel
Generates events that are sent to user space when devices are added, removed, or changed.

The udevd service


Receives the events.

The udev rules


Specify the action to take when the udev service receives the kernel events.

This mechanism is used for all types of devices in Linux, not just for storage devices. In the case of
storage devices, Red Hat Enterprise Linux contains udev rules that create symbolic links in the
/dev/disk/ directory allowing storage devices to be referred to by their contents, a unique identifier,
their serial number, or the hardware path used to access the device.

/dev/disk/by-label/
Entries in this directory provide a symbolic name that refers to the storage device by a label in the
contents (that is, the data) stored on the device. The blkid utility is used to read data from the device
and determine a name (that is, a label) for the device. For example:

/dev/disk/by-label/Boot

NOTE

The information is obtained from the contents (that is, the data) on the device so if the
contents are copied to another device, the label will remain the same.

The label can also be used to refer to the device in /etc/fstab using the following syntax:

LABEL=Boot

/dev/disk/by-uuid/


Entries in this directory provide a symbolic name that refers to the storage device by a unique
identifier in the contents (that is, the data) stored on the device. The blkid utility is used to read data
from the device and obtain a unique identifier (that is, the UUID) for the device. For example:

UUID=3e6be9de-8139-11d1-9106-a43f08d823a6

/dev/disk/by-id/
Entries in this directory provide a symbolic name that refers to the storage device by a unique
identifier (different from all other storage devices). The identifier is a property of the device but is not
stored in the contents (that is, the data) on the devices. For example:

/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05

/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05

The id is obtained from the world-wide ID of the device, or the device serial number. The
/dev/disk/by-id/ entries may also include a partition number. For example:

/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05-part1

/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05-part1

/dev/disk/by-path/
Entries in this directory provide a symbolic name that refers to the storage device by the hardware
path used to access the device, beginning with a reference to the storage controller in the PCI
hierarchy, and including the SCSI host, channel, target, and LUN numbers and, optionally, the
partition number. Although these names are preferable to using major and minor numbers or sd
names, caution must be used to ensure that the target numbers do not change in a Fibre Channel
SAN environment (for example, through the use of persistent binding) and that the use of the names
is updated if a host adapter is moved to a different PCI slot. In addition, there is the possibility that the
SCSI host numbers could change if a HBA fails to probe, if drivers are loaded in a different order, or
if a new HBA is installed on the system. An example of by-path listing is:

/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0

The /dev/disk/by-path/ entries may also include a partition number, such as:

/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0-part1
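The identifiers behind these /dev/disk/by-* links can also be queried directly with the blkid utility mentioned above. A minimal sketch, assuming a hypothetical device /dev/sda1 and reusing the label and UUID examples from this section:

# blkid -s LABEL -s UUID /dev/sda1
# blkid -L Boot
# blkid -U 3e6be9de-8139-11d1-9106-a43f08d823a6

The first command prints the label and UUID stored on the given device; the other two print which device currently carries the given label or UUID.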

25.7.3.1. Limitations of the udev Device Naming Convention

The following are some limitations of the udev naming convention.

It is possible that the device may not be accessible at the time the query is performed because
the udev mechanism may rely on the ability to query the storage device when the udev rules
are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI or FCoE
storage devices when the device is not located in the server chassis.

The kernel may also send udev events at any time, causing the rules to be processed and
possibly causing the /dev/disk/by-*/ links to be removed if the device is not accessible.


There can be a delay between when the udev event is generated and when it is processed,
such as when a large number of devices are detected and the user-space udevd service takes
some time to process the rules for each one. This could cause a delay between when the
kernel detects the device and when the /dev/disk/by-*/ names are available.

External programs such as blkid invoked by the rules may open the device for a brief period of
time, making the device inaccessible for other uses.

25.7.3.2. Modifying Persistent Naming Attributes

Although udev naming attributes are persistent, in that they do not change on their own across system
reboots, some are also configurable. You can set custom values for the following persistent naming
attributes:

UUID: file system UUID

LABEL: file system label

Because the UUID and LABEL attributes are related to the file system, the tool you need to use depends
on the file system on that partition.

To change the UUID or LABEL attributes of an XFS file system, unmount the file system and then
use the xfs_admin utility to change the attribute:

# umount /dev/device
# xfs_admin [-U new_uuid] [-L new_label] /dev/device
# udevadm settle

To change the UUID or LABEL attributes of an ext4, ext3, or ext2 file system, use the tune2fs
utility:

# tune2fs [-U new_uuid] [-L new_label] /dev/device
# udevadm settle

Replace new_uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-f61943f5ea50.
Replace new_label with a label; for example, backup_data.
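As a concrete sketch (the device name and label are hypothetical), a freshly generated UUID and a new label could be applied to an ext4 file system on /dev/sdb1 as follows:

# umount /dev/sdb1
# tune2fs -U "$(uuidgen)" -L backup_data /dev/sdb1
# udevadm settle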

NOTE

Changing udev attributes happens in the background and might take a long time. The
udevadm settle command waits until the change is fully registered, which ensures that
your next command will be able to utilize the new attribute correctly.

You should also use the command after creating new devices; for example, after using the
parted tool to create a partition with a custom PARTUUID or PARTLABEL attribute, or after
creating a new file system.
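For instance, a GPT partition with a custom name (which becomes its PARTLABEL) might be created and settled as follows; the disk /dev/sdc and the partition name backup are hypothetical:

# parted --script /dev/sdc mklabel gpt
# parted --script /dev/sdc mkpart backup ext4 1MiB 100%
# udevadm settle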

25.8. REMOVING A STORAGE DEVICE


Before removing access to the storage device itself, it is advisable to back up data from the device first.
Afterwards, flush I/O and remove all operating system references to the device (as described below). If
the device uses multipathing, then do this for the multipath "pseudo device" (Section 25.7.2, “World Wide
Identifier (WWID)”) and each of the identifiers that represent a path to the device. If you are only
removing a path to a multipath device, and other paths will remain, then the procedure is simpler, as
described in Section 25.10, “Adding a Storage Device or Path”.

Removal of a storage device is not recommended when the system is under memory pressure, since the
I/O flush will add to the load. To determine the level of memory pressure, run the command vmstat 1
100; device removal is not recommended if:

Free memory is less than 5% of the total memory in more than 10 samples per 100 (the
command free can also be used to display the total memory).

Swapping is active (non-zero si and so columns in the vmstat output). A scripted form of this check is sketched after this list.
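A minimal sketch of this check, counting the vmstat samples in which free memory drops below 5% of total and in which swapping is active (the column positions assume the default vmstat output format):

total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
vmstat 1 100 | awk -v total="$total_kb" 'NR > 2 {
    if ($4 * 100 / total < 5) low++        # column 4: free memory, in KB
    if ($7 > 0 || $8 > 0) swapping++       # columns 7 and 8: si and so
} END {
    printf "low-memory samples: %d, swapping samples: %d\n", low, swapping
}'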

The general procedure for removing all access to a device is as follows:

Procedure 25.9. Ensuring a Clean Device Removal

1. Close all users of the device and backup device data as needed.

2. Use umount to unmount any file systems mounted on the device.

3. Remove the device from any md and LVM volume using it. If the device is a member of an LVM
Volume group, then it may be necessary to move data off the device using the pvmove
command, then use the vgreduce command to remove the physical volume, and (optionally)
pvremove to remove the LVM metadata from the disk.

4. If the device uses multipathing, run multipath -l and note all the paths to the device.
Afterwards, remove the multipathed device using multipath -f device.

5. Run blockdev --flushbufs device to flush any outstanding I/O to all paths to the device.
This is particularly important for raw devices, where there is no umount or vgreduce operation
to cause an I/O flush.

6. Remove any reference to the device's path-based name, like /dev/sd, /dev/disk/by-path
or the major:minor number, in applications, scripts, or utilities on the system. This is important
in ensuring that different devices added in the future will not be mistaken for the current device.

7. Finally, remove each path to the device from the SCSI subsystem. To do so, use the command
echo 1 > /sys/block/device-name/device/delete where device-name may be sde,
for example.

Another variation of this operation is echo 1 >
/sys/class/scsi_device/h:c:t:l/device/delete, where h is the HBA number, c is
the channel on the HBA, t is the SCSI target ID, and l is the LUN.

NOTE

The older form of these commands, echo "scsi remove-single-device 0 0 0 0" >
/proc/scsi/scsi, is deprecated.

You can determine the device-name, HBA number, HBA channel, SCSI target ID and LUN for a device
from various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*.
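Putting the procedure together, a minimal sketch for a hypothetical multipath device mpathb whose file system is mounted on /mnt/data and whose two paths (as reported by multipath -l) are sdc and sdg:

umount /mnt/data                        # step 2: unmount file systems on the device
multipath -f mpathb                     # step 4: remove the multipathed device
blockdev --flushbufs /dev/sdc           # step 5: flush outstanding I/O on each path
blockdev --flushbufs /dev/sdg
echo 1 > /sys/block/sdc/device/delete   # step 7: remove each path from the SCSI subsystem
echo 1 > /sys/block/sdg/device/delete

Steps 3 and 6 (removing the device from md/LVM and from any path-based references) depend on how the device is used and are omitted here.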


After performing Procedure 25.9, “Ensuring a Clean Device Removal”, a device can be physically
removed safely from a running system. It is not necessary to stop I/O to other devices while doing so.

Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus (as
described in Section 25.11, “Scanning Storage Interconnects”) to cause the operating system state to be
updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and
devices may be removed unexpectedly. If it is necessary to perform a rescan of an interconnect, it must
be done while I/O is paused, as described in Section 25.11, “Scanning Storage Interconnects”.

25.9. REMOVING A PATH TO A STORAGE DEVICE


If you are removing a path to a device that uses multipathing (without affecting other paths to the device),
then the general procedure is as follows:

Procedure 25.10. Removing a Path to a Storage Device

1. Remove any reference to the device's path-based name, like /dev/sd or /dev/disk/by-
path or the major:minor number, in applications, scripts, or utilities on the system. This is
important in ensuring that different devices added in the future will not be mistaken for the
current device.

2. Take the path offline using echo offline > /sys/block/sda/device/state.

This will cause any subsequent I/O sent to the device on this path to be failed immediately.
Device-mapper-multipath will continue to use the remaining paths to the device.

3. Remove the path from the SCSI subsystem. To do so, use the command echo 1 >
/sys/block/device-name/device/delete where device-name may be sde, for
example (as described in Procedure 25.9, “Ensuring a Clean Device Removal”).

After performing Procedure 25.10, “Removing a Path to a Storage Device”, the path can be safely
removed from the running system. It is not necessary to stop I/O while this is done, as device-mapper-
multipath will re-route I/O to remaining paths according to the configured path grouping and failover
policies.

Other procedures, such as the physical removal of the cable, followed by a rescan of the SCSI bus to
cause the operating system state to be updated to reflect the change, are not recommended. This will
cause delays due to I/O timeouts, and devices may be removed unexpectedly. If it is necessary to
perform a rescan of an interconnect, it must be done while I/O is paused, as described in Section 25.11,
“Scanning Storage Interconnects”.

25.10. ADDING A STORAGE DEVICE OR PATH


When adding a device, be aware that the path-based device name (/dev/sd name, major:minor
number, and /dev/disk/by-path name, for example) the system assigns to the new device may have
been previously in use by a device that has since been removed. As such, ensure that all old references
to the path-based device name have been removed. Otherwise, the new device may be mistaken for the
old device.

Procedure 25.11. Add a Storage Device or Path

1. The first step in adding a storage device or path is to physically enable access to the new
storage device, or a new path to an existing device. This is done using vendor-specific
commands at the Fibre Channel or iSCSI storage server. When doing so, note the LUN value for
the new storage that will be presented to your host. If the storage server is Fibre Channel, also
take note of the World Wide Node Name (WWNN) of the storage server, and determine whether
there is a single WWNN for all ports on the storage server. If this is not the case, note the World
Wide Port Name (WWPN) for each port that will be used to access the new LUN.

2. Next, make the operating system aware of the new storage device, or path to an existing device.
The recommended command to use is:

$ echo "c t l" > /sys/class/scsi_host/hosth/scan

In the previous command, h is the HBA number, c is the channel on the HBA, t is the SCSI
target ID, and l is the LUN.

NOTE

The older form of this command, echo "scsi add-single-device 0 0 0 0" >
/proc/scsi/scsi, is deprecated.

a. In some Fibre Channel hardware, a newly created LUN on the RAID array may not be visible
to the operating system until a Loop Initialization Protocol (LIP) operation is performed. Refer
to Section 25.11, “Scanning Storage Interconnects” for instructions on how to do this.

IMPORTANT

It will be necessary to stop I/O while this operation is executed if an LIP is
required.

b. If a new LUN has been added on the RAID array but is still not being configured by the
operating system, confirm the list of LUNs being exported by the array using the sg_luns
command, part of the sg3_utils package. This will issue the SCSI REPORT LUNS command
to the RAID array and return a list of LUNs that are present.

For Fibre Channel storage servers that implement a single WWNN for all ports, you can
determine the correct h, c, and t values (i.e. HBA number, HBA channel, and SCSI target ID) by
searching for the WWNN in sysfs.

Example 25.5. Determine Correct h, c, and t Values

For example, if the WWNN of the storage server is 0x5006016090203181, use:

$ grep 5006016090203181 /sys/class/fc_transport/*/node_name

This should display output similar to the following:

/sys/class/fc_transport/target5:0:2/node_name:0x5006016090203181
/sys/class/fc_transport/target5:0:3/node_name:0x5006016090203181
/sys/class/fc_transport/target6:0:2/node_name:0x5006016090203181
/sys/class/fc_transport/target6:0:3/node_name:0x5006016090203181

This indicates there are four Fibre Channel routes to this target (two single-channel HBAs,
each leading to two storage ports). Assuming the LUN value is 56, the following command
will configure the first path:


$ echo "0 2 56" > /sys/class/scsi_host/host5/scan

This must be done for each path to the new device.
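The same four paths could also be configured in one loop; a minimal sketch, reusing the host, channel, target, and LUN values from this example:

for path in 5:0:2 5:0:3 6:0:2 6:0:3; do
    host=${path%%:*}            # HBA number (h)
    rest=${path#*:}
    channel=${rest%%:*}         # channel on the HBA (c)
    target=${rest#*:}           # SCSI target ID (t)
    echo "$channel $target 56" > /sys/class/scsi_host/host$host/scan
done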

For Fibre Channel storage servers that do not implement a single WWNN for all ports, you can
determine the correct HBA number, HBA channel, and SCSI target ID by searching for each of
the WWPNs in sysfs.

Another way to determine the HBA number, HBA channel, and SCSI target ID is to refer to
another device that is already configured on the same path as the new device. This can be done
with various commands, such as lsscsi, scsi_id, multipath -l, and ls -l
/dev/disk/by-*. This information, plus the LUN number of the new device, can be used as
shown above to probe and configure that path to the new device.

3. After adding all the SCSI paths to the device, execute the multipath command, and check to
see that the device has been properly configured. At this point, the device can be added to md,
LVM, mkfs, or mount, for example.

If the steps above are followed, then a device can safely be added to a running system. It is not
necessary to stop I/O to other devices while this is done. Other procedures involving a rescan (or a
reset) of the SCSI bus, which cause the operating system to update its state to reflect the current device
connectivity, are not recommended while storage I/O is in progress.

25.11. SCANNING STORAGE INTERCONNECTS


Certain commands allow you to reset, scan, or both reset and scan one or more interconnects, which
potentially adds and removes multiple devices in one operation. This type of scan can be disruptive, as it
can cause delays while I/O operations time out, and remove devices unexpectedly. Red Hat
recommends using interconnect scanning only when necessary. Observe the following restrictions when
scanning storage interconnects:

All I/O on the affected interconnects must be paused and flushed before executing the
procedure, and the results of the scan checked before I/O is resumed.

As with removing a device, interconnect scanning is not recommended when the system is
under memory pressure. To determine the level of memory pressure, run the vmstat 1 100
command. Interconnect scanning is not recommended if free memory is less than 5% of the total
memory in more than 10 samples per 100. Also, interconnect scanning is not recommended if
swapping is active (non-zero si and so columns in the vmstat output). The free command
can also display the total memory.

The following commands can be used to scan storage interconnects:

echo "1" > /sys/class/fc_host/host/issue_lip


This operation performs a Loop Initialization Protocol (LIP), scans the interconnect, and causes the
SCSI layer to be updated to reflect the devices currently on the bus. Essentially, an LIP is a bus
reset, and causes device addition and removal. This procedure is necessary to configure a new SCSI
target on a Fibre Channel interconnect.

Note that issue_lip is an asynchronous operation. The command can complete before the entire
scan has completed. You must monitor /var/log/messages to determine when issue_lip
finishes.


The lpfc, qla2xxx, and bnx2fc drivers support issue_lip. For more information about the API
capabilities supported by each driver in Red Hat Enterprise Linux, see Table 25.1, “Fibre Channel
API Capabilities”.

/usr/bin/rescan-scsi-bus.sh
The /usr/bin/rescan-scsi-bus.sh script was introduced in Red Hat Enterprise Linux 5.4. By
default, this script scans all the SCSI buses on the system, and updates the SCSI layer to reflect new
devices on the bus. The script provides additional options to allow device removal, and the issuing of
LIPs. For more information about this script, including known issues, see Section 25.17,
“Adding/Removing a Logical Unit Through rescan-scsi-bus.sh”.

echo "- - -" > /sys/class/scsi_host/hosth/scan


This is the same command as described in Section 25.10, “Adding a Storage Device or Path” to add
a storage device or path. In this case, however, the channel number, SCSI target ID, and LUN values
are replaced by wildcards. Any combination of identifiers and wildcards is allowed, so you can make
the command as specific or broad as needed. This procedure adds LUNs, but does not remove
them. A sketch that applies this scan to every SCSI host on the system is shown after this list.

modprobe --remove driver-name, modprobe driver-name


Running the modprobe --remove driver-name command followed by the modprobe driver-
name command completely re-initializes the state of all interconnects controlled by the driver. Despite
being rather extreme, using the described commands can be appropriate in certain situations. The
commands can be used, for example, to restart the driver with a different module parameter value.
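As referenced above, a minimal sketch that applies the wildcard scan to every SCSI host currently present on the system (no host numbers are assumed; the loop simply iterates over whatever hosts exist):

for shost in /sys/class/scsi_host/host*; do
    echo "- - -" > "$shost/scan"
done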

25.12. ISCSI DISCOVERY CONFIGURATION


The default iSCSI configuration file is /etc/iscsi/iscsid.conf. This file contains iSCSI settings
used by iscsid and iscsiadm.

During target discovery, the iscsiadm tool uses the settings in /etc/iscsi/iscsid.conf to create
two types of records:

Node records in /var/lib/iscsi/nodes


When logging into a target, iscsiadm uses the settings in this file.

Discovery records in /var/lib/iscsi/discovery_type


When performing discovery to the same destination, iscsiadm uses the settings in this file.

Before using different settings for discovery, delete the current discovery records (i.e.
/var/lib/iscsi/discovery_type) first. To do this, use the following command: [5]

# iscsiadm -m discovery -t discovery_type -p target_IP:port -o delete

Here, discovery_type can be either sendtargets, isns, or fw.

For details on different types of discovery, refer to the DISCOVERY TYPES section of the iscsiadm(8)
man page.

There are two ways to reconfigure discovery record settings:


Edit the /etc/iscsi/iscsid.conf file directly prior to performing a discovery. Discovery
settings use the prefix discovery; to view them, run:

# iscsiadm -m discovery -t discovery_type -p target_IP:port

Alternatively, iscsiadm can also be used to directly change discovery record settings, as in:

# iscsiadm -m discovery -t discovery_type -p target_IP:port -o update -n setting -v %value

Refer to the iscsiadm(8) man page for more information on available setting options and valid
value options for each.

After configuring discovery settings, any subsequent attempts to discover new targets will use the new
settings. Refer to Section 25.14, “Scanning iSCSI Interconnects” for details on how to scan for new
iSCSI targets.
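As a concrete sketch of the first approach (the portal address is hypothetical), the stale discovery records for a portal could be deleted and then re-created after /etc/iscsi/iscsid.conf has been edited:

# iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260 -o delete
# iscsiadm -m discovery -t sendtargets -p 192.168.1.100:3260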

For more information on configuring iSCSI target discovery, refer to the man pages of iscsiadm and
iscsid. The /etc/iscsi/iscsid.conf file also contains examples on proper configuration syntax.

25.13. CONFIGURING ISCSI OFFLOAD AND INTERFACE BINDING


This section describes how to set up iSCSI interfaces in order to bind a session to a NIC port when
using software iSCSI. It also describes how to set up interfaces for use with network devices that support
offloading.

The network subsystem can be configured to determine the path/NIC that iSCSI interfaces should use
for binding. For example, if portals and NICs are set up on different subnets, then it is not necessary to
manually configure iSCSI interfaces for binding.

Before attempting to configure an iSCSI interface for binding, run the following command first:

$ ping -I ethX target_IP

If ping fails, then you will not be able to bind a session to a NIC. If this is the case, check the network
settings first.

25.13.1. Viewing Available iface Configurations


iSCSI offload and interface binding is supported for the following iSCSI initiator implementations:

Software iSCSI
This stack allocates an iSCSI host instance (that is, scsi_host) per session, with a single
connection per session. As a result, /sys/class/scsi_host and /proc/scsi will report a
scsi_host for each connection/session you are logged into.

Offload iSCSI
This stack allocates a scsi_host for each PCI device. As such, each port on a host bus adapter will
show up as a different PCI device, with a different scsi_host per HBA port.


To manage both types of initiator implementations, iscsiadm uses the iface structure. With this
structure, an iface configuration must be entered in /var/lib/iscsi/ifaces for each HBA port,
software iSCSI, or network device (ethX) used to bind sessions.

To view available iface configurations, run iscsiadm -m iface. This will display iface information
in the following format:

iface_name
transport_name,hardware_address,ip_address,net_ifacename,initiator_name

Refer to the following table for an explanation of each value/setting.

Table 25.2. iface Settings

Setting            Description

iface_name         iface configuration name.

transport_name     Name of the driver.

hardware_address   MAC address.

ip_address         IP address to use for this port.

net_iface_name     Name used for the vlan or alias binding of a software iSCSI session. For
                   iSCSI offloads, net_iface_name will be <empty> because this value is not
                   persistent across reboots.

initiator_name     This setting is used to override a default name for the initiator, which is
                   defined in /etc/iscsi/initiatorname.iscsi.

Example 25.6. Sample Output of the iscsiadm -m iface Command

The following is a sample output of the iscsiadm -m iface command:

iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-
06.com.redhat:madmax
iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-
06.com.redhat:madmax

For software iSCSI, each iface configuration must have a unique name (with less than 65 characters).
The iface_name for network devices that support offloading appears in the format
transport_name.hardware_name.

Example 25.7. iscsiadm -m iface Output with a Chelsio Network Card

For example, the sample output of iscsiadm -m iface on a system using a Chelsio network card
might appear as:


default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>

It is also possible to display the settings of a specific iface configuration in a more friendly way. To do
so, use the option -I iface_name. This will display the settings in the following format:

iface.setting = value

Example 25.8. Using iface Settings with a Chelsio Converged Network Adapter

Using the previous example, the iface settings of the same Chelsio converged network adapter (i.e.
iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07) would appear as:

# BEGIN RECORD 2.0-871
iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07
iface.net_ifacename = <empty>
iface.ipaddress = <empty>
iface.hwaddress = 00:07:43:05:97:07
iface.transport_name = cxgb3i
iface.initiatorname = <empty>
# END RECORD

25.13.2. Configuring an iface for Software iSCSI


As mentioned earlier, an iface configuration is required for each network object that will be used to bind
a session.


To create an iface configuration for software iSCSI, run the following command:

# iscsiadm -m iface -I iface_name --op=new

This will create a new empty iface configuration with a specified iface_name. If an existing iface
configuration already has the same iface_name, then it will be overwritten with a new, empty one.

To configure a specific setting of an iface configuration, use the following command:

# iscsiadm -m iface -I iface_name --op=update -n iface.setting -v hw_address

Example 25.9. Set MAC Address of iface0

For example, to set the MAC address (hardware_address) of iface0 to 00:0F:1F:92:6B:BF,
run:

# iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF
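The result can then be verified by displaying that iface record with the -I option described earlier:

# iscsiadm -m iface -I iface0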



WARNING

Do not use default or iser as iface names. Both strings are special values
used by iscsiadm for backward compatibility. Any manually-created iface
configurations named default or iser will disable backwards compatibility.

25.13.3. Configuring an iface for iSCSI Offload


By default, iscsiadm creates an iface configuration for each port. To view available iface
configurations, use the same command for doing so in software iSCSI: iscsiadm -m iface.

Before using the iface of a network card for iSCSI offload, first set the iface.ipaddress value of the
offload interface to the initiator IP address that the interface should use:

For devices that use the be2iscsi driver, the IP address is configured in the BIOS setup
screen.

For all other devices, to configure the IP address of the iface, use:

# iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v initiator_ip_address

Example 25.10. Set the iface IP Address of a Chelsio Card

For example, to set the iface IP address to 20.15.0.66 when using a card with the iface name
of cxgb3i.00:07:43:05:97:07, use:

# iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66

25.13.4. Binding/Unbinding an iface to a Portal


Whenever iscsiadm is used to scan for interconnects, it will first check the iface.transport
settings of each iface configuration in /var/lib/iscsi/ifaces. The iscsiadm utility will then bind
discovered portals to any iface whose iface.transport is tcp.

This behavior was implemented for compatibility reasons. To override this, use the -I iface_name
option to specify which portal to bind to an iface, as in:

# iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 [5]

By default, the iscsiadm utility will not automatically bind any portals to iface configurations that use
offloading. This is because such iface configurations will not have iface.transport set to tcp. As
such, the iface configurations need to be manually bound to discovered portals.

It is also possible to prevent a portal from binding to any existing iface. To do so, use default as the
iface_name, as in:

# iscsiadm -m discovery -t st -p IP:port -I default -P 1

To remove the binding between a target and iface, use:

# iscsiadm -m node --targetname proper_target_name -I iface0 --op=delete [6]

To delete all bindings for a specific iface, use:

# iscsiadm -m node -I iface_name --op=delete

To delete bindings for a specific portal (e.g. for EqualLogic targets), use:

# iscsiadm -m node -p IP:port -I iface_name --op=delete

NOTE

If there are no iface configurations defined in /var/lib/iscsi/ifaces and the -I
option is not used, iscsiadm will allow the network subsystem to decide which device a
specific portal should use.
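Putting the commands of this section together, a minimal end-to-end sketch of binding a software iSCSI session to a specific network device might look as follows; the interface name eth0, the portal 192.168.1.100:3260, and the target name are hypothetical:

# iscsiadm -m iface -I iface0 --op=new
# iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
# iscsiadm -m discovery -t st -p 192.168.1.100:3260 -I iface0 -P 1
# iscsiadm -m node --targetname iqn.2014-01.com.example:target0 -p 192.168.1.100:3260 -l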

25.14. SCANNING ISCSI INTERCONNECTS


For iSCSI, if the targets send an iSCSI async event indicating new storage is added, then the scan is
done automatically.

However, if the targets do not send an iSCSI async event, you need to manually scan them using the
iscsiadm utility. Before doing so, however, you need to first retrieve the proper --targetname and
the --portal values. If your device model supports only a single logical unit and portal per target, use
iscsiadm to issue a sendtargets command to the host, as in:

# iscsiadm -m discovery -t sendtargets -p target_IP:port [5]

The output will appear in the following format:

target_IP:port,target_portal_group_tag proper_target_name

Example 25.11. Using iscsiadm to issue a sendtargets Command

For example, on a target with a proper_target_name of
iqn.1992-08.com.netapp:sn.33615311 and a target_IP:port of 10.15.85.19:3260, the output may
appear as:

10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311

In this example, the target has two portals, with target_IP:port values of 10.15.84.19:3260
and 10.15.85.19:3260.

To see which iface configuration will be used for each session, add the -P 1 option. This option will
also print session information in tree format, as in:

Target: proper_target_name
Portal: target_IP:port,target_portal_group_tag
Iface Name: iface_name

Example 25.12. View iface Configuration

For example, with iscsiadm -m discovery -t sendtargets -p 10.15.85.19:3260 -P 1,
the output may appear as:

Target: iqn.1992-08.com.netapp:sn.33615311
Portal: 10.15.84.19:3260,2
Iface Name: iface2
Portal: 10.15.85.19:3260,3
Iface Name: iface2

This means that the target iqn.1992-08.com.netapp:sn.33615311 will use iface2 as its
iface configuration.

With some device models a single target may have multiple logical units and portals. In this case, issue a
sendtargets command to the host first to find new portals on the target. Then, rescan the existing
sessions using:

# iscsiadm -m session --rescan

You can also rescan a specific session by specifying the session's SID value, as in:

# iscsiadm -m session -r SID --rescan [7]

If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to
find new portals for each target. Then, rescan the existing sessions to discover new logical units, using
the --rescan option.


IMPORTANT

The sendtargets command used to retrieve --targetname and --portal values
overwrites the contents of the /var/lib/iscsi/nodes database. This database will
then be repopulated using the settings in /etc/iscsi/iscsid.conf. However, this will
not occur if a session is currently logged in and in use.

To safely add new targets/portals or delete old ones, use the -o new or -o delete
options, respectively. For example, to add new targets/portals without overwriting
/var/lib/iscsi/nodes, use the following command:

iscsiadm -m discovery -t st -p target_IP -o new

To delete /var/lib/iscsi/nodes entries that the target did not display during
discovery, use:

iscsiadm -m discovery -t st -p target_IP -o delete

You can also perform both tasks simultaneously, as in:

iscsiadm -m discovery -t st -p target_IP -o delete -o new
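For example, to pick up newly exported targets without disturbing records for sessions already in use, the -o new form can be combined with a rescan of the existing sessions (using the portal from the earlier examples):

# iscsiadm -m discovery -t st -p 10.15.85.19 -o new
# iscsiadm -m session --rescan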

The sendtargets command will yield the following output:

ip:port,target_portal_group_tag proper_target_name

Example 25.13. Output of the sendtargets Command

For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1
as your target_name, the output should appear similar to the following:

10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-
63aff113e344a4a2-dl585-03-1

Note that proper_target_name and ip:port,target_portal_group_tag are identical to the
values of the same name in Section 25.6.1, “iSCSI API”.

At this point, you now have the proper --targetname and --portal values needed to manually scan
for iSCSI devices. To do so, run the following command:

# iscsiadm --mode node --targetname proper_target_name \
--portal ip:port,target_portal_group_tag --login [8]

Example 25.14. Full iscsiadm Command

Using our previous example (where proper_target_name is equallogic-iscsi1), the full
command would be:


# iscsiadm --mode node --targetname \
iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \
--portal 10.16.41.155:3260,0 --login [8]

25.15. LOGGING IN TO AN ISCSI TARGET


As mentioned in Section 25.6, “iSCSI”, the iSCSI service must be running in order to discover or log into
targets. To start the iSCSI service, run:

# systemctl start iscsi

When this command is executed, the iSCSI init scripts will automatically log into targets where the
node.startup setting is configured as automatic. This is the default value of node.startup for all
targets.

To prevent automatic login to a target, set node.startup to manual. To do this, run the following
command:

# iscsiadm -m node --targetname proper_target_name -p target_IP:port -o update -n node.startup -v manual

Deleting the entire record will also prevent automatic login. To do this, run:

# iscsiadm -m node --targetname proper_target_name -p target_IP:port -o delete

To automatically mount a file system from an iSCSI device on the network, add a partition entry for the
mount in /etc/fstab with the _netdev option. For example, to automatically mount the iSCSI device
sdb to /mnt/iscsi during startup, add the following line to /etc/fstab:

/dev/sdb /mnt/iscsi ext3 _netdev 0 0
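Because /dev/sd names are not persistent (see the persistent naming discussion earlier in this chapter), a more robust variant of this entry uses one of the persistent identifiers instead; a sketch, reusing the example UUID shown earlier:

UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 /mnt/iscsi ext3 _netdev 0 0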

To manually log in to an iSCSI target, use the following command:

# iscsiadm -m node --targetname proper_target_name -p target_IP:port -l

NOTE

The proper_target_name and target_IP:port refer to the full name and IP
address/port combination of a target. For more information, refer to Section 25.6.1, “iSCSI
API” and Section 25.14, “Scanning iSCSI Interconnects”.

25.16. RESIZING AN ONLINE LOGICAL UNIT


In most cases, fully resizing an online logical unit involves two things: resizing the logical unit itself and
reflecting the size change in the corresponding multipath device (if multipathing is enabled on the
system).


To resize the online logical unit, start by modifying the logical unit size through the array management
interface of your storage device. This procedure differs with each array; as such, consult your storage
array vendor documentation for more information on this.

NOTE

In order to resize an online file system, the file system must not reside on a partitioned
device.

25.16.1. Resizing Fibre Channel Logical Units


After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the
updated size. To do this for Fibre Channel logical units, use the following command:

$ echo 1 > /sys/block/sdX/device/rescan

IMPORTANT

To re-scan Fibre Channel logical units on a system that uses multipathing, execute the
aforementioned command for each sd device (i.e. sd1, sd2, and so on) that represents a
path for the multipathed logical unit. To determine which devices are paths for a multipath
logical unit, use multipath -ll; then, find the entry that matches the logical unit being
resized. It is advisable that you refer to the WWID of each entry to make it easier to find
which one matches the logical unit being resized.
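A minimal sketch of this per-path re-scan, assuming the paths of the multipathed logical unit were reported by multipath -ll as sdax and sday (hypothetical names):

for dev in sdax sday; do
    echo 1 > /sys/block/$dev/device/rescan
done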

25.16.2. Resizing an iSCSI Logical Unit


After modifying the online logical unit size, re-scan the logical unit to ensure that the system detects the
updated size. To do this for iSCSI devices, use the following command:

# iscsiadm -m node --targetname target_name -R [5]

Replace target_name with the name of the target where the device is located.

NOTE

You can also re-scan iSCSI logical units using the following command:

# iscsiadm -m node -R -I interface

Replace interface with the corresponding interface name of the resized logical unit (for
example, iface0). This command performs two operations:

It scans for new devices in the same way that the command echo "- - -" >
/sys/class/scsi_host/host/scan does (refer to Section 25.14, “Scanning
iSCSI Interconnects”).

It re-scans for new/modified logical units the same way that the command echo
1 > /sys/block/sdX/device/rescan does. Note that this command is the
same one used for re-scanning Fibre Channel logical units.


25.16.3. Updating the Size of Your Multipath Device


If multipathing is enabled on your system, you will also need to reflect the change in logical unit size to
the logical unit's corresponding multipath device (after resizing the logical unit). This can be done through
multipathd. To do so, first ensure that multipathd is running using service multipathd
status. Once you've verified that multipathd is operational, run the following command:

# multipathd -k"resize map multipath_device"

The multipath_device variable is the corresponding multipath entry of your device in /dev/mapper.
Depending on how multipathing is set up on your system, multipath_device can be either of two
formats:

mpathX, where X is the corresponding entry of your device (for example, mpath0)

a WWID; for example, 3600508b400105e210000900000490000

To determine which multipath entry corresponds to your resized logical unit, run multipath -ll. This
displays a list of all existing multipath entries in the system, along with the major and minor numbers of
their corresponding devices.
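For example, if multipath -ll showed that the resized logical unit corresponds to the WWID-style map name used above, the resize and a quick verification might look like this (sketch only):

# multipathd -k"resize map 3600508b400105e210000900000490000"
# multipath -ll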

IMPORTANT

Do not use multipathd -k"resize map multipath_device" if there are any
commands queued to multipath_device. That is, do not use this command when the
no_path_retry parameter (in /etc/multipath.conf) is set to "queue", and there
are no active paths to the device.

For more information about multipathing, refer to the Red Hat Enterprise Linux 7 DM Multipath guide.

25.16.4. Changing the Read/Write State of an Online Logical Unit


Certain storage devices provide the user with the ability to change the state of the device from
Read/Write (R/W) to Read-Only (RO), and from RO to R/W. This is typically done through a
management interface on the storage device. The operating system will not automatically update its view
of the state of the device when a change is made. Follow the procedures described in this chapter to
make the operating system aware of the change.

Run the following command, replacing XYZ with the desired device designator, to determine the
operating system's current view of the R/W state of a device:

# blockdev --getro /dev/sdXYZ

The following command is also available for Red Hat Enterprise Linux 7:

# cat /sys/block/sdXYZ/ro
1 = read-only, 0 = read-write

When using multipath, refer to the ro or rw field in the second line of output from the multipath -ll
command. For example:

36001438005deb4710000500000640000 dm-8 GZ,GZ500
[size=20G][features=0][hwhandler=0][ro]
\_ round-robin 0 [prio=200][active]
\_ 6:0:4:1 sdax 67:16 [active][ready]
\_ 6:0:5:1 sday 67:32 [active][ready]
\_ round-robin 0 [prio=40][enabled]
\_ 6:0:6:1 sdaz 67:48 [active][ready]
\_ 6:0:7:1 sdba 67:64 [active][ready]

To change the R/W state, use the following procedure:

Procedure 25.12. Change the R/W State

1. To move the device from RO to R/W, see step 2.

To move the device from R/W to RO, ensure no further writes will be issued. Do this by stopping
the application, or through the use of an appropriate, application-specific action.

Ensure that all outstanding write I/Os are complete with the following command:

# blockdev --flushbufs /dev/device

Replace device with the desired designator; for a device mapper multipath, this is the entry for
your device in /dev/mapper. For example, /dev/mapper/mpath3.

2. Use the management interface of the storage device to change the state of the logical unit from
R/W to RO, or from RO to R/W. The procedure for this differs with each array. Consult
applicable storage array vendor documentation for more information.

3. Perform a re-scan of the device to update the operating system's view of the R/W state of the
device. If using a device mapper multipath, perform this re-scan for each path to the device
before issuing the command telling multipath to reload its device maps.

This process is explained in further detail in Section 25.16.4.1, “Rescanning Logical Units” .

25.16.4.1. Rescanning Logical Units

After modifying the online logical unit Read/Write state, as described in Section 25.16.4, “Changing the
Read/Write State of an Online Logical Unit”, re-scan the logical unit to ensure the system detects the
updated state with the following command:

# echo 1 > /sys/block/sdX/device/rescan

To re-scan logical units on a system that uses multipathing, execute the above command for each sd
device that represents a path for the multipathed logical unit. For example, run the command on sd1, sd2
and all other sd devices. To determine which devices are paths for a multipath unit, use multipath -
ll, then find the entry that matches the logical unit to be changed.

Example 25.15. Use of the multipath -ll Command

For example, the multipath -ll output above shows the path for the LUN with WWID
36001438005deb4710000500000640000. In this case, enter:

# echo 1 > /sys/block/sdax/device/rescan


# echo 1 > /sys/block/sday/device/rescan
# echo 1 > /sys/block/sdaz/device/rescan
# echo 1 > /sys/block/sdba/device/rescan


25.16.4.2. Updating the R/W State of a Multipath Device

If multipathing is enabled, after rescanning the logical unit, the change in its state will need to be reflected
in the logical unit's corresponding multipath device. Do this by reloading the multipath device maps with
the following command:

# multipath -r

The multipath -ll command can then be used to confirm the change.

25.16.4.3. Documentation

Further information can be found in the Red Hat Knowledgebase. To access this, navigate to
https://www.redhat.com/wapps/sso/login.html?redirect=https://access.redhat.com/knowledge/ and log in.
Then access the article at https://access.redhat.com/kb/docs/DOC-32850.

25.17. ADDING/REMOVING A LOGICAL UNIT THROUGH RESCAN-SCSI-BUS.SH

The sg3_utils package provides the rescan-scsi-bus.sh script, which can automatically update
the logical unit configuration of the host as needed (after a device has been added to the system). The
rescan-scsi-bus.sh script can also perform an issue_lip on supported devices. For more
information about how to use this script, refer to rescan-scsi-bus.sh --help.

To install the sg3_utils package, run yum install sg3_utils.

Known Issues with rescan-scsi-bus.sh


When using the rescan-scsi-bus.sh script, take note of the following known issues:

In order for rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical
unit. The rescan-scsi-bus.sh script can only detect the first mapped logical unit if it is LUN0, and
it cannot scan any other logical units unless it detects the first mapped logical unit, even if you
use the --nooptscan option.

A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped
for the first time. During the first scan, rescan-scsi-bus.sh only adds LUN0; all other logical
units are added in the second scan.

A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing
a change in logical unit size when the --remove option is used.

The rescan-scsi-bus.sh script does not recognize iSCSI logical unit removals.

25.18. MODIFYING LINK LOSS BEHAVIOR


This section describes how to modify the link loss behavior of devices that use either Fibre Channel or
iSCSI protocols.

25.18.1. Fibre Channel


If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link
will be blocked when a transport problem is detected. To verify if a device is blocked, run the following
command:

$ cat /sys/block/device/device/state

This command will return blocked if the device is blocked. If the device is operating normally, this
command will return running.

Procedure 25.13. Determining the State of a Remote Port

1. To determine the state of a remote port, run the following command:

$ cat /sys/class/fc_remote_port/rport-H:B:R/port_state

2. This command will return Blocked when the remote port (along with devices accessed through
it) is blocked. If the remote port is operating normally, the command will return Online.

3. If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be
unblocked and all I/O running on that device (along with any new I/O sent to that device) will be
failed.

Procedure 25.14. Changing dev_loss_tmo

To change the dev_loss_tmo value, echo in the desired value to the file. For example, to set
dev_loss_tmo to 30 seconds, run:

$ echo 30 > /sys/class/fc_remote_port/rport-H:B:R/dev_loss_tmo

For more information about dev_loss_tmo, refer to Section 25.3.1, “Fibre Channel API”.

When a link loss exceeds dev_loss_tmo, the scsi_device and sdN devices are removed. Typically,
the Fibre Channel class will leave the device as is; i.e. /dev/sdx will remain /dev/sdx. This is
because the target binding is saved by the Fibre Channel driver so when the target port returns, the
SCSI addresses are recreated faithfully. However, this cannot be guaranteed; the sdx name will be
restored only if no additional changes are made to the LUN configuration within the storage array.

25.18.2. iSCSI Settings with dm-multipath

If dm-multipath is implemented, it is advisable to set iSCSI timers to immediately defer commands to
the multipath layer. To configure this, nest the following line under device { in
/etc/multipath.conf:

features "1 queue_if_no_path"

This ensures that I/O errors are retried and queued if all paths are failed in the dm-multipath layer.

You may need to adjust iSCSI timers further to better monitor your SAN for problems. Available iSCSI
timers you can configure are NOP-Out Interval/Timeouts and replacement_timeout, which are
discussed in the following sections.


25.18.2.1. NOP-Out Interval/Timeout

To help monitor problems in the SAN, the iSCSI layer sends a NOP-Out request to each target. If a NOP-
Out request times out, the iSCSI layer responds by failing any running commands and instructing the
SCSI layer to requeue those commands when possible.

When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to
the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath
is not being used, those commands are retried five times before failing altogether.

Intervals between NOP-Out requests are 10 seconds by default. To adjust this, open
/etc/iscsi/iscsid.conf and edit the following line:

node.conn[0].timeo.noop_out_interval = [interval value]

Once set, the iSCSI layer will send a NOP-Out request to each target every [interval value] seconds.

By default, NOP-Out requests time out in 10 seconds[9]. To adjust this, open
/etc/iscsi/iscsid.conf and edit the following line:

node.conn[0].timeo.noop_out_timeout = [timeout value]

This sets the iSCSI layer to timeout a NOP-Out request after [timeout value] seconds.

SCSI Error Handler

If the SCSI Error Handler is running, running commands on a path will not be failed immediately when a
NOP-Out request times out on that path. Instead, those commands will be failed after
replacement_timeout seconds. For more information about replacement_timeout, refer to
Section 25.18.2.2, “replacement_timeout”.

To verify if the SCSI Error Handler is running, run:

# iscsiadm -m session -P 3

25.18.2.2. replacement_timeout

replacement_timeout controls how long the iSCSI layer should wait for a timed-out path/session to
reestablish itself before failing any commands on it. The default replacement_timeout value is 120
seconds.

To adjust replacement_timeout, open /etc/iscsi/iscsid.conf and edit the following line:

node.session.timeo.replacement_timeout = [replacement_timeout]

The 1 queue_if_no_path option in /etc/multipath.conf sets iSCSI timers to immediately defer
commands to the multipath layer (refer to Section 25.18.2, “iSCSI Settings with dm-multipath”). This
setting prevents I/O errors from propagating to the application; because of this, you can set
replacement_timeout to 15-20 seconds.

By configuring a lower replacement_timeout, I/O is quickly sent to a new path and executed (in the
event of a NOP-Out timeout) while the iSCSI layer attempts to re-establish the failed path/session. If all
paths time out, then the multipath and device mapper layer will internally queue I/O based on the settings
in /etc/multipath.conf instead of /etc/iscsi/iscsid.conf.

IMPORTANT

Whether your considerations are failover speed or security, the recommended value for
replacement_timeout will depend on other factors. These factors include the network,
target, and system workload. As such, it is recommended that you thoroughly test any
new replacement_timeout configuration before applying it to a mission-critical
system.

25.18.3. iSCSI Root


When accessing the root partition directly through an iSCSI disk, the iSCSI timers should be set so that
the iSCSI layer has several chances to try to reestablish a path/session. In addition, commands should
not be quickly re-queued to the SCSI layer. This is the opposite of what should be done when dm-
multipath is implemented.

To start with, NOP-Outs should be disabled. You can do this by setting both NOP-Out interval and
timeout to zero. To set this, open /etc/iscsi/iscsid.conf and edit as follows:

node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0

In line with this, replacement_timeout should be set to a high number. This will instruct the system to
wait a long time for a path/session to reestablish itself. To adjust replacement_timeout, open
/etc/iscsi/iscsid.conf and edit the following line:

node.session.timeo.replacement_timeout = replacement_timeout

After configuring /etc/iscsi/iscsid.conf, you must perform a re-discovery of the affected storage.
This will allow the system to load and use any new values in /etc/iscsi/iscsid.conf. For more
information on how to discover iSCSI devices, refer to Section 25.14, “Scanning iSCSI Interconnects”.

Configuring Timeouts for a Specific Session

You can also configure timeouts for a specific session and make them non-persistent (instead of using
/etc/iscsi/iscsid.conf). To do so, run the following command (replace the variables accordingly):

# iscsiadm -m node -T target_name -p target_IP:port -o update -n node.session.timeo.replacement_timeout -v $timeout_value
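For example, a single session to a hypothetical target could be given a 7200-second replacement timeout without touching /etc/iscsi/iscsid.conf (the target name, portal, and value are illustrative only):

# iscsiadm -m node -T iqn.2014-01.com.example:target0 -p 192.168.1.100:3260 -o update -n node.session.timeo.replacement_timeout -v 7200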

IMPORTANT

The configuration described here is recommended for iSCSI sessions involving root
partition access. For iSCSI sessions involving access to other types of storage (namely, in
systems that use dm-multipath), refer to Section 25.18.2, “iSCSI Settings with dm-
multipath”.

25.19. CONTROLLING THE SCSI COMMAND TIMER AND DEVICE STATUS


The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will
quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or
complete. Afterwards, the SCSI layer will activate the driver's error handler.

When the error handler is triggered, it attempts the following operations in order (until one successfully
executes):

1. Abort the command.

2. Reset the device.

3. Reset the bus.

4. Reset the host.

If all of these operations fail, the device will be set to the offline state. When this occurs, all I/O to that
device will be failed, until the problem is corrected and the user sets the device to running.

The process is different, however, if a device uses the Fibre Channel protocol and the rport is blocked.
In such cases, the drivers wait for several seconds for the rport to become online again before
activating the error handler. This prevents devices from becoming offline due to temporary transport
problems.

Device States
To display the state of a device, use:

$ cat /sys/block/device-name/device/state

To set a device to the running state, use:

# echo running > /sys/block/device-name/device/state

Command Timer
To control the command timer, modify the /sys/block/device-name/device/timeout file:

# echo value > /sys/block/device-name/device/timeout

Replace value in the command with the timeout value, in seconds, that you want to implement.

25.20. TROUBLESHOOTING ONLINE STORAGE CONFIGURATION


This section provides solutions to common problems users experience during online storage
reconfiguration.

Logical unit removal status is not reflected on the host.


When a logical unit is deleted on a configured filer, the change is not reflected on the host. In such
cases, lvm commands will hang indefinitely when dm-multipath is used, as the logical unit has
now become stale.

To work around this, perform the following procedure:

Procedure 25.15. Working Around Stale Logical Units


1. Determine which mpath link entries in /etc/lvm/cache/.cache are specific to the stale
logical unit. To do this, run the following command:

$ ls -l /dev/mpath | grep stale-logical-unit

Example 25.16. Determine Specific mpath Link Entries

For example, if stale-logical-unit is 3600d0230003414f30000203a7bc41a00, the
following results may appear:

lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5

This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links:
dm-4 and dm-5.

2. Next, open /etc/lvm/cache/.cache. Delete all lines containing stale-logical-unit
and the mpath links that stale-logical-unit maps to.

Example 25.17. Delete Relevant Lines

Using the same example in the previous step, the lines you need to delete are:

/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1

25.21. CONFIGURING MAXIMUM TIME FOR ERROR RECOVERY WITH EH_DEADLINE


IMPORTANT

In most scenarios, you do not need to enable the eh_deadline parameter. Using the
eh_deadline parameter can be useful in certain specific scenarios, for example if a link
loss occurs between a Fibre Channel switch and a target port, and the Host Bus Adapter
(HBA) does not receive Registered State Change Notifications (RSCNs). In such a case,
I/O requests and error recovery commands all time out rather than encounter an error.
Setting eh_deadline in this environment puts an upper limit on the recovery time, which
enables the failed I/O to be retried on another available path by multipath.

However, if RSCNs are enabled, the HBA registers the link becoming
unavailable, or both, the eh_deadline functionality provides no additional benefit, as the
I/O and error recovery commands fail immediately, which allows multipath to retry.

The SCSI host object eh_deadline parameter enables you to configure the maximum amount of time
that the SCSI error handling mechanism attempts to perform error recovery before stopping and resetting
the entire HBA.

The value of the eh_deadline is specified in seconds. The default setting is off, which disables the
time limit and allows all of the error recovery to take place. In addition to using sysfs, a default value
can be set for all SCSI HBAs by using the scsi_mod.eh_deadline kernel parameter.
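A minimal sketch of both methods follows; the host number host5 and the 60-second value are illustrative, and the sysfs path assumes the standard per-host eh_deadline attribute:

# echo 60 > /sys/class/scsi_host/host5/eh_deadline

For a system-wide default, the same value could instead be passed on the kernel command line as scsi_mod.eh_deadline=60.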

Note that when eh_deadline expires, the HBA is reset, which affects all target paths on that HBA, not
only the failing one. As a consequence, I/O errors can occur if some of the redundant paths are not
available for other reasons. Enable eh_deadline only if you have a fully redundant multipath
configuration on all targets.

[5] The target_IP and port variables refer to the IP address and port combination of a target/portal, respectively. For
more information, refer to Section 25.6.1, “iSCSI API” and Section 25.14, “Scanning iSCSI Interconnects” .

[6] Refer to Section 25.14, “Scanning iSCSI Interconnects” for information on proper_target_name .

[7] For information on how to retrieve a session's SID value, refer to Section 25.6.1, “iSCSI API”.

[8] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document.
All concatenated lines — preceded by the backslash (\) — should be treated as one command, sans backslashes.

[9] Prior to Red Hat Enterprise Linux 5.4, the default NOP-Out requests time out was 15 seconds.


CHAPTER 26. DEVICE MAPPER MULTIPATHING AND VIRTUAL STORAGE

Red Hat Enterprise Linux 7 supports DM-Multipath and virtual storage. Both features are documented in
detail in the Red Hat books DM Multipath and Virtualization Deployment and Administration Guide.

26.1. VIRTUAL STORAGE


Red Hat Enterprise Linux 7 supports the following file systems/online storage methods for virtual storage:

Fibre Channel

iSCSI

NFS

GFS2

Virtualization in Red Hat Enterprise Linux 7 uses libvirt to manage virtual instances. The libvirt
utility uses the concept of storage pools to manage storage for virtualized guests. A storage pool is
storage that can be divided up into smaller volumes or allocated directly to a guest. Volumes of a storage
pool can be allocated to virtualized guests. There are two categories of storage pools available:

Local storage pools


Local storage covers storage devices, files or directories directly attached to a host. Local storage
includes local directories, directly attached disks, and LVM Volume Groups.

Networked (shared) storage pools


Networked storage covers storage devices shared over a network using standard protocols. It
includes shared storage devices using Fibre Channel, iSCSI, NFS, GFS2, and SCSI RDMA
protocols, and is a requirement for migrating virtualized guests between hosts.

IMPORTANT

For comprehensive information on the deployment and configuration of virtual storage instances in your environment, refer to the Virtualization Deployment and Administration Guide provided by Red Hat.

26.2. DM-MULTIPATH
Device Mapper Multipathing (DM-Multipath) is a feature that allows you to configure multiple I/O paths
between server nodes and storage arrays into a single device. These I/O paths are physical SAN
connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O
paths, creating a new device that consists of the aggregated paths.

DM-Multipath is used primarily for the following reasons:

Redundancy
DM-Multipath can provide failover in an active/passive configuration. In an active/passive
configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable,
switch, or controller) fails, DM-Multipath switches to an alternate path.


Improved Performance
DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-
robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and
dynamically re-balance the load.
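
As a quick orientation only (the DM Multipath guide remains the authoritative reference), a minimal sketch of enabling DM-Multipath with its default configuration on Red Hat Enterprise Linux 7 looks like this:

# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y
# multipath -ll

The mpathconf command creates a default /etc/multipath.conf and starts the multipathd service, and multipath -ll then displays the resulting multipath topology.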

IMPORTANT

For comprehensive information on the deployment and configuration of DM Multipath in your environment, refer to the DM Multipath guide provided by Red Hat.


CHAPTER 27. EXTERNAL ARRAY MANAGEMENT (LIBSTORAGEMGMT)

Red Hat Enterprise Linux 7 ships with a new external array management library called
libStorageMgmt.

27.1. INTRODUCTION TO LIBSTORAGEMGMT


The libStorageMgmt library is a storage array independent Application Programming Interface (API).
It provides a stable and consistent API that allows developers to programmatically manage
different storage arrays and to leverage the hardware-accelerated features they provide.

This library is used as a building block for other higher level management tools and applications. End
system administrators can also use it as a tool to manually manage storage and automate storage
management tasks with the use of scripts.

The libStorageMgmt library allows operations such as:

List storage pools, volumes, access groups, or file systems.

Create and delete volumes, access groups, file systems, or NFS exports.

Grant and remove access to volumes, access groups, or initiators.

Replicate volumes with snapshots, clones, and copies.

Create and delete access groups and edit members of a group.

Server resources such as CPU and interconnect bandwidth are not utilized because the operations are
all done on the array.

The libstoragemgmt package provides:

A stable C and Python API for client application and plug-in developers.

A command-line interface that utilizes the library (lsmcli).

A daemon that executes the plug-in (lsmd).

A simulator plug-in that allows the testing of client applications (sim).

Plug-in architecture for interfacing with arrays.



WARNING

This library and its associated tool have the ability to destroy any and all data
located on the arrays it manages. It is highly recommended to develop and test
applications and scripts against the storage simulator plug-in to remove any logic
errors before working with production systems. Testing applications and scripts on
actual non-production hardware before deploying to production is also strongly
encouraged if possible.

The libStorageMgmt library in Red Hat Enterprise Linux 7 adds a default udev rule to handle the
REPORTED LUNS DATA HAS CHANGED unit attention.

When a storage configuration change has taken place, one of several Unit Attention ASC/ASCQ codes
reports the change. A uevent is then generated, and the device is rescanned automatically through sysfs.

The file /lib/udev/rules.d/90-scsi-ua.rules contains example rules to enumerate other events that the kernel can generate.

The libStorageMgmt library uses a plug-in architecture to accommodate differences in storage arrays.
For more information on libStorageMgmt plug-ins and how to write them, refer to the Red Hat
Developer Guide.

27.2. LIBSTORAGEMGMT TERMINOLOGY


Different array vendors and storage standards use different terminology to refer to similar functionality.
This library uses the following terminology.

Storage array
Any storage system that provides block access (FC, FCoE, iSCSI) or file access through Network
Attached Storage (NAS).

Volume
Storage Area Network (SAN) storage arrays can expose a volume to the Host Bus Adapter (HBA)
over different transports, such as FC, iSCSI, or FCoE. The host OS treats it as a block device.
(One volume can appear as multiple disk devices on the host if multipath[2] is enabled.)

This is also known as the Logical Unit Number (LUN), StorageVolume with SNIA terminology, or
virtual disk.

Pool
A group of storage spaces. File systems or volumes can be created from a pool. Pools can be
created from disks, volumes, and other pools. A pool may also hold RAID settings or thin provisioning
settings.

This is also known as a StoragePool with SNIA Terminology.

Snapshot
A point in time, read only, space efficient copy of data.


This is also known as a read only snapshot.

Clone
A point in time, read writeable, space efficient copy of data.

This is also known as a read writeable snapshot.

Copy
A full bitwise copy of the data. It occupies the full space.

Mirror
A continuously updated copy (synchronous and asynchronous).

Access group
Collections of iSCSI, FC, and FCoE initiators which are granted access to one or more storage
volumes. This ensures that the storage volumes are accessible only by the specified initiators.

This is also known as an initiator group.

Access Grant
Exposing a volume to a specified access group or initiator. The libStorageMgmt library currently
does not support LUN mapping with the ability to choose a specific logical unit number. The
libStorageMgmt library allows the storage array to select the next available LUN for assignment. If
you are configuring boot from SAN or masking more than 256 volumes, be sure to read the OS, storage array, and HBA documentation.

Access grant is also known as LUN Masking.

System
Represents a storage array or a direct attached storage RAID.

File system
A Network Attached Storage (NAS) storage array can expose a file system to a host OS through an
IP network, using either the NFS or CIFS protocol. The host OS treats it as a mount point or as a
folder containing files, depending on the client operating system.

Disk
The physical disk holding the data. This is normally used when creating a pool with RAID settings.

This is also known as a DiskDrive using SNIA Terminology.

Initiator
In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the initiator is the World Wide Port
Name (WWPN) or World Wide Node Name (WWNN). In iSCSI, the initiator is the iSCSI Qualified
Name (IQN). In NFS or CIFS, the initiator is the host name or the IP address of the host.

Child dependency
Some arrays have an implicit relationship between the origin (parent volume or file system) and the
child (such as a snapshot or a clone). For example, it is impossible to delete the parent if it has one
or more dependent children. The API provides methods to determine if any such relationship exists and a method to remove the dependency by replicating the required blocks.

27.3. INSTALLING LIBSTORAGEMGMT


To install libStorageMgmt for command-line use, together with the required run-time libraries and the simulator plug-in, use the following command:

$ sudo yum install libstoragemgmt libstoragemgmt-python

To develop C applications that utilize the library, install the libstoragemgmt-devel package with the
following command:

# yum install libstoragemgmt-devel

To install libStorageMgmt for use with hardware arrays, select one or more of the appropriate plug-in
packages with the following command:

$ sudo yum install libstoragemgmt-name-plugin

The following plug-ins are available:

libstoragemgmt-smis-plugin
Generic SMI-S array support.

libstoragemgmt-netapp-plugin
Specific support for NetApp filers (ONTAP).

libstoragemgmt-nstor-plugin
Specific support for NexentaStor.

libstoragemgmt-targetd-plugin
Specific support for targetd.

The daemon is installed and configured to run at startup, but it does not start until the next reboot. To use it immediately without rebooting, start the daemon manually.

Managing an array requires support through a plug-in. The base install package includes open source
plug-ins for a number of different vendors. Additional plug-in packages will be available separately as
array support improves. Currently supported hardware is constantly changing and improving.

The libStorageMgmt daemon (lsmd) behaves like any standard service for the system.

To check on the status of the libStorageMgmt service, use:

$ sudo systemctl status libstoragemgmt

To stop the service use:

$ sudo systemctl stop libstoragemgmt


To start the service use:

$ sudo systemctl start libstoragemgmt

27.4. USING LIBSTORAGEMGMT


To use libStorageMgmt interactively, use the lsmcli tool.

The lsmcli tool requires two things to run:

A Uniform Resource Identifier (URI) which is used to identify the plug-in to connect to the array
and any configurable options the array requires.

A valid user name and password for the array.

URI has the following form:

plugin+optional-transport://user-name@host:port/?query-string-parameters

Each plug-in has different requirements for the information it needs.

Example 27.1. Examples of Different Plug-in Requirements

Simulator Plug-in That Requires No User Name or Password


sim://

NetApp Plug-in over SSL with User Name root


ontap+ssl://[email protected]/

SMI-S Plug-in over SSL for EMC Array


smis+ssl://[email protected]:5989/?namespace=root/emc

There are three options to use the URI:

1. Pass the URI as part of the command.

$ lsmcli -u sim://...

2. Store the URI in an environment variable.

$ export LSMCLI_URI=sim:// && lsmcli ...

3. Place the URI in the file ~/.lsmcli, which contains name-value pairs separated by "=". The
only currently supported configuration is 'uri'.

The URI is determined in the order listed above; if more than one is supplied, the URI on the command line takes precedence.

Supply the password by specifying the -P option on the command line or by placing it in the LSMCLI_PASSWORD environment variable.
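
For example, a non-interactive session might export both variables before invoking lsmcli; the values below are placeholders, and the simulator plug-in is assumed here not to require a real password:

$ export LSMCLI_URI='sim://'
$ export LSMCLI_PASSWORD='password'
$ lsmcli list --type SYSTEMS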


Example 27.2. Examples of lsmcli

The following example uses the command line to create a new volume and make it visible to an initiator.

List arrays that are serviced by this connection.

$ lsmcli list --type SYSTEMS


ID | Name | Status
-------+-------------------------------+--------
sim-01 | LSM simulated storage plug-in | OK

List storage pools.

$ lsmcli list --type POOLS -H
ID   | Name          | Total space          | Free space           | System ID
-----+---------------+----------------------+----------------------+----------
POO2 | Pool 2        | 18446744073709551616 | 18446744073709551616 | sim-01
POO3 | Pool 3        | 18446744073709551616 | 18446744073709551616 | sim-01
POO1 | Pool 1        | 18446744073709551616 | 18446744073709551616 | sim-01
POO4 | lsm_test_aggr | 18446744073709551616 | 18446744073709551616 | sim-01

Create a volume.

$ lsmcli volume-create --name volume_name --size 20G --pool POO1 -H
ID   | Name        | vpd83                            | bs  | #blocks  | status | ...
-----+-------------+----------------------------------+-----+----------+--------+-----
Vol1 | volume_name | F7DDF7CA945C66238F593BC38137BD2F | 512 | 41943040 | OK     | ...

Create an access group with an iSCSI initiator in it.

$ lsmcli --create-access-group example_ag --id iqn.1994-05.com.domain:01.89bd01 --type ISCSI --system sim-01
ID                               | Name       | Initiator ID                     | SystemID
---------------------------------+------------+----------------------------------+---------
782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 | sim-01

Create an access group with an iSCSI initiator in it:

$ lsmcli access-group-create --name example_ag --init iqn.1994-05.com.domain:01.89bd01 --init-type ISCSI --sys sim-01
ID                               | Name       | Initiator IDs                    | System ID
---------------------------------+------------+----------------------------------+----------
782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-05.com.domain:01.89bd01 | sim-01

Allow the access group visibility to the newly created volume:

$ lsmcli access-group-grant --ag 782d00c8ac63819d6cca7069282e03a0 --vol Vol1 --access RW

The design of the library provides for a process separation between the client and the plug-in by means
of inter-process communication (IPC). This prevents bugs in the plug-in from crashing the client
application. It also provides a means for plug-in writers to write plug-ins with a license of their own
choosing. When a client opens the library passing a URI, the client library looks at the URI to determine
which plug-in should be used.

The plug-ins are technically stand alone applications but they are designed to have a file descriptor
passed to them on the command line. The client library then opens the appropriate Unix domain socket
which causes the daemon to fork and execute the plug-in. This gives the client library a point to point
communication channel with the plug-in. The daemon can be restarted without affecting existing clients.
While the client has the library open for that plug-in, the plug-in process is running. After one or more
commands are sent and the plug-in is closed, the plug-in process cleans up and then exits.

The default behavior of lsmcli is to wait until the operation is complete. Depending on the requested
operations, this could potentially take many hours. To allow a return to normal usage, it is possible
to use the -b option on the command line. If the exit code is 0, the command is complete. If the exit
code is 7, the command is in progress and a job identifier is written to standard output. The user or script
can then take the job ID and query the status of the command as needed by using lsmcli job-status --job JobID.
If the job is now complete, the exit value will be 0 and the results will be printed to standard output.
If the command is still in progress, the return value will be 7 and the percentage complete will be printed
to standard output.

Example 27.3. An Asynchronous Example

Create a volume passing the -b option so that the command returns immediately.

$ lsmcli volume-create --name async_created --size 20G --pool POO1 -b
JOB_3

Check to see what the exit value was, remembering that 7 indicates the job is still in progress.

$ echo $?
7

Check to see if the job is completed.

$ lsmcli job-status --job JOB_3
33

Check the exit value again. An exit value of 7 indicates that the job is still in progress, and the standard output shows the percentage complete (33% in the output above).

$ echo $?


Wait some more and check it again, remembering that exit 0 means success and standard out
displays the new volume.

$ lsmcli job-status --job JOB_3
ID   | Name          | vpd83                            | Block Size | ...
-----+---------------+----------------------------------+------------+-----
Vol2 | async_created | 855C9BA51991B0CC122A3791996F6B15 | 512        | ...

For scripting, pass the -t SeparatorCharacters option. This will make it easier to parse the output.

Example 27.4. Scripting Examples

$ lsmcli list --type volumes -t#
Vol1#volume_name#049167B5D09EC0A173E92A63F6C3EA2A#512#41943040#21474836480#OK#sim-01#POO1
Vol2#async_created#3E771A2E807F68A32FA5E15C235B60CC#512#41943040#21474836480#OK#sim-01#POO1

$ lsmcli list --type volumes -t " | "
Vol1 | volume_name | 049167B5D09EC0A173E92A63F6C3EA2A | 512 | 41943040 | 21474836480 | OK | sim-01 | POO1
Vol2 | async_created | 3E771A2E807F68A32FA5E15C235B60CC | 512 | 41943040 | 21474836480 | OK | sim-01 | POO1

$ lsmcli list --type volumes -s


---------------------------------------------
ID | Vol1
Name | volume_name
VPD83 | 049167B5D09EC0A173E92A63F6C3EA2A
Block Size | 512
#blocks | 41943040
Size | 21474836480
Status | OK
System ID | sim-01
Pool ID | POO1
---------------------------------------------
ID | Vol2
Name | async_created
VPD83 | 3E771A2E807F68A32FA5E15C235B60CC
Block Size | 512
#blocks | 41943040
Size | 21474836480
Status | OK
System ID | sim-01
Pool ID | POO1
---------------------------------------------


It is recommended to use the Python library for non-trivial scripting.
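
For instance, a minimal sketch using the lsm Python bindings from the libstoragemgmt-python package might look like the following; the method names are an assumption and should be verified against the API documentation installed with your version:

#!/usr/bin/python
# Minimal sketch: connect to the simulator plug-in and list systems and pools.
import lsm

client = lsm.Client('sim://')          # same URI syntax as lsmcli
try:
    for system in client.systems():
        print('System: %s (%s)' % (system.name, system.id))
    for pool in client.pools():
        print('Pool: %s, total %d bytes' % (pool.name, pool.total_space))
finally:
    client.close()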

For more information on lsmcli, refer to the man pages or the command lsmcli --help.


CHAPTER 28. PERSISTENT MEMORY: NVDIMMS


Persistent memory (pmem), also called storage class memory, is a combination of memory and
storage. pmem combines the durability of storage with the low access latency and the high bandwidth of
dynamic RAM (DRAM):

Persistent memory is byte-addressable, so it can be accessed by using CPU load and store
instructions. In addition to read() or write() system calls that are required for accessing
traditional block-based storage, pmem also supports a direct load and store programming model.

The performance characteristics of persistent memory are similar to DRAM with very low access
latency, typically in the tens to hundreds of nanoseconds.

Contents of persistent memory are preserved when the power is off, like with storage.

Using persistent memory is beneficial for use cases like:

Rapid start: data set is already in memory.


Rapid start is also called the warm cache effect. A file server has none of the file contents in memory
after starting. As clients connect and read and write data, that data is cached in the page cache.
Eventually, the cache contains mostly hot data. After a reboot, the system must start the process
again.

Persistent memory allows an application to keep the warm cache across reboots if the application is
designed properly. In this instance, there would be no page cache involved: the application would
cache data directly in the persistent memory.

Fast write-cache
File servers often do not acknowledge a client's write request until the data is on durable media. Using
persistent memory as a fast write cache enables a file server to acknowledge the write request
quickly thanks to the low latency of pmem.

NVDIMMs Interleaving
Non-Volatile Dual In-line Memory Modules (NVDIMMs) can be grouped into interleave sets in the same
way as regular DRAM. An interleave set is like a RAID 0 (stripe) across multiple DIMMs.

The advantages of NVDIMM interleaving are as follows:

Like DRAM, NVDIMMs benefit from increased performance when they are configured into
interleave sets.

It can be used to combine multiple smaller NVDIMMs into one larger logical device.

Use the system BIOS or UEFI firmware to configure interleave sets.

In Linux, one region device is created per interleave set.

Following is the relation between region devices and labels:

If your NVDIMMs support labels, the region device can be further subdivided into namespaces.

If your NVDIMMs do not support labels, the region devices can only contain a single namespace.
In this case, the kernel creates a default namespace which covers the entire region.

Persistent Memory Access Modes


You can use persistent memory in sector, memory, dax (Direct Access) or raw mode:

sector mode
It presents the storage as a fast block device. Using sector mode is useful for legacy applications that
have not been modified to use persistent memory, or for applications that make use of the full I/O
stack, including the Device Mapper.

memory mode
It enables persistent memory devices to support direct access programming as described in the
Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model
specification. In memory mode, I/O bypasses the storage stack of the kernel, and many Device
Mapper drivers therefore cannot be used.

dax mode
The dax mode, also called device DAX, provides raw access to persistent memory by using a DAX
character device node. Data on a DAX device can be made durable using CPU cache flushing and
fencing instructions. Certain databases and virtual machine hypervisors might benefit from DAX
mode. File systems cannot be created on device dax instances.

raw mode
The raw mode namespaces have several limitations and should not be used.

28.1. CONFIGURING PERSISTENT MEMORY WITH NDCTL


Use the ndctl utility to configure persistent memory devices. To install the ndctl utility, use the following
command:

# yum install ndctl

Procedure 28.1. Configuring Persistent Memory for a device that does not support labels

1. List the available pmem regions on your system. In the following example, the command lists an
NVDIMM-N device that does not support labels:

# ndctl list --regions


[
{
"dev":"region1",
"size":34359738368,
"available_size":0,
"type":"pmem"
},
{
"dev":"region0",
"size":34359738368,
"available_size":0,
"type":"pmem"
}
]


The OS creates a default namespace for each region because the NVDIMM-N device here does not
support labels. Hence, the available size is 0 bytes.

2. List all the inactive namespaces on your system:

# ndctl list --namespaces --idle


[
{
"dev":"namespace1.0",
"mode":"raw",
"size":34359738368,
"state":"disabled",
"numa_node":1
},
{
"dev":"namespace0.0",
"mode":"raw",
"size":34359738368,
"state":"disabled",
"numa_node":0
}
]

3. Reconfigure the inactive namespaces in order to make use of this space. For example, to use
namespace0.0 for a file system that supports DAX, use the following command:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=memory --map=mem
{
"dev":"namespace0.0",
"mode":"memory",
"size":"32.00 GiB (34.36 GB)",
"uuid":"ab91cc8f-4c3e-482e-a86f-78d177ac655d",
"blockdev":"pmem0",
"numa_node":0
}

Procedure 28.2. Configuring Persistent Memory for a device that supports labels

1. List the available pmem regions on your system. In the following example, the command lists an
NVDIMM-N device that supports labels:

# ndctl list --regions


[
{
"dev":"region5",
"size":270582939648,
"available_size":270582939648,
"type":"pmem",
"iset_id":-7337419320239190016
},
{
"dev":"region4",
"size":270582939648,
"available_size":270582939648,

253
Storage Administration Guide

"type":"pmem",
"iset_id":-137289417188962304
}
]

2. If an NVDIMM supports labels, default namespaces are not created, and you can allocate one or
more namespaces from a region without using the --force or --reconfig flags:

# ndctl create-namespace --region=region4 --mode=memory --map=dev --size=36G
{
"dev":"namespace4.0",
"mode":"memory",
"size":"35.44 GiB (38.05 GB)",
"uuid":"9c5330b5-dc90-4f7a-bccd-5b558fa881fe",
"blockdev":"pmem4",
"numa_node":0
}

Now, you can create another namespace from the same region:

# ndctl create-namespace --region=region4 --mode=memory --map=dev --size=36G
{
"dev":"namespace4.1",
"mode":"memory",
"size":"35.44 GiB (38.05 GB)",
"uuid":"91868e21-830c-4b8f-a472-353bf482a26d",
"blockdev":"pmem4.1",
"numa_node":0
}

You can also create namespaces of different types from the same region, using the following
command:

# ndctl create-namespace --region=region4 --mode=dax --align=2M --size=36G
{
"dev":"namespace4.2",
"mode":"dax",
"size":"35.44 GiB (38.05 GB)",
"uuid":"a188c847-4153-4477-81bb-7143e32ffc5c",
"daxregion":
{
"id":4,
"size":"35.44 GiB (38.05 GB)",
"align":2097152,
"devices":[
{
"chardev":"dax4.2",
"size":"35.44 GiB (38.05 GB)"
}]
},
"numa_node":0
}


For more information on the ndctl utility, see man ndctl.

28.2. CONFIGURING PERSISTENT MEMORY FOR USE AS A BLOCK DEVICE (LEGACY MODE)

To use persistent memory as a fast block device, set the namespace to sector mode.

# ndctl create-namespace --force --reconfig=namespace1.0 --mode=sector


{
"dev":"namespace1.0",
"mode":"sector",
"size":17162027008,
"uuid":"029caa76-7be3-4439-8890-9c2e374bcc76",
"sector_size":4096,
"blockdev":"pmem1s"
}

In the example, namespace1.0 is reconfigured to sector mode. Note that the block device name
changed from pmem1 to pmem1s. This device can be used in the same way as any other block device on
the system. For example, the device can be partitioned, you can create a file system on the device, the
device can be configured as part of a software RAID set, and the device can be the cache device for dm-
cache.

28.3. CONFIGURING PERSISTENT MEMORY FOR FILE SYSTEM DIRECT ACCESS (DAX)

Direct access requires the namespace to be configured to memory mode. Memory mode allows for the
direct access programming model. When a device is configured in memory mode, a file system can be
created on top of it, and then mounted with the -o dax mount option. Then, any application that
performs an mmap() operation on a file on this file system gets direct access to its storage. See the
following example:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=memory --map=mem
{
"dev":"namespace0.0",
"mode":"memory",
"size":17177772032,
"uuid":"e6944638-46aa-4e06-a722-0b3f16a5acbf",
"blockdev":"pmem0"
}

In the example, namespace0.0 is converted to memory mode. With the --map=mem
argument, ndctl puts operating system data structures used for Direct Memory Access (DMA) in system
DRAM.

To perform DMA, the kernel requires a data structure for each page in the memory region. The overhead
of this data structure is 64 bytes per 4-KiB page. For small devices, the amount of overhead is small
enough to fit in DRAM with no problems. For example, the 16-GiB namespace only requires 256MiB for
page structures. Because the NVDIMM is small and expensive, storing the kernel’s page tracking data
structures in DRAM is preferable, as indicated by the --map=mem parameter.


In the future, NVDIMM devices might be terabytes in size. For such devices, the amount of memory
required to store the page tracking data structures might exceed the amount of DRAM in the system. One
TiB of persistent memory requires 16 GiB just for page structures. As a result, specifying the --map=dev
parameter to store the data structures in the persistent memory itself is preferable in such cases.

After configuring the namespace in memory mode, the namespace is ready for a file system. Starting
with Red Hat Enterprise Linux 7.3, both the ext4 and XFS file systems enable using persistent memory as
a Technology Preview. File system creation requires no special arguments. To get the DAX functionality,
mount the file system with the dax mount option. For example:

# mkfs -t xfs /dev/pmem0


# mount -o dax /dev/pmem0 /mnt/pmem/

Then, applications can use persistent memory and create files in the /mnt/pmem/ directory, open the
files, and use the mmap operation to map the files for direct access.

When creating partitions on a pmem device to be used for direct access, partitions must be aligned on
page boundaries. On the Intel 64 and AMD64 architectures, at least 4KiB alignment is required for the
start and end of the partition, but 2MiB is the preferred alignment. By default, the parted tool aligns partitions on 1MiB
boundaries. For the first partition, specify 2MiB as the start of the partition. If the size of the partition is a
multiple of 2MiB, all other partitions are also aligned.
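
For example, one way to create a single partition that starts at 2MiB and uses the rest of the device might be the following parted invocation (the device name is a placeholder; substitute your pmem block device):

# parted --script /dev/pmem0 mklabel gpt mkpart primary 2MiB 100%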

28.4. CONFIGURING PERSISTENT MEMORY FOR USE IN DEVICE DAX MODE

Device DAX provides a means for applications to directly access storage, without the involvement of a
file system. The benefit of device DAX is that it provides a guaranteed fault granularity, which can be
configured using the --align option of the ndctl utility:

# ndctl create-namespace --force --reconfig=namespace0.0 --mode=dax --align=2M

The preceding command ensures that the operating system faults in 2MiB pages at a time. For the Intel
64 and AMD64 architecture, the following fault granularities are supported:

4KiB

2MiB

1GiB

Device DAX nodes (/dev/daxN.M) support only the following system calls:

open()

close()

mmap()

fallocate()

read() and write() variants are not supported because the use case is tied to persistent memory
programming.


28.5. TROUBLESHOOTING
Some NVDIMMs support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) interfaces for
retrieving health information.

NOTE

On some systems, the acpi_ipmi driver must be loaded to retrieve health information. Load it with
the following command:

# modprobe acpi_ipmi

To access the health information, use the following command:

# ndctl list --dimms --health --dimm=nmem0


[
{
"dev":"nmem0",
"id":"802c-01-1513-b3009166",
"handle":1,
"phys_id":22,
"health":
{
"health_state":"ok",
"temperature_celsius":25.000000,
"spares_percentage":99,
"alarm_temperature":false,
"alarm_spares":false,
"temperature_threshold":50.000000,
"spares_threshold":20,
"life_used_percentage":1,
"shutdown_state":"clean"
}
}
]


PART III. DATA DEDUPLICATION AND COMPRESSION WITH VDO

This part describes how to provide deduplicated block storage capabilities to existing storage
management applications by enabling them to utilize Virtual Data Optimizer (VDO).


CHAPTER 29. VDO INTEGRATION

29.1. THEORETICAL OVERVIEW OF VDO


Virtual Data Optimizer (VDO) is a block virtualization technology that allows you to easily create
compressed and deduplicated pools of block storage.

Deduplication is a technique for reducing the consumption of storage resources by eliminating multiple copies of duplicate blocks.

Instead of writing the same data more than once, VDO detects each duplicate block and records
it as a reference to the original block. VDO maintains a mapping from logical block addresses,
which are used by the storage layer above VDO, to physical block addresses, which are used
by the storage layer under VDO.

After deduplication, multiple logical block addresses may be mapped to the same physical block
address; these are called shared blocks. Block sharing is invisible to users of the storage, who
read and write blocks as they would if VDO were not present. When a shared block is
overwritten, a new physical block is allocated for storing the new block data to ensure that other
logical block addresses that are mapped to the shared physical block are not modified.

Compression is a data-reduction technique that works well with file formats that do not
necessarily exhibit block-level redundancy, such as log files and databases. See Section 29.4.8,
“Using Compression” for more detail.

The VDO solution consists of the following components:

kvdo
A kernel module that loads into the Linux Device Mapper layer to provide a deduplicated,
compressed, and thinly provisioned block storage volume

uds
A kernel module that communicates with the Universal Deduplication Service (UDS) index on the
volume and analyzes data for duplicates.

Command line tools


For configuring and managing optimized storage.

29.1.1. The UDS Kernel Module (uds)

The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly
determines if that piece is identical to any previously stored piece of data. If the index finds a match, the
storage system can then internally reference the existing item to avoid storing the same information more
than once.

The UDS index runs inside the kernel as the uds kernel module.

29.1.2. The VDO Kernel Module (kvdo)

The kvdo Linux kernel module provides block-layer deduplication services within the Linux Device
Mapper layer. In the Linux kernel, Device Mapper serves as a generic framework for managing pools of
block storage, allowing the insertion of block-processing modules into the storage stack between the kernel's block interface and the actual storage device drivers.

The kvdo module is exposed as a block device that can be accessed directly for block storage or
presented through one of the many available Linux file systems, such as XFS or ext4. When kvdo
receives a request to read a (logical) block of data from a VDO volume, it maps the requested logical
block to the underlying physical block and then reads and returns the requested data.

When kvdo receives a request to write a block of data to a VDO volume, it first checks whether it is a
DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions holds, kvdo
updates its block map and acknowledges the request. Otherwise, a physical block is allocated for use by
the request.

Overview of VDO Write Policies

If the kvdo module is operating in synchronous mode:

1. It temporarily writes the data in the request to the allocated block and then acknowledges the
request.

2. Once the acknowledgment is complete, an attempt is made to deduplicate the block by computing a MurmurHash-3 signature of the block data, which is sent to the VDO index.

3. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated
block and does a byte-by-byte comparison of the two blocks to verify that they are identical.

4. If they are indeed identical, then kvdo updates its block map so that the logical block points to
the corresponding physical block and releases the allocated physical block.

5. If the VDO index did not contain an entry for the signature of the block being written, or the
indicated block does not actually contain the same data, kvdo updates its block map to make the
temporary physical block permanent.

If kvdo is operating in asynchronous mode:

1. Instead of writing the data, it will immediately acknowledge the request.

2. It will then attempt to deduplicate the block in same manner as described above.

3. If the block turns out to be a duplicate, kvdo will update its block map and release the allocated
block. Otherwise, it will write the data in the request to the allocated block and update the block
map to make the physical block permanent.

29.1.3. VDO Volume


VDO uses a block device as a backing store, which can include an aggregation of physical storage
consisting of one or more disks, partitions, or even flat files. When a VDO volume is created by a
storage management tool, VDO reserves space from the volume for both a UDS index and the VDO
volume, which interact together to provide deduplicated block storage to users and applications.
Figure 29.1, “VDO Disk Organization” illustrates how these pieces fit together.


Figure 29.1. VDO Disk Organization

Slabs

The physical storage of the VDO volume is divided into a number of slabs, each of which is a contiguous
region of the physical space. All of the slabs for a given volume will be of the same size, which may be
any power of 2 multiple of 128 MB up to 32 GB.

The default slab size is 2 GB in order to facilitate evaluating VDO on smaller test systems. A single VDO
volume may have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the
maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical
storage is 256 TB.

For a recommendation on what slab size to choose depending on your physical volume size, see
Table 29.1, “Recommended VDO Slab Sizes by Physical Volume Size”.

At least one entire slab will be reserved by VDO for metadata, and therefore cannot be used for storing
user data.

The size of a slab can be controlled by providing the --vdoSlabSize=megabytes option to the vdo
create command.
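
For example, using the placeholder names from this chapter, a volume backed by a large physical device might be created with 32 GB slabs as follows (a sketch only; the unit suffix is assumed to be accepted, so check vdo create --help for the exact syntax on your version):

# vdo create --name=vdo_name --device=block_device --vdoSlabSize=32G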

Table 29.1. Recommended VDO Slab Sizes by Physical Volume Size

Physical Volume Size | 10–99 GB | 100 GB – 1 TB | 2–10 TB | 11–50 TB | 51–100 TB | 101–256 TB
Slab Size            | 1 GB     | 2 GB          | 32 GB   | 32 GB    | 32 GB     | 32 GB

Physical Size and Available Physical Size

Both physical size and available physical size describe the amount of disk space on the block device
that VDO can utilize:

Physical size is the same size as the underlying block device. VDO uses this storage for:

User data, which might be deduplicated and compressed

VDO metadata, such as the UDS index

Available physical size is the portion of the physical size that VDO is able to use for user data.

It is equivalent to the physical size minus the size of the metadata, minus the remainder after
dividing the volume into slabs by the given slab size.


For examples of how much storage VDO metadata require on block devices of different sizes, see
Section 29.2.3, “Examples of VDO System Requirements by Physical Volume Size”.

Logical Size

If the --vdoLogicalSize option is not specified, the logical volume size defaults to the available
physical volume size. Note that, in Figure 29.1, “VDO Disk Organization”, the VDO deduplicated storage
target sits completely on top of the block device, meaning the physical size of the VDO volume is the
same size as the underlying block device.

VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute
maximum logical size of 4PB.

29.1.4. Command Line Tools


VDO includes the following command line tools for configuration and management:

vdo
Creates, configures, and controls VDO volumes

vdostats
Provides utilization and performance statistics

29.2. SYSTEM REQUIREMENTS


Processor Architectures
One or more processors implementing the Intel 64 instruction set are required: that is, a processor of the
AMD64 or Intel 64 architecture.

RAM
Each VDO volume has two distinct memory requirements:

The VDO module requires 370 MB plus an additional 268 MB per each 1 TB of physical storage
managed.

The Universal Deduplication Service (UDS) index requires a minimum of 250 MB of DRAM,
which is also the default amount that deduplication uses. For details on the memory usage of
UDS, see Section 29.2.1, “UDS Index Memory Requirements”.

Storage
A single VDO volume can be configured to use up to 256 TB of physical storage. See Section 29.2.2,
“VDO Storage Requirements” for the calculations to determine the usable size of a VDO-managed
volume from the physical size of the storage pool the VDO is given.

Additional System Software


VDO depends on the following software:

LVM

Python 2.7

The yum package manager will install all necessary software dependencies automatically.


Placement of VDO in the Storage Stack


As a general rule, you should place certain storage layers under VDO and others on top of VDO:

Under VDO: DM-Multipath, DM-Crypt, and software RAID (LVM or mdraid).

On top of VDO: LVM cache, LVM Logical Volumes, LVM snapshots, and LVM Thin Provisioning.

The following configurations are not supported:

VDO on top of VDO volumes: storage → VDO → LVM → VDO

VDO on top of LVM Snapshots

VDO on top of LVM Cache

VDO on top of the loopback device

VDO on top of LVM Thin Provisioning

Encrypted volumes on top of VDO: storage → VDO → DM-Crypt

Partitions on a VDO volume, created with fdisk, parted, or similar partitioning tools

RAID (LVM, MD, or any other type) on top of a VDO volume

IMPORTANT

VDO supports two write modes: sync and async. When VDO is in sync mode, writes to
the VDO device are acknowledged when the underlying storage has written the data
permanently. When VDO is in async mode, writes are acknowledged before being
written to persistent storage.

It is critical to set the VDO write policy to match the behavior of the underlying storage. By
default, VDO write policy is set to the auto option, which selects the appropriate policy
automatically.

For more information, see Section 29.4.2, “Selecting VDO Write Modes”.

29.2.1. UDS Index Memory Requirements


The UDS index consists of two parts:

A compact representation is used in memory that contains at most one entry per unique block.

An on-disk component which records the associated block names presented to the index as they
occur, in order.

UDS uses an average of 4 bytes per entry in memory (including cache).

The on-disk component maintains a bounded history of data passed to UDS. UDS provides deduplication
advice for data that falls within this deduplication window, containing the names of the most recently
seen blocks. The deduplication window allows UDS to index data as efficiently as possible while limiting
the amount of memory required to index large data repositories. Despite the bounded nature of the
deduplication window, most datasets which have high levels of deduplication also exhibit a high degree
of temporal locality — in other words, most deduplication occurs among sets of blocks that were written
at about the same time. Furthermore, in general, data being written is more likely to duplicate data that
was recently written than data that was written a long time ago. Therefore, for a given workload over a
given time interval, deduplication rates will often be the same whether UDS indexes only the most recent
data or all the data.

Because duplicate data tends to exhibit temporal locality, it is rarely necessary to index every block in
the storage system. Were this not so, the cost of index memory would outstrip the savings of reduced
storage costs from deduplication. Index size requirements are more closely related to the rate of data
ingestion. For example, consider a storage system with 100 TB of total capacity but with an ingestion
rate of 1 TB per week. With a deduplication window of 4 TB, UDS can detect most redundancy among
the data written within the last month.

UDS's Sparse Indexing feature (the recommended mode for VDO) further exploits temporal locality by
attempting to retain only the most relevant index entries in memory. UDS can maintain a deduplication
window that is ten times larger while using the same amount of memory. While the sparse index provides
the greatest coverage, the dense index provides more advice. For most workloads, given the same
amount of memory, the difference in deduplication rates between dense and sparse indexes is
negligible.

The memory required for the index is determined by the desired size of the deduplication window:

For a dense index, UDS will provide a deduplication window of 1 TB per 1 GB of RAM. A 1 GB
index is generally sufficient for storage systems of up to 4 TB.

For a sparse index, UDS will provide a deduplication window of 10 TB per 1 GB of RAM. A 1 GB
sparse index is generally sufficient for up to 40 TB of physical storage.
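
As a rough worked example based on these ratios, covering a 40 TB deduplication window would require approximately 40 GB of RAM with a dense index, but only approximately 4 GB of RAM with a sparse index.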

For concrete examples of UDS Index memory requirements, see Section 29.2.3, “Examples of VDO
System Requirements by Physical Volume Size”

29.2.2. VDO Storage Requirements


VDO requires storage both for VDO metadata and for the actual UDS deduplication index:

VDO writes two types of metadata to its underlying physical storage:

The first type scales with the physical size of the VDO volume and uses approximately 1 MB
for each 4 GB of physical storage plus an additional 1 MB per slab.

The second type scales with the logical size of the VDO volume and consumes
approximately 1.25 MB for each 1 GB of logical storage, rounded up to the nearest slab.

See Section 29.1.3, “VDO Volume” for a description of slabs.

The UDS index is stored within the VDO volume group and is managed by the associated VDO
instance. The amount of storage required depends on the type of index and the amount of RAM
allocated to the index. For each 1 GB of RAM, a dense UDS index will use 17 GB of storage,
and a sparse UDS index will use 170 GB of storage.
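
As a rough illustration based on these figures, consider a VDO volume on a 10 TB physical device with the default 2 GB slab size and a 10 TB logical size. The first type of metadata needs approximately 2.5 GB plus 5 GB (1 MB for each of the 5120 slabs), the second type needs approximately 12.5 GB, and a dense UDS index backed by 1 GB of RAM needs a further 17 GB (170 GB if the index is sparse). The exact numbers depend on the configuration.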

For concrete examples of VDO storage requirements, see Section 29.2.3, “Examples of VDO System
Requirements by Physical Volume Size”

29.2.3. Examples of VDO System Requirements by Physical Volume Size


The following tables provide approximate system requirements of VDO based on the size of the
underlying physical volume. Each table lists requirements appropriate to the intended deployment, such
as primary storage or backup storage.


The exact numbers depend on your configuration of the VDO volume.

Primary Storage Deployment

In the primary storage case, the UDS index is between 0.01% to 25% the size of the physical volume.

Table 29.2. VDO Storage and Memory Requirements for Primary Storage

Physical Volume Size | 10 GB – 1 TB | 2–10 TB          | 11–50 TB | 51–100 TB | 101–256 TB
RAM Usage            | 250 MB       | Dense: 1 GB      | 2 GB     | 3 GB      | 12 GB
                     |              | Sparse: 250 MB   |          |           |
Disk Usage           | 2.5 GB       | Dense: 10 GB     | 170 GB   | 255 GB    | 1020 GB
                     |              | Sparse: 22 GB    |          |           |
Index Type           | Dense        | Dense or Sparse  | Sparse   | Sparse    | Sparse

Backup Storage Deployment

In the backup storage case, the UDS index covers the size of the backup set but is not bigger than the
physical volume. If you expect the backup set or the physical size to grow in the future, factor this into
the index size.

Table 29.3. VDO Storage and Memory Requirements for Backup Storage

Physical Volume Size | 10 GB – 1 TB | 2–10 TB | 11–50 TB | 51–100 TB | 101–256 TB
RAM Usage            | 250 MB       | 2 GB    | 10 GB    | 20 GB     | 26 GB
Disk Usage           | 2.5 GB       | 170 GB  | 850 GB   | 1700 GB   | 3400 GB
Index Type           | Dense        | Sparse  | Sparse   | Sparse    | Sparse

29.3. GETTING STARTED WITH VDO

29.3.1. Introduction
Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication,
compression, and thin provisioning. When you set up a VDO volume, you specify a block device on
which to construct your VDO volume and the amount of logical storage you plan to present.

When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1
logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it
as 10 TB of logical storage.


For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical
to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage.

In either case, you can simply put a file system on top of the logical device presented by VDO and then
use it directly or as part of a distributed cloud storage architecture.

This chapter describes the following use cases of VDO deployment:

the direct-attached use case for virtualization servers, such as those built using Red Hat
Virtualization, and

the cloud storage use case for object-based distributed storage clusters, such as those built
using Ceph Storage.

NOTE

VDO deployment with Ceph is currently not supported.

This chapter provides examples for configuring VDO for use with a standard Linux file system that can
be easily deployed for either use case; see the diagrams in Section 29.3.5, “Deployment Examples”.

29.3.2. Installing VDO


VDO is deployed using the following RPM packages:

vdo

kmod-kvdo

To install VDO, use the yum package manager to install the RPM packages:

# yum install vdo kmod-kvdo

29.3.3. Creating a VDO Volume


Create a VDO volume for your block device. Note that multiple VDO volumes can be created for
separate devices on the same machine. If you choose this approach, you must supply a different name
and device for each instance of VDO on the system.

In all the following steps, replace vdo_name with the identifier you want to use for your VDO volume; for
example, vdo1.

NOTE

Before creating a volume, VDO uses LVM utilities such as pvcreate --test to validate the block device.

1. Create the VDO volume using the VDO Manager:

# vdo create \
--name=vdo_name \
--device=block_device \
--vdoLogicalSize=logical_size


Replace block_device with the persistent name of the block device where you want to create the VDO volume. For example, /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f.

IMPORTANT

Use a persistent device name. If you use a non-persistent device name, then
VDO might fail to start properly in the future if the device name changes.

For more information on persistent names, see Section 25.7, “Persistent Naming”.

Replace logical_size with the amount of logical storage that the VDO volume should present:

For active VMs or container storage, use logical size that is ten times the physical size
of your block device. For example, if your block device is 1 TB in size, use 10T here.

For object storage, use logical size that is three times the physical size of your block
device. For example, if your block device is 1 TB in size, use 3T here.

Example 29.1. Creating VDO for Container Storage

For example, to create a VDO volume for container storage on a 1 TB block device, you
might use:

# vdo create \
  --name=vdo1 \
  --device=/dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f \
  --vdoLogicalSize=10T

When a VDO volume is created, VDO adds an entry to the /etc/vdoconf.yml configuration
file. The vdo.service systemd unit then uses the entry to start the volume by default.

IMPORTANT

If a failure occurs when creating the VDO volume, remove the volume to clean up.
See Section 29.4.3.1, “Removing an Unsuccessfully Created Volume” for details.

2. Create a file system:

For the XFS file system:

# mkfs.xfs -K /dev/mapper/vdo_name

For the ext4 file system:

# mkfs.ext4 -E nodiscard /dev/mapper/vdo_name

3. Mount the file system:


# mkdir -m 1777 /mnt/vdo_name


# mount /dev/mapper/vdo_name /mnt/vdo_name

4. To configure the file system to mount automatically, use either the /etc/fstab file or a
systemd mount unit:

If you decide to use the /etc/fstab configuration file, add one of the following lines to the
file:

For the XFS file system:

/dev/mapper/vdo_name /mnt/vdo_name xfs defaults,x-systemd.requires=vdo.service 0 0

For the ext4 file system:

/dev/mapper/vdo_name /mnt/vdo_name ext4 defaults,x-systemd.requires=vdo.service 0 0

Alternatively, if you decide to use a systemd unit, create a systemd mount unit file with the
appropriate filename. For the mount point of your VDO volume, create the
/etc/systemd/system/mnt-vdo_name.mount file with the following content:

[Unit]
Description = VDO unit file to mount file system
name = vdo_name.mount
Requires = vdo.service
After = multi-user.target
Conflicts = umount.target

[Mount]
What = /dev/mapper/vdo_name
Where = /mnt/vdo_name
Type = xfs

[Install]
WantedBy = multi-user.target

An example systemd unit file is also installed at /usr/share/doc/vdo/examples/systemd/VDO.mount.example.

5. Enable the discard feature for the file system on your VDO device. Both batch and online
operations work with VDO.

For information on how to set up the discard feature, see Section 2.4, “Discard Unused
Blocks”.

29.3.4. Monitoring VDO


Because VDO is thin provisioned, the file system and applications will only see the logical space in use
and will not be aware of the actual physical space available. Scripting should be used to monitor the
actual available space and generate an alert if use exceeds a threshold: for example, when the file
system is 80% full.


VDO space usage and efficiency can be monitored using the vdostats utility:

# vdostats --human-readable

Device                  1K-blocks  Used   Available  Use%  Space saving%
/dev/mapper/node1osd1   926.5G     21.0G  905.5G     2%    73%
/dev/mapper/node1osd2   926.5G     28.2G  898.3G     3%    64%
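
Building on that output, the following is a minimal monitoring sketch. It assumes the default vdostats column layout (with Use% in the fifth column) and an arbitrary 80% threshold, so adapt it before relying on it for alerting:

#!/bin/bash
# Warn when any VDO volume's physical space usage exceeds THRESHOLD percent.
THRESHOLD=80
vdostats | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5; sub(/%/, "", use)
    if (use + 0 > t) printf "WARNING: %s is %s%% full\n", $1, use
}'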

29.3.5. Deployment Examples


The following examples illustrate how VDO might be used in KVM and other deployments.

VDO Deployment with KVM

To see how VDO can be deployed successfully on a KVM server configured with Direct Attached
Storage, see Figure 29.2, “VDO Deployment with KVM”.

Figure 29.2. VDO Deployment with KVM

More Deployment Scenarios

For more information on VDO deployment, see Section 29.5, “Deployment Scenarios”.

29.4. ADMINISTERING VDO

29.4.1. Starting or Stopping VDO


To start a given VDO volume, or all VDO volumes, and the associated UDS index(es), storage
management utilities should invoke one of these commands:

# vdo start --name=my_vdo


# vdo start --all

The VDO systemd unit is installed and enabled by default when the vdo package is installed. This unit
automatically runs the vdo start --all command at system startup to bring up all activated VDO
volumes. See Section 29.4.6, “Automatically Starting VDO Volumes at System Boot” for more information.

To stop a given VDO volume, or all VDO volumes, and the associated UDS index(es), use one of these
commands:

# vdo stop --name=my_vdo


# vdo stop --all

If restarted after an unclean shutdown, VDO will perform a rebuild to verify the consistency of its
metadata and will repair it if necessary. Rebuilds are automatic and do not require user intervention. See
Section 29.4.5, “Recovering a VDO Volume After an Unclean Shutdown” for more information on the
rebuild process.

VDO might rebuild different writes depending on the write mode:

In synchronous mode, all writes that were acknowledged by VDO prior to the shutdown will be
rebuilt.

In asynchronous mode, all writes that were acknowledged prior to the last acknowledged flush
request will be rebuilt.

In either mode, some writes that were either unacknowledged or not followed by a flush may also be
rebuilt.

For details on VDO write modes, see Section 29.4.2, “Selecting VDO Write Modes”.

29.4.2. Selecting VDO Write Modes


VDO supports three write modes, sync, async, and auto:

When VDO is in sync mode, the layers above it assume that a write command writes data to
persistent storage. As a result, it is not necessary for the file system or application, for example,
to issue FLUSH or Force Unit Access (FUA) requests to cause the data to become persistent at
critical points.

VDO must be set to sync mode only when the underlying storage guarantees that data is written
to persistent storage when the write command completes. That is, the storage must either have
no volatile write cache, or have a write through cache.

When VDO is in async mode, the data is not guaranteed to be written to persistent storage
when a write command is acknowledged. The file system or application must issue FLUSH or
FUA requests to ensure data persistence at critical points in each transaction.

VDO must be set to async mode if the underlying storage does not guarantee that data is
written to persistent storage when the write command completes; that is, when the storage has a
volatile write back cache.

For information on how to find out if a device uses volatile cache or not, see the section called
“Checking for a Volatile Cache”.

The auto mode automatically selects sync or async based on the characteristics of each
device. This is the default option.

For a more detailed theoretical overview of how write policies operate, see the section called “Overview
of VDO Write Policies”.


To set a write policy, use the --writePolicy option. This can be specified either when creating a VDO
volume as in Section 29.3.3, “Creating a VDO Volume” or when modifying an existing VDO volume with
the changeWritePolicy subcommand:

# vdo changeWritePolicy --writePolicy=sync|async|auto --name=vdo_name

IMPORTANT

Using the incorrect write policy might result in data loss on power failure.

Checking for a Volatile Cache

To see whether a device has a writeback cache, read the /sys/block/block_device/device/scsi_disk/identifier/cache_type sysfs file. For example:

Device sda indicates that it has a writeback cache:

$ cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type'

write back

Device sdb indicates that it does not have a writeback cache:

$ cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type'

None

Additionally, you can check the kernel boot log to find whether the above-mentioned devices have a
write cache or not:

sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA

See the Viewing and Managing Log Files chapter in the System Administrator's Guide for more
information on reading the system log.

In these examples, use the following write policies for VDO:

async mode for the sda device

sync mode for the sdb device

NOTE

You should configure VDO to use the sync write policy if the cache_type value is none
or write through.

29.4.3. Removing VDO Volumes


A VDO volume can be removed from the system by running:

# vdo remove --name=my_vdo

Prior to removing a VDO volume, unmount file systems and stop applications that are using the storage.
The vdo remove command removes the VDO volume and its associated UDS index, as well as logical
volumes where they reside.

29.4.3.1. Removing an Unsuccessfully Created Volume

If a failure occurs when the vdo utility is creating a VDO volume, the volume is left in an intermediate
state. This might happen when, for example, the system crashes, power fails, or the administrator
interrupts a running vdo create command.

To clean up from this situation, remove the unsuccessfully created volume with the --force option:

# vdo remove --force --name=my_vdo

The --force option is required because the administrator might have caused a conflict by changing the
system configuration since the volume was unsuccessfully created. Without the --force option, the
vdo remove command fails with the following message:

[...]
A previous operation failed.
Recovery from the failure either failed or was interrupted.
Add '--force' to 'remove' to perform the following cleanup.
Steps to clean up VDO my_vdo:
umount -f /dev/mapper/my_vdo
udevadm settle
dmsetup remove my_vdo
vdo: ERROR - VDO volume my_vdo previous operation (create) is incomplete

29.4.4. Configuring the UDS Index


VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they
are being stored. The deduplication window is the number of previously written blocks which the index
remembers. The size of the deduplication window is configurable. For a given window size, the index
requires a specific amount of RAM and a specific amount of disk space. The size of the window is
usually determined by specifying the size of the index memory using the --indexMem=size option. The
amount of disk space to use will then be determined automatically.

In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an
extremely efficient indexing data structure, requiring approximately one-tenth of a byte of DRAM per
block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block.
The minimum configuration of this index uses 256 MB of DRAM and approximately 25 GB of space on
disk. To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to
the vdo create command. This configuration results in a deduplication window of 2.5 TB (meaning it
will remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate
for deduplicating storage pools that are up to 10 TB in size.
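
For example, a hedged sketch of creating such a volume with the minimum sparse index configuration
described above (the volume name my_vdo and backing device /dev/sdb are placeholders):

# vdo create --name=my_vdo --device=/dev/sdb --sparseIndex=enabled --indexMem=0.25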

The default configuration of the index, however, is to use a dense index. This index is considerably less
efficient (by a factor of 10) in DRAM, but it has much lower (also by a factor of 10) minimum required
disk space, making it more convenient for evaluation in constrained environments.


In general, a deduplication window which is one quarter of the physical size of a VDO volume is a
recommended configuration. However, this is not an actual requirement. Even small deduplication
windows (compared to the amount of physical storage) can find significant amounts of duplicate data in
many use cases. Larger windows may also be used, but in most cases there will be little additional
benefit to doing so.

Speak with your Red Hat Technical Account Manager representative for additional guidelines on tuning
this important system parameter.

29.4.5. Recovering a VDO Volume After an Unclean Shutdown


If a volume is restarted without having been shut down cleanly, VDO will need to rebuild a portion of its
metadata to continue operating, which occurs automatically when the volume is started. (Also see
Section 29.4.5.2, “Forcing a Rebuild” to invoke this process on a volume that was cleanly shut down.)

Data recovery depends on the write policy of the device:

If VDO was running on synchronous storage and write policy was set to sync, then all data
written to the volume will be fully recovered.

If the write policy was async, then some writes may not be recovered if they were not made
durable by sending VDO a FLUSH command, or a write I/O tagged with the FUA flag (force unit
access). This is accomplished from user mode by invoking a data integrity operation like fsync,
fdatasync, sync, or umount.

29.4.5.1. Online Recovery

In the majority of cases, most of the work of rebuilding an unclean VDO volume can be done after the
VDO volume has come back online and while it is servicing read and write requests. Initially, the amount
of space available for write requests may be limited. As more of the volume's metadata is recovered,
more free space may become available. Furthermore, data written while the VDO is recovering may fail
to deduplicate against data written before the crash if that data is in a portion of the volume which has not
yet been recovered. Data may be compressed while the volume is being recovered. Previously
compressed blocks may still be read or overwritten.

During an online recovery, a number of statistics will be unavailable: for example, blocks in use and
blocks free. These statistics will become available once the rebuild is complete.

29.4.5.2. Forcing a Rebuild

VDO can recover from most hardware and software errors. If a VDO volume cannot be recovered
successfully, it is placed in a read-only mode that persists across volume restarts. Once a volume is in
read-only mode, there is no guarantee that data has not been lost or corrupted. In such cases, Red Hat
recommends copying the data out of the read-only volume and possibly restoring the volume from
backup. (The operating mode attribute of vdostats indicates whether a VDO volume is in read-only
mode.)

If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume
metadata so the volume can be brought back online and made available. Again, the integrity of the rebuilt
data cannot be guaranteed.

To force a rebuild of a read-only VDO volume, first stop the volume if it is running:

# vdo stop --name=my_vdo


Then restart the volume using the --forceRebuild option:

# vdo start --name=my_vdo --forceRebuild

29.4.6. Automatically Starting VDO Volumes at System Boot


During system boot, the vdo systemd unit automatically starts all VDO devices that are configured as
activated.

To prevent certain existing volumes from being started automatically, deactivate those volumes by
running either of these commands:

To deactivate a specific volume:

# vdo deactivate --name=my_vdo

To deactivate all volumes:

# vdo deactivate --all

Conversely, to activate volumes, use one of these commands:

To activate a specific volume:

# vdo activate --name=my_vdo

To activate all volumes:

# vdo activate --all

You can also create a VDO volume that does not start automatically by adding the
--activate=disabled option to the vdo create command.

For systems that place LVM volumes on top of VDO volumes as well as beneath them (for example,
Figure 29.5, “Deduplicated Unified Storage”), it is vital to start services in the right order:

1. The lower layer of LVM must be started first (in most systems, starting this layer is configured
automatically when the LVM2 package is installed).

2. The vdo systemd unit must then be started.

3. Finally, additional scripts must be run in order to start LVM volumes or other services on top of
the now running VDO volumes.

29.4.7. Disabling and Re-enabling Deduplication


In some instances, it may be desirable to temporarily disable deduplication of data being written to a
VDO volume while still retaining the ability to read from and write to the volume. While disabling
deduplication will prevent subsequent writes from being deduplicated, data which was already
deduplicated will remain so.

To stop deduplication on a VDO volume, use the following command:


# vdo disableDeduplication --name=my_vdo

This stops the associated UDS index and informs the VDO volume that deduplication is no
longer active.

To restart deduplication on a VDO volume, use the following command:

# vdo enableDeduplication --name=my_vdo

This restarts the associated UDS index and informs the VDO volume that deduplication is active
again.

You can also disable deduplication when creating a new VDO volume by adding the
--deduplication=disabled option to the vdo create command.

29.4.8. Using Compression

29.4.8.1. Introduction

In addition to block-level deduplication, VDO also provides inline block-level compression using the
HIOPS Compression™ technology. While deduplication is the optimal solution for virtual machine
environments and backup applications, compression works very well with structured and unstructured file
formats that do not typically exhibit block-level redundancy, such as log files and databases.

Compression operates on blocks that have not been identified as duplicates. When unique data is seen
for the first time, it is compressed. Subsequent copies of data that have already been stored are
deduplicated without requiring an additional compression step. The compression feature is based on a
parallelized packaging algorithm that enables it to handle many compression operations at once. After
first storing the block and responding to the requestor, a best-fit packing algorithm finds multiple blocks
that, when compressed, can fit into a single physical block. After it is determined that a particular
physical block is unlikely to hold additional compressed blocks, it is written to storage and the
uncompressed blocks are freed and reused. By performing the compression and packaging operations
after having already responded to the requestor, using compression imposes a minimal latency penalty.

29.4.8.2. Enabling and Disabling Compression

VDO volume compression is on by default.

When creating a volume, you can disable compression by adding the --compression=disabled
option to the vdo create command.

Compression can be stopped on an existing VDO volume if necessary to maximize performance or to
speed processing of data that is unlikely to compress.

To stop compression on a VDO volume, use the following command:

# vdo disableCompression --name=my_vdo

To start it again, use the following command:

# vdo enableCompression --name=my_vdo


29.4.9. Managing Free Space


Because VDO is a thinly provisioned block storage target, the amount of physical space VDO uses may
differ from the size of the volume presented to users of the storage. Integrators and systems
administrators can exploit this disparity to save on storage costs but must take care to avoid
unexpectedly running out of storage space if the data written does not achieve the expected rate of
deduplication.

Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual
storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that
reason, storage systems using VDO must provide storage administrators with a way of monitoring the
size of the VDO's free pool. The size of this free pool may be determined by using the vdostats utility;
see Section 29.7.2, “vdostats” for details. The default output of this utility lists information for all running
VDO volumes in a format similar to the Linux df utility. For example:

Device 1K-blocks Used Available Use%


/dev/mapper/my_vdo 211812352 105906176 105906176 50%

If the size of VDO's free pool drops below a certain level, the storage administrator can take action by
deleting data (which will reclaim space whenever the deleted data is not duplicated), adding physical
storage, or even deleting LUNs.

Reclaiming Space on File Systems

VDO cannot reclaim space unless file systems communicate that blocks are free using DISCARD, TRIM,
or UNMAP commands. For file systems that do not use DISCARD, TRIM, or UNMAP, free space may be
manually reclaimed by storing a file consisting of binary zeros and then deleting that file.
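
As a hedged illustration (the mount point /mnt/vdo is a placeholder), the manual method looks like this;
dd is expected to stop with a "No space left on device" error once the file system is full:

# dd if=/dev/zero of=/mnt/vdo/zero.fill bs=1M
# sync
# rm /mnt/vdo/zero.fill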

File systems may generally be configured to issue DISCARD requests in one of two ways:

Realtime discard (also online discard or inline discard)


When realtime discard is enabled, file systems send REQ_DISCARD requests to the block layer
whenever a user deletes a file and frees space. VDO receives these requests and returns space to its
free pool, assuming the block was not shared.

For file systems that support online discard, you can enable it by setting the discard option at mount
time, as shown in the example after this list.

Batch discard
Batch discard is a user-initiated operation that causes the file system to notify the block layer (VDO)
of any unused blocks. This is accomplished by sending the file system an ioctl request called
FITRIM.

You can use the fstrim utility (for example from cron) to send this ioctl to the file system.
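
As a hedged illustration of both approaches (the device, mount point, and options are placeholders), a
file system on a VDO volume can be mounted with online discard enabled, or trimmed on demand:

# mount -o discard /dev/mapper/my_vdo /mnt/vdo
# fstrim -v /mnt/vdo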

For more information on the discard feature, see Section 2.4, “Discard Unused Blocks”.

Reclaiming Space Without a File System

It is also possible to manage free space when the storage is being used as a block storage target without
a file system. For example, a single VDO volume can be carved up into multiple subvolumes by
installing the Logical Volume Manager (LVM) on top of it. Before deprovisioning a volume, the
blkdiscard command can be used in order to free the space previously used by that logical volume.
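
For example, a hedged sketch with placeholder LVM names (volume group vdo_vg, logical volume lv1),
run after unmounting the logical volume and before removing it:

# blkdiscard /dev/vdo_vg/lv1
# lvremove /dev/vdo_vg/lv1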


LVM supports the REQ_DISCARD command and will forward the requests to VDO at the appropriate
logical block addresses in order to free the space. If other volume managers are being used, they would
also need to support REQ_DISCARD, or equivalently, UNMAP for SCSI devices or TRIM for ATA devices.

Reclaiming Space on Fibre Channel or Ethernet Network

VDO volumes (or portions of volumes) can also be provisioned to hosts on a Fibre Channel storage
fabric or an Ethernet network using SCSI target frameworks such as LIO or SCST. SCSI initiators can
use the UNMAP command to free space on thinly provisioned storage targets, but the SCSI target
framework will need to be configured to advertise support for this command. This is typically done by
enabling thin provisioning on these volumes. Support for UNMAP can be verified on Linux-based SCSI
initiators by running the following command:

# sg_vpd --page=0xb0 /dev/device

In the output, verify that the "Maximum unmap LBA count" value is greater than zero.

29.4.10. Increasing Logical Volume Size


Management applications can increase the logical size of a VDO volume using the vdo growLogical
subcommand. Once the volume has been grown, the management application should inform any devices or file
systems on top of the VDO volume of its new size. The volume may be grown as follows:

# vdo growLogical --name=my_vdo --vdoLogicalSize=new_logical_size

The use of this command allows storage administrators to initially create VDO volumes which have a
logical size small enough to be safe from running out of space. After some period of time, the actual rate
of data reduction can be evaluated, and if sufficient, the logical size of the VDO volume can be grown to
take advantage of the space savings.

29.4.11. Increasing Physical Volume Size


To increase the amount of physical storage available to a VDO volume:

1. Increase the size of the underlying device.

The exact procedure depends on the type of the device. For example, to resize an MBR
partition, use the fdisk utility as described in Section 13.5, “Resizing a Partition with fdisk”.

2. Use the growPhysical option to add the new physical storage space to the VDO volume:

# vdo growPhysical --name=my_vdo

It is not possible to shrink a VDO volume with this command.

29.4.12. Automating VDO with Ansible


You can use the Ansible tool to automate VDO deployment and administration. For details, see:

Ansible documentation: https://docs.ansible.com/

VDO Ansible module documentation:


https://docs.ansible.com/ansible/latest/modules/vdo_module.html


29.5. DEPLOYMENT SCENARIOS


VDO can be deployed in a variety of ways to provide deduplicated storage for both block and file access
and for both local and remote storage. Because VDO exposes its deduplicated storage as a standard
Linux block device, it can be used with standard file systems, iSCSI and FC target drivers, or as unified
storage.

29.5.1. iSCSI Target


As a simple example, the entirety of the VDO storage target can be exported as an iSCSI Target to
remote iSCSI initiators.

Figure 29.3. Deduplicated Block Storage Target

See http://linux-iscsi.org/ for more information on iSCSI Target.

29.5.2. File Systems


If file access is desired instead, file systems can be created on top of VDO and exposed to NFS or CIFS
users via either the Linux NFS server or Samba.

Figure 29.4. Deduplicated NAS

29.5.3. LVM
More feature-rich systems may make further use of LVM to provide multiple LUNs that are all backed by
the same deduplicated storage pool. In Figure 29.5, “Deduplicated Unified Storage”, the VDO target is
registered as a physical volume so that it can be managed by LVM. Multiple logical volumes (LV1 to
LV4) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified
block/file access to the underlying deduplicated storage pool.


Figure 29.5. Deduplicated Unified Storage

Deduplicated unified storage design allows for multiple file systems to collectively use the same
deduplication domain through the LVM tools. Also, file systems can take advantage of LVM snapshot,
copy-on-write, and shrink or grow features, all on top of VDO.

29.5.4. Encryption
Data security is critical today. More and more companies have internal policies regarding data
encryption. Linux Device Mapper mechanisms such as DM-Crypt are compatible with VDO. Encrypting
VDO volumes will help ensure data security, and any file systems above VDO still gain the deduplication
feature for disk optimization. Note that applying encryption above VDO results in little if any data
deduplication; encryption renders duplicate blocks different before VDO can deduplicate them.

Figure 29.6. Using VDO with Encryption

29.6. TUNING VDO

29.6.1. Introduction to VDO Tuning


As with tuning databases or other complex software, tuning VDO involves making trade-offs between
numerous system constraints, and some experimentation is required. The primary controls available for
tuning VDO are the number of threads assigned to different types of work, the CPU affinity settings for
those threads, and cache settings.

29.6.2. Background on VDO Architecture


The VDO kernel driver is multi-threaded to improve performance by amortizing processing costs across
multiple concurrent I/O requests. Rather than have one thread process an I/O request from start to finish,
it delegates different stages of work to one or more threads or groups of threads, with messages passed
between them as the I/O request makes its way through the pipeline. This way, one thread can serialize
all access to a global data structure without having to lock and unlock it each time an I/O operation is
processed. If the VDO driver is well-tuned, each time a thread completes a requested processing stage
there will usually be another request queued up for that same processing. Keeping these threads busy
reduces the overhead of context switching and scheduling, improving performance. Separate threads
are also used for parts of the operating system that can block, such as enqueueing I/O operations to the
underlying storage system or messages to UDS.

The various worker thread types used by VDO are:

Logical zone threads


The logical threads, with process names including the string kvdo:logQ, maintain the mapping
between the logical block numbers (LBNs) presented to the user of the VDO device and the physical
block numbers (PBNs) in the underlying storage system. They also implement locking such that two
I/O operations attempting to write to the same block will not be processed concurrently. Logical zone
threads are active during both read and write operations.

LBNs are divided into chunks (a block map page contains a bit over 3 MB of LBNs) and these chunks
are grouped into zones that are divided up among the threads.

Processing should be distributed fairly evenly across the threads, though some unlucky access
patterns may occasionally concentrate work in one thread or another. For example, frequent access
to LBNs within a given block map page will cause one of the logical threads to process all of those
operations.

The number of logical zone threads can be controlled using the --vdoLogicalThreads=thread
count option of the vdo command.

Physical zone threads


Physical, or kvdo:physQ, threads manage data block allocation and maintain reference counts.
They are active during write operations.

Like LBNs, PBNs are divided into chunks called slabs, which are further divided into zones and
assigned to worker threads that distribute the processing load.

The number of physical zone threads can be controlled using the
--vdoPhysicalThreads=thread count option of the vdo command.

I/O submission threads


kvdo:bioQ threads submit block I/O (bio) operations from VDO to the storage system. They take I/O
requests enqueued by other VDO threads and pass them to the underlying device driver. These
threads may communicate with and update data structures associated with the device, or set up
requests for the device driver's kernel threads to process. Submitting I/O requests can block if the
underlying device's request queue is full, so this work is done by dedicated threads to avoid
processing delays.


If these threads are frequently shown in D state by ps or top utilities, then VDO is frequently keeping
the storage system busy with I/O requests. This is generally good if the storage system can service
multiple requests in parallel, as some SSDs can, or if the request processing is pipelined. If thread
CPU utilization is very low during these periods, it may be possible to reduce the number of I/O
submission threads.

CPU usage and memory contention are dependent on the device driver(s) beneath VDO. If CPU
utilization per I/O request increases as more threads are added then check for CPU, memory, or lock
contention in those device drivers.

The number of I/O submission threads can be controlled using the --vdoBioThreads=thread
count option of the vdo command.

CPU-processing threads
kvdo:cpuQ threads exist to perform any CPU-intensive work such as computing hash values or
compressing data blocks that do not block or require exclusive access to data structures associated
with other thread types.

The number of CPU-processing threads can be controlled using the
--vdoCpuThreads=thread count option of the vdo command.

I/O acknowledgement threads


The kvdo:ackQ threads issue the callbacks to whatever sits atop VDO (for example, the kernel page
cache, or application program threads doing direct I/O) to report completion of an I/O request. CPU
time requirements and memory contention will be dependent on this other kernel-level code.

The number of acknowledgement threads can be controlled using the
--vdoAckThreads=thread count option of the vdo command.

Non-scalable VDO kernel threads:

Deduplication thread
The kvdo:dedupeQ thread takes queued I/O requests and contacts UDS. Since the socket buffer
can fill up if the server cannot process requests quickly enough or if kernel memory is constrained by
other system activity, this work is done by a separate thread so if a thread should block, other VDO
processing can continue. There is also a timeout mechanism in place to skip an I/O request after a
long delay (several seconds).

Journal thread
The kvdo:journalQ thread updates the recovery journal and schedules journal blocks for writing. A
VDO device uses only one journal, so this work cannot be split across threads.

Packer thread
The kvdo:packerQ thread, active in the write path when compression is enabled, collects data
blocks compressed by the kvdo:cpuQ threads to minimize wasted space. There is one packer data
structure, and thus one packer thread, per VDO device.

29.6.3. Values to tune

29.6.3.1. CPU/memory


29.6.3.1.1. Logical, physical, cpu, ack thread counts

The logical, physical, cpu, and I/O acknowledgement work can be spread across multiple threads, the
number of which can be specified during initial configuration or later if the VDO device is restarted.
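
As a hedged example (the thread counts shown are illustrative, not recommendations), the counts can be
changed with the modify subcommand and take effect when the volume is next restarted:

# vdo modify --name=my_vdo --vdoCpuThreads=4 --vdoLogicalThreads=2 --vdoPhysicalThreads=2 --vdoAckThreads=2
# vdo stop --name=my_vdo
# vdo start --name=my_vdo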

One core, or one thread, can do a finite amount of work during a given time. Having one thread compute
all data-block hash values, for example, would impose a hard limit on the number of data blocks that
could be processed per second. Dividing the work across multiple threads (and cores) relieves that
bottleneck.

As a thread or core approaches 100% usage, more work items will tend to queue up for processing.
While this may result in CPU having fewer idle cycles, queueing delays and latency for individual I/O
requests will typically increase. According to some queueing theory models, utilization levels above 70%
or 80% can lead to excessive delays that can be several times longer than the normal processing time.
Thus it may be helpful to distribute work further for a thread or core with 50% or higher utilization, even if
those threads or cores are not always busy.

In the opposite case, where a thread or CPU is very lightly loaded (and thus very often asleep), supplying
work for it to do is more likely to incur some additional cost. (A thread attempting to wake another thread
must acquire a global lock on the scheduler's data structures, and may potentially send an inter-
processor interrupt to transfer work to another core). As more cores are configured to run VDO threads, it
becomes less likely that a given piece of data will be cached as work is moved between threads or as
threads are moved between cores — so too much work distribution can also degrade performance.

The work performed by the logical, physical, and CPU threads per I/O request will vary based on the
type of workload, so systems should be tested with the different types of workloads they are expected to
service.

Write operations in sync mode involving successful deduplication will entail extra I/O operations (reading
the previously stored data block), some CPU cycles (comparing the new data block to confirm that they
match), and journal updates (remapping the LBN to the previously-stored data block's PBN) compared to
writes of new data. When duplication is detected in async mode, data write operations are avoided at the
cost of the read and compare operations described above; only one journal update can happen per write,
whether or not duplication is detected.

If compression is enabled, reads and writes of compressible data will require more processing by the
CPU threads.

Blocks containing all zero bytes (a zero block) are treated specially, as they commonly occur. A special
entry is used to represent such data in the block map, and the zero block is not written to or read from the
storage device. Thus, tests that write or read all-zero blocks may produce misleading results. The same
is true, to a lesser degree, of tests that write over zero blocks or uninitialized blocks (those that were
never written since the VDO device was created) because reference count updates done by the physical
threads are not required for zero or uninitialized blocks.

Acknowledging I/O operations is the only task that is not significantly affected by the type of work being
done or the data being operated upon, as one callback is issued per I/O operation.

29.6.3.1.2. CPU Affinity and NUMA

Accessing memory across NUMA node boundaries takes longer than accessing memory on the local
node. With Intel processors sharing the last-level cache between cores on a node, cache contention
between nodes is a much greater problem than cache contention within a node.

Tools such as top cannot distinguish between CPU cycles that do work and cycles that are stalled.
These tools interpret cache contention and slow memory accesses as actual work. As a result, moving a
thread between nodes may appear to reduce the thread's apparent CPU utilization while increasing the
number of operations it performs per second.

While many of VDO's kernel threads maintain data structures that are accessed by only one thread, they
do frequently exchange messages about the I/O requests themselves. Contention may be high if VDO
threads are run on multiple nodes, or if threads are reassigned from one node to another by the
scheduler. If it is possible to run other VDO-related work (such as I/O submissions to VDO, or interrupt
processing for the storage device) on the same node as the VDO threads, contention may be further
reduced. If one node does not have sufficient cycles to run all VDO-related work, memory contention
should be considered when selecting threads to move onto other nodes.

If practical, collect VDO threads on one node using the taskset utility. If one node lacks the CPU power
to keep up with processing demands, memory contention should guide the choice of threads to move onto
other nodes. For example, if a storage device's driver has a significant
number of data structures to maintain, it may help to move both the device's interrupt handling and
VDO's I/O submissions (the bio threads that call the device's driver code) to another node. Keeping I/O
acknowledgment (ack threads) and higher-level I/O submission threads (user-mode threads doing direct
I/O, or the kernel's page cache flush thread) paired is also good practice.
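
A hedged sketch of collecting the kvdo kernel threads onto CPUs 0-7 (one NUMA node on a hypothetical
system; the CPU list and the thread-name match are assumptions to adjust locally). pgrep finds the
kernel threads whose names contain kvdo, and taskset -p -c sets the affinity of each one:

# for pid in $(pgrep kvdo); do taskset -p -c 0-7 "$pid"; done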

29.6.3.1.3. Frequency throttling

If power consumption is not an issue, writing the string performance to the
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor files if they exist might produce
better results. If these sysfs nodes do not exist, Linux or the system's BIOS may provide other options
for configuring CPU frequency management.
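
A minimal sketch of applying this setting to every CPU (assuming the cpufreq sysfs interface is
present; the change does not persist across reboots):

# for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > "$g"; done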

Performance measurements are further complicated by CPUs that dynamically vary their frequencies
based on workload, because the time needed to accomplish a specific piece of work may vary due to
other work the CPU has been doing, even without task switching or cache contention.

29.6.3.2. Caching

29.6.3.2.1. Block Map Cache

VDO caches a number of block map pages for efficiency. The cache size defaults to 128 MB, but it can
be increased with the --blockMapCacheSize=megabytes option of the vdo command. Using a
larger cache may produce significant benefits for random-access workloads.
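
For example, a hedged sketch that raises the cache to 512M on an existing volume (the size is
illustrative); like other modify options, it takes effect when the volume is restarted:

# vdo modify --name=my_vdo --blockMapCacheSize=512M
# vdo stop --name=my_vdo
# vdo start --name=my_vdo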

29.6.3.2.2. Read Cache

A second cache may be used for caching data blocks read from the storage system to verify VDO's
deduplication advice. If similar data blocks are seen within a short time span, the number of I/O
operations needed may be reduced.

The read cache also holds storage blocks containing compressed user data. If multiple compressible
blocks were written within a short period of time, their compressed versions may be located within the
same storage system block. Likewise, if they are read within a short time, caching may avoid the need
for additional reads from the storage system.

The vdo command's --readCache={enabled | disabled} option controls whether a read cache is
used. If enabled, the cache has a minimum size of 8 MB, but it can be increased with the
--readCacheSize=megabytes option. Managing the read cache incurs a slight overhead, so it may not
increase performance if the storage system is fast enough. The read cache is disabled by default.
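
For example, a hedged sketch enabling a 64M read cache on an existing volume (the size is
illustrative) and restarting it so the change takes effect:

# vdo modify --name=my_vdo --readCache=enabled --readCacheSize=64M
# vdo stop --name=my_vdo
# vdo start --name=my_vdo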


29.6.3.3. Storage System I/O

29.6.3.3.1. Bio Threads

For generic hard drives in a RAID configuration, one or two bio threads may be sufficient for submitting
I/O operations. If the storage device driver requires its I/O submission threads to do significantly more
work (updating driver data structures or communicating with the device) such that one or two threads are
very busy and storage devices are often idle, the bio thread count can be increased to compensate.
However, depending on the driver implementation, raising the thread count too high may lead to cache or
spin lock contention. If device access timing is not uniform across all NUMA nodes, it may be helpful to
run bio threads on the node "closest" to the storage device controllers.

29.6.3.3.2. IRQ Handling

If a device driver does significant work in its interrupt handler and does not use a threaded IRQ handler,
it may prevent the scheduler from providing the best performance. CPU time spent servicing hardware
interrupts may look like normal VDO (or other) kernel thread execution in some ways. For example, if
hardware IRQ handling required 30% of a core's cycles, a busy kernel thread on the same core could
only use the remaining 70%. However, if the work queued up for that thread demanded 80% of the core's
cycles, the thread would never catch up, and the scheduler might simply leave that thread to run
impeded on that core instead of switching that thread to a less busy core.

Using such a device driver under a heavy VDO workload may require a large number of cycles to service
hardware interrupts (the %hi indicator in the header of the top display). In that case it may help to
assign IRQ handling to certain cores and adjust the CPU affinity of VDO kernel threads not to run on
those cores.

29.6.3.4. Maximum Discard Sectors

The maximum allowed size of DISCARD (TRIM) operations to a VDO device can be tuned via
/sys/kvdo/max_discard_sectors, based on system usage. The default is 8 sectors (that is, one 4
KB block). Larger sizes may be specified, though VDO will still process them in a loop, one block at a
time, ensuring that metadata updates for one discarded block are written to the journal and flushed to
disk before starting on the next block.
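
For example, a hedged sketch of raising the limit to 2048 sectors (1 MB; the value is purely
illustrative, see the guidance below, and the setting does not persist across reboots):

# echo 2048 > /sys/kvdo/max_discard_sectors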

When using a VDO volume as a local file system, Red Hat testing found that a small discard size works
best, as the generic block-device code in the Linux kernel will break large discard requests into multiple
smaller ones and submit them in parallel. If there is low I/O activity on the device, VDO can process
many smaller requests concurrently and much more quickly than one large request.

If the VDO device is to be used as a SCSI target, the initiator and target software introduce additional
factors to consider. If the target SCSI software is SCST, it reads the maximum discard size and relays it
to the initiator. (Red Hat has not attempted to tune VDO configurations in conjunction with LIO SCSI
target code.)

Because the Linux SCSI initiator code allows only one discard operation at a time, discard requests that
exceed the maximum size would be broken into multiple smaller discards and sent, one at a time, to the
target system (and to VDO). So, in addition to VDO processing a number of small discard operations in
serial, the round-trip communication time between the two systems adds additional latency.

Setting a larger maximum discard size can reduce this communication overhead, though that larger
request is passed in its entirety to VDO and processed one 4 KB block at a time. While there is no per-
block communication delay, additional processing time for the larger block may cause the SCSI initiator
software to time out.

For SCSI target usage, Red Hat recommends configuring the maximum discard size to be moderately
large while still keeping the typical discard time well within the initiator's timeout setting. An extra round-
trip cost every few seconds, for example, should not significantly affect performance and SCSI initiators
with timeouts of 30 or 60 seconds should not time out.

29.6.4. Identifying Bottlenecks


There are several key factors that affect VDO performance, and many tools available to identify those
having the most impact.

Thread or CPU utilization above 70%, as seen in utilities such as top or ps, generally implies that too
much work is being concentrated in one thread or on one CPU. However, in some cases it could mean
that a VDO thread was scheduled to run on the CPU but no work actually happened; this scenario could
occur with excessive hardware interrupt handler processing, memory contention between cores or
NUMA nodes, or contention for a spin lock.

When using the top utility to examine system performance, Red Hat suggests running top -H to show
all process threads separately and then entering the 1 f j keys, followed by the Enter/Return key; the
top command then displays the load on individual CPU cores and identifies the CPU on which each
process or thread last ran. This information can provide the following insights:

If a core has low %id (idle) and %wa (waiting-for-I/O) values, it is being kept busy with work of
some kind.

If the %hi value for a core is very low, that core is doing normal processing work, which is being
load-balanced by the kernel scheduler. Adding more cores to that set may reduce the load as
long as it does not introduce NUMA contention.

If the %hi for a core is more than a few percent and only one thread is assigned to that core, and
%id and %wa are zero, the core is over-committed and the scheduler is not addressing the
situation. In this case the kernel thread or the device interrupt handling should be reassigned to
keep them on separate cores.

The perf utility can examine the performance counters of many CPUs. Red Hat suggests using the
perf top subcommand as a starting point to examine the work a thread or processor is doing. If, for
example, the bioQ threads are spending many cycles trying to acquire spin locks, there may be too
much contention in the device driver below VDO, and reducing the number of bioQ threads might
alleviate the situation. High CPU use (in acquiring spin locks or elsewhere) could also indicate contention
between NUMA nodes if, for example, the bioQ threads and the device interrupt handler are running on
different nodes. If the processor supports them, counters such as stalled-cycles-backend,
cache-misses, and node-load-misses may be of interest.

The sar utility can provide periodic reports on multiple system statistics. The sar -d 1 command
reports block device utilization levels (percentage of the time they have at least one I/O operation in
progress) and queue lengths (number of I/O requests waiting) once per second. However, not all block
device drivers can report such information, so the usefulness of sar may depend on the device drivers
in use.

29.7. VDO COMMANDS


This section describes the following VDO utilities:

vdo
The vdo utility manages both the kvdo and UDS components of VDO. It is also used to enable or
disable compression.

vdostats
The vdostats utility displays statistics for each configured (or specified) device in a format similar to
the Linux df utility.

29.7.1. vdo
The vdo utility manages both the kvdo and UDS components of VDO.

Synopsis

vdo { activate | changeWritePolicy | create | deactivate |
  disableCompression | disableDeduplication | enableCompression |
  enableDeduplication | growLogical | growPhysical | list | modify |
  printConfigFile | remove | start | status | stop }
  [ options... ]

Sub-Commands

Table 29.4. VDO Sub-Commands

Sub-Command Description


create Creates a VDO volume and its associated index and makes it available. If
--activate=disabled is specified the VDO volume is created but
not made available. Will not overwrite an existing file system or
formatted VDO volume unless --force is given. This command must
be run with root privileges. Applicable options include:

--name=volume (required)

--device=device (required)

--activate={enabled | disabled}

--indexMem=gigabytes

--blockMapCacheSize=megabytes

--blockMapPeriod=period

--compression={enabled | disabled}

--confFile=file

--deduplication={enabled | disabled}

--emulate512={enabled | disabled}

--sparseIndex={enabled | disabled}

--vdoAckThreads=thread count

--vdoBioRotationInterval=I/O count

--vdoBioThreads=thread count

--vdoCpuThreads=thread count

--vdoHashZoneThreads=thread count

--vdoLogicalThreads=thread count

--vdoLogLevel=level

--vdoLogicalSize=megabytes

--vdoPhysicalThreads=thread count

--readCache={enabled | disabled}

--readCacheSize=megabytes

--vdoSlabSize=megabytes

--verbose

--writePolicy={ auto | sync | async }

--logfile=pathname


remove Removes one or more stopped VDO volumes and associated indexes.
This command must be run with root privileges. Applicable options
include:

{ --name=volume | --all } (required)

--confFile=file

--force

--verbose

--logfile=pathname

start Starts one or more stopped, activated VDO volumes and associated
services. This command must be run with root privileges. Applicable
options include:

{ --name=volume | --all } (required)

--confFile=file

--forceRebuild

--verbose

--logfile=pathname

stop Stops one or more running VDO volumes and associated services. This
command must be run with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--force

--verbose

--logfile=pathname

activate Activates one or more VDO volumes. Activated volumes can be started
using the start command. This command must be run with root privileges.
Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--logfile=pathname

--verbose


deactivate Deactivates one or more VDO volumes. Deactivated volumes cannot be
started by the start command. Deactivating a currently running volume
does not stop it. Once stopped a deactivated VDO volume must be
activated before it can be started again. This command must be run with
root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

status Reports VDO system and volume status in YAML format. This command
does not require root privileges, though information will be incomplete if
run without them. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

See Table 29.6, “VDO Status Output” for the output provided.

list Displays a list of started VDO volumes. If --all is specified it displays
both started and non-started volumes. This command must be run with
root privileges. Applicable options include:

--all

--confFile=file

--logfile=pathname

--verbose


modify Modifies configuration parameters of one or all VDO volumes. Changes
take effect the next time the VDO device is started; already-running
devices are not affected. Applicable options include:

{ --name=volume | --all } (required)

--blockMapCacheSize=megabytes

--blockMapPeriod=period

--confFile=file

--vdoAckThreads=thread count

--vdoBioThreads=thread count

--vdoCpuThreads=thread count

--vdoHashZoneThreads=thread count

--vdoLogicalThreads=thread count

--vdoPhysicalThreads=thread count

--readCache={enabled | disabled}

--readCacheSize=megabytes

--verbose

--logfile=pathname

changeWritePolicy Modifies the write policy of one or all running VDO volumes. This
command must be run with root privileges.

{ --name=volume | --all } (required)

--writePolicy={ auto | sync | async } (required)

--confFile=file

--logfile=pathname

--verbose


enableDeduplication Enables deduplication on one or more VDO volumes. This command
must be run with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

disableDeduplication Disables deduplication on one or more VDO volumes. This command
must be run with root privileges. Applicable options include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

enableCompression Enables compression on one or more VDO volumes. If the VDO volume
is running, this takes effect immediately. If the VDO volume is not running,
compression will be enabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname

disableCompression Disables compression on one or more VDO volumes. If the VDO volume
is running, this takes effect immediately. If the VDO volume is not running,
compression will be disabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:

{ --name=volume | --all } (required)

--confFile=file

--verbose

--logfile=pathname


growLogical Adds logical space to a VDO volume. The volume must exist and must be
running. This command must be run with root privileges. Applicable
options include:

--name=volume (required)

--vdoLogicalSize=megabytes (required)

--confFile=file

--verbose

--logfile=pathname

growPhysical Adds physical space to a VDO volume. The volume must exist and must
be running. This command must be run with root privileges. Applicable
options include:

--name=volume (required)

--confFile=file

--verbose

--logfile=pathname

printConfigFile Prints the configuration file to stdout. This command requires root
privileges. Applicable options include:

--confFile=file

--logfile=pathname

--verbose

Options

Table 29.5. VDO Options

Option Description

--indexMem=gigabytes Specifies the amount of UDS server memory in gigabytes; the default
size is 1 GB. The special decimal values 0.25, 0.5, 0.75 can be used, as
can any positive integer.

--sparseIndex={enabled | disabled} Enables or disables sparse indexing. The default is
disabled.

--all Indicates that the command should be applied to all configured VDO
volumes. May not be used with --name .


--blockMapCacheSize=megabytes Specifies the amount of memory allocated for caching block map
pages; the value must be a multiple of 4096. Using a value with a B(ytes),
K(ilobytes), M(egabytes), G(igabytes), T(erabytes), P(etabytes) or
E(xabytes) suffix is optional. If no suffix is supplied, the value will be
interpreted as megabytes. The default is 128M; the value must be at
least 128M and less than 16T. Note that there is a memory overhead of
15%.

--blockMapPeriod=period A value between 1 and 16380 which determines the number of block
map updates which may accumulate before cached pages are flushed to
disk. Higher values decrease recovery time after a crash at the expense
of decreased performance during normal operation. The default value is
16380. Speak with your Red Hat representative before tuning this
parameter.

--compression={enabled | disabled} Enables or disables compression within the VDO device. The
default is enabled. Compression may be disabled if necessary to maximize
performance or to speed processing of data that is unlikely to compress.

--confFile=file Specifies an alternate configuration file. The default is
/etc/vdoconf.yml.

--deduplication={enabled | disabled} Enables or disables deduplication within the VDO device. The
default is enabled. Deduplication may be disabled in instances where data is not
expected to have good deduplication rates but compression is still
desired.

--emulate512={enabled | disabled} Enables 512-byte block device emulation mode. The default is
disabled.

--force Unmounts mounted file systems before stopping a VDO volume.

--forceRebuild Forces an offline rebuild before starting a read-only VDO volume so that
it may be brought back online and made available. This option may
result in data loss or corruption.

--help Displays documentation for the vdo utility.

--logfile=pathname Specifies the file to which this script's log messages are directed. Warning
and error messages are always logged to syslog as well.

--name=volume Operates on the specified VDO volume. May not be used with --all.

--device=device Specifies the absolute path of the device to use for VDO storage.

--activate={enabled | disabled} The argument disabled indicates that the VDO volume should only
be created. The volume will not be started or enabled. The default is
enabled.
