Linux Notes
Kernel
o The core of the UNIX system. Loaded at system start up (boot). Memory-resident control
program.
o Manages the entire resources of the system, presenting them to you and every other user
as a coherent system. Provides service to user applications such as device management,
process scheduling, etc.
o Example functions performed by the kernel are:
Managing the machine's memory and allocating it to each process.
Scheduling the work done by the CPU so that the work of each user is carried out
as efficiently as is possible.
Accomplishing the transfer of data from one part of the machine to another.
Interpreting and executing instructions from the shell.
Enforcing file access permissions.
o You do not need to know anything about the kernel in order to use a UNIX system. These
details are provided for your information only.
Shell
o Whenever you login to a Unix system you are placed in a shell program. The shell's
prompt is usually visible at the cursor's position on your screen. To get your work done,
you enter commands at this prompt.
o The shell is a command interpreter; it takes each command and passes it to the operating
system kernel to be acted upon. It then displays the results of this operation on your
screen.
o Several shells are usually available on any UNIX system, each with its own strengths and
weaknesses.
o Different users may use different shells. Initially, your system administrator will supply a
default shell, which can be overridden or changed. The most commonly available shells
are:
Bourne shell (sh)
C shell (csh)
Korn shell (ksh)
TC Shell (tcsh)
Bourne Again Shell (bash)
o Each shell also includes its own programming language. Command files, called "shell scripts," are used to accomplish a series of tasks; a minimal example is shown below.
Utilities
o UNIX provides several hundred utility programs, often referred to as commands. They perform functions such as file maintenance, editing, printing, sorting and online help, and can be combined to accomplish more complex tasks.
An operating system or OS is a software program that enables the computer hardware to communicate
and operate with the computer software. Without a computer operating system, a computer and software
programs would be useless.
An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into
the computer by a boot program, manages all the other programs in a computer. The other programs are
called applications or application programs. The application programs make use of the operating system
by making requests for services through a defined application program interface (API). In addition, users
can interact directly with the operating system through a user interface such as a command language or a
graphical user interface (GUI).
UNIX and 'UNIX-like' operating systems (such as Linux) consist of a kernel and some system programs.
There are also some application programs for doing work. The kernel is the heart of the operating system.
In fact, it is often mistakenly considered to be the operating system itself, but it is not. An operating
system provides many more services than a plain kernel.
It keeps track of files on the disk, starts programs and runs them concurrently, assigns memory and other
resources to various processes, receives packets from and sends packets to the network, and so on. The
kernel does very little by itself, but it provides tools with which all services can be built. It also prevents
anyone from accessing the hardware directly, forcing everyone to use the tools it provides. This way the
kernel provides some protection for users from each other. The tools provided by the kernel are used via
system calls.
The system programs use the tools provided by the kernel to implement the various services required
from an operating system. System programs, and all other programs, run `on top of the kernel', in what is
called the user mode. The difference between system and application programs is one of intent:
applications are intended for getting useful things done (or for playing, if it happens to be a game),
whereas system programs are needed to get the system working. A word processor is an application;
mount is a system program. The difference is often somewhat blurry, however, and is important only to
compulsive categorizers.
An operating system can also contain compilers and their corresponding libraries (GCC and the C library
in particular under Linux), although not all programming languages need be part of the operating
system. Documentation, and sometimes even games, can also be part of it.
Some of the more important parts of the Linux kernel are:
Process management
Memory management
Hardware device drivers
Filesystem drivers
Network management
Various other bits and pieces
Probably the most important parts of the kernel (nothing else works without them) are memory
management and process management. Memory management takes care of assigning memory areas and
swap space areas to processes, to parts of the kernel, and to the buffer cache. Process management creates
processes, and implements multitasking by switching the active process on the processor.
At the lowest level, the kernel contains a hardware device driver for each kind of hardware it supports.
Since the world is full of different kinds of hardware, the number of hardware device drivers is large.
There are often many otherwise similar pieces of hardware that differ in how they are controlled by
software. The similarities make it possible to have general classes of drivers that support similar
operations; each member of the class has the same interface to the rest of the kernel but differs in what it
needs to do to implement them. For example, all disk drivers look alike to the rest of the kernel, i.e., they
all have operations like `initialize the drive', `read sector N', and `write sector N'.
What is virtual memory?
Linux supports virtual memory, that is, using a disk as an extension of RAM so that the effective size of
usable memory grows correspondingly. The kernel will write the contents of a currently unused block of
memory to the hard disk so that the memory can be used for another purpose. When the original contents
are needed again, they are read back into memory. This is all made completely transparent to the user;
programs running under Linux only see the larger amount of memory available and don't notice that
parts of them reside on the disk from time to time. Of course, reading and writing the hard disk is slower
(on the order of a thousand times slower) than using real memory, so the programs don't run as fast. The
part of the hard disk that is used as virtual memory is called the swap space.
Linux can use either a normal file in the filesystem or a separate partition for swap space. A swap
partition is faster, but it is easier to change the size of a swap file (there's no need to repartition the whole
hard disk, and possibly install everything from scratch). When you know how much swap space you
need, you should go for a swap partition, but if you are uncertain, you can use a swap file first, use the
system for a while so that you can get a feel for how much swap you need, and then make a swap
partition when you're confident about its size.
You should also know that Linux allows one to use several swap partitions and/or swap files at the same
time. This means that if you only occasionally need an unusual amount of swap space, you can set up an
extra swap file at such times, instead of keeping the whole amount allocated all the time.
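As a sketch of how an extra swap file can be set up when needed (the path /extraswap and the 512 MB size are only examples; these commands must be run as root):

dd if=/dev/zero of=/extraswap bs=1M count=512   # create a 512 MB file of zeros
chmod 600 /extraswap                            # restrict access to root
mkswap /extraswap                               # write swap bookkeeping data to the file
swapon /extraswap                               # start using it as additional swap space
swapoff /extraswap                              # stop using it when no longer needed

swapon --show (in recent util-linux versions) or free -h can be used to check how much swap is currently in use.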
A note on operating system terminology: computer science usually distinguishes between swapping
(writing the whole process out to swap space) and paging (writing only fixed size parts, usually a few
kilobytes, at a time). Paging is usually more efficient, and that's what Linux does, but traditional Linux
terminology talks about swapping anyway.
Linux Structure
Linux is a layered operating system. The innermost layer is the hardware that provides the services for
the OS. The operating system, referred to in Linux as the kernel, interacts directly with the hardware and
provides the services to the user programs. These user programs don’t need to know anything about the
hardware. They just need to know how to interact with the kernel and it’s up to the kernel to provide the
desired service. One of the big appeals of Linux to programmers has been that most well written user
programs are independent of the underlying hardware, making them readily portable to new systems.
User programs interact with the kernel through a set of standard system calls. These system calls request
services to be provided by the kernel. Such services would include accessing a file: open, close, read,
write, link, or execute a file; starting or updating accounting records; changing ownership of a file or
directory; changing to a new directory; creating, suspending, or killing a process; enabling access to
hardware devices; and setting limits on system resources.
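One way to watch these system calls being made (assuming the strace utility is installed, which is not the case on every system) is to trace a simple command:

strace -c ls                                          # run ls and print a summary count of the system calls it made
strace -e trace=openat,read,write cat /etc/hostname   # show only selected calls for a single command

The output lists calls such as openat, read, write and close, which correspond to the kernel services described above.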
Linux is a multi-user, multi-tasking operating system. You can have many users logged into a system
simultaneously, each running many programs. It’s the kernel’s job to keep each process and user separate
and to regulate access to system hardware, including CPU, memory, disk and other I/O devices.
Linux vs. Windows
Linux and Windows each have their own set of unique features, advantages and disadvantages. While it is difficult to say which one is the better choice in general, it is not as difficult to answer which is the better choice given your needs.
Note: The operating system that you use on your desktop computer (the vast majority of people use some flavor of
Windows) has absolutely nothing to do with the one that your host needs to serve your web site. Most personal sites
are created with MS FrontPage and even though that is a Microsoft product, it can be hosted perfectly on a
LINUX web server with FrontPage Extensions installed.
Stability:
LINUX systems are hands-down the winner in this category. There are many factors here, but to name just a couple of big ones: in our experience LINUX handles high server loads better than Windows, and LINUX machines seldom require reboots while Windows constantly needs them. Servers running on LINUX enjoy extremely high up-time and high availability/reliability.
Performance:
While there is some debate about which operating system performs better, in our experience both perform comparably under low-stress conditions; however, LINUX servers under high load (which is what is important) are superior to Windows.
Scalability:
Web sites usually change over time. They start off small and grow as the needs of the person or
organization running them grow. While both platforms can often adapt to your growing needs, Windows
hosting is more easily made compatible with LINUX-based programming features like PHP and MySQL.
LINUX-based web software is not always 100% compatible with Microsoft technologies like .NET and VB
development. Therefore if you wish to use these, you should choose Windows web hosting.
Compatibility:
Web sites designed and programmed to be served under a LINUX-based web server can easily be hosted
on a Windows server, whereas the reverse is not always true. This makes programming for LINUX the
better choice.
Price:
Servers hosting your web site require operating systems and licenses just like everyone else. Windows Server 2003 and related applications like SQL Server each cost a significant amount of money; on the other hand, Linux is a free operating system to download, install and operate. Windows hosting is therefore the more expensive platform.
Conclusion:
To sum it up, LINUX-based hosting is more stable, performs faster, and is more compatible than Windows-based hosting. You only need Windows hosting if you are going to develop in .NET or Visual Basic, or use some other application that limits your choices.
Logging On To System
Before you can begin to use the system you will need to have a valid username and a password.
Assignment of usernames and initial passwords is typically handled by the System
Administrator
Your username, also called a userid, should be unique and should not change. Initial passwords
can be anything and should be changed after your first login.
Type your username at the login prompt; it is typically the initial of your first name followed by your last name (e.g., iafzal). LINUX is case sensitive - if your username is kellyk, do not type KellyK. Press the RETURN or ENTER key after typing your username.
When the password prompt appears, type in your password. Your password is never displayed
on the screen as a security measure. It also is case sensitive. Press the RETURN or ENTER key
after entering your password.
What happens after you successfully log in depends upon your system; many LINUX systems will display a login banner or "message of the day". Make a habit of reading this, since it may contain important information about the system.
Other LINUX systems will automatically configure your environment and open one or more
windows for you to do work in.
You should see a prompt - usually a percent sign (%) or dollar sign ($). This is called the "shell
prompt" (the shell is discussed in detail later). It indicates that the system is ready to accept
commands from you.
If your login attempt was unsuccessful, the most common causes are a mistyped username or password; remember that both are case sensitive. A sample login and message-of-the-day banner is shown below:
login: kellyk
kellyk's Password:
************************************************************
* Welcome to the Linux Systems Training Class
************************************************************
*
* Hello! (Greetings)
*
* System maintenance is scheduled today from 2:00
* until 4:00 pm EST
*
* (Thank you very much)
*
************************************************************
A file system is a logical collection of files on a partition or disk. A partition is a container for information
and can span an entire hard drive if desired.
Your hard drive can have various partitions, each of which usually contains only one file system, such as one file system housing the / file system or another containing the /home file system.
One file system per partition allows for the logical maintenance and management of differing file
systems.
Everything in Linux is considered to be a file, including physical devices such as DVD-ROMs, USB
devices, floppy drives, and so forth.
Directory Structure:
Linux uses a hierarchical file system structure, much like an upside-down tree, with root (/) at the base of
the file system and all other directories spreading from there.
A LINUX filesystem is a collection of files and directories that has the following properties:
It has a root directory (/) that contains other files and directories.
Each file or directory is uniquely identified by its name, the directory in which it resides, and a unique
identifier, typically called an inode.
By convention, the root directory has an inode number of 2 and the lost+found directory has an inode number of 3. Inode numbers 0 and 1 are not used. File inode numbers can be seen by specifying the -i option to the ls command (see the example after this list).
It is self contained. There are no dependencies between one filesystem and any other.
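As mentioned above, inode numbers can be displayed with the -i option of ls (the directory and file names below are only examples; the numbers shown will differ from system to system):

ls -id /          # -d lists the directory itself rather than its contents
ls -i /home
ls -i somefile.txt

The first column of the output is the inode number of each entry.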
File System:
The difference between a disk or partition and the filesystem it contains is important. A few programs
(including, reasonably enough, programs that create filesystems) operate directly on the raw sectors of a
disk or partition; if there is an existing file system there it will be destroyed or seriously corrupted. Most
programs operate on a filesystem, and therefore won't work on a partition that doesn't contain one (or
that contains one of the wrong types).
Before a partition or disk can be used as a filesystem, it needs to be initialized, and the bookkeeping data
structures need to be written to the disk. This process is called making a filesystem.
Most LINUX filesystem types have a similar general structure, although the exact details vary quite a bit.
The central concepts are superblock, inode, data block, directory block, and indirection block. The
superblock contains information about the filesystem as a whole, such as its size (the exact information
here depends on the filesystem). An inode contains all information about a file, except its name. The
name is stored in the directory, together with the number of the inode. A directory entry consists of a
filename and the number of the inode which represents the file. The inode contains the numbers of
several data blocks, which are used to store the data in the file. There is space only for a few data block
numbers in the inode, however, and if more are needed, more space for pointers to the data blocks is
allocated dynamically. These dynamically allocated blocks are indirect blocks; the name indicates that in
order to find the data block, one has to find its number in the indirect block first.
LINUX filesystems usually allow one to create a hole in a file (this is done with the lseek() system call;
check the manual page), which means that the filesystem just pretends that at a particular place in the file
there are just zero bytes, but no actual disk sectors are reserved for that place in the file (this means that the
file will use a bit less disk space). This happens especially often for small binaries, Linux shared libraries,
some databases, and a few other special cases. (Holes are implemented by storing a special value as the
address of the data block in the indirect block or inode. This special address means that no data block is
allocated for that part of the file, ergo, there is a hole in the file.)
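A quick way to see a hole in action (the filename and size are arbitrary; truncate is part of GNU coreutils):

truncate -s 100M sparsefile    # create a 100 MB file without writing any data blocks
ls -lh sparsefile              # reports the logical size: 100M
du -h sparsefile               # reports the disk space actually allocated: close to 0

The difference between the two sizes is the hole: the file appears to be 100 MB long, but almost no disk sectors are reserved for it.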
This topic is loosely based on the Filesystems Hierarchy Standard (FHS), which attempts to set a standard
for how the directory tree in a Linux system should be organized. Such a standard has the advantage
that it will be easier to write or port software for Linux, and to administer Linux machines, since
everything should be in standardized places. There is no authority behind the standard that forces
anyone to comply with it, but it has gained the support of many Linux distributions. It is not a good idea
to break with the FHS without very compelling reasons. The FHS attempts to follow Linux tradition and
current trends, making Linux systems familiar to those with experience with other Linux systems, and
vice versa.
The full directory tree is intended to be breakable into smaller parts, each capable of being on its own disk
or partition, to accommodate disk size limits and to ease backup and other system administration
tasks. The major parts are the root (/ ), /usr , /var , and /home filesystems (see the following figure). Each
part has a different purpose. The directory tree has been designed so that it works well in a network of
Linux machines which may share some parts of the filesystems over a read-only device (e.g., a CD-ROM),
or over the network with NFS.
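On a running system, the df command shows which parts of the directory tree are actually separate filesystems and which disk or network devices they live on (the layout varies from machine to machine):

df -h                    # list mounted filesystems with human-readable sizes
df -h /usr /var /home    # show which filesystem each of these directories belongs to

If several of these directories appear on the same device, they share a single filesystem; if they appear on different devices, they have been split out as described above.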
The roles of the different parts of the directory tree are described below
The root filesystem is specific for each machine (it is generally stored on a local disk, although it could be
a ramdisk or network drive as well) and contains the files that are necessary for booting the system up,
and to bring it up to such a state that the other filesystems may be mounted. The contents of the root
filesystem will therefore be sufficient for the single user state. It will also contain tools for fixing a broken
system, and for recovering lost files from backups.
The /usr filesystem contains all commands, libraries, manual pages, and other unchanging files needed
during normal operation. No files in /usr should be specific for any given machine, nor should they be
modified during normal use. This allows the files to be shared over the network, which can be cost-
effective since it saves disk space (there can easily be hundreds of megabytes, increasingly multiple
gigabytes in /usr). It can make administration easier (only the master /usr needs to be changed when
updating an application, not each machine separately) to have /usr network mounted. Even if the
filesystem is on a local disk, it could be mounted read-only, to lessen the chance of filesystem corruption
during a crash.
The /var filesystem contains files that change, such as spool directories (for mail, news, printers, etc), log
files, formatted manual pages, and temporary files. Traditionally everything in /var has been somewhere
below /usr , but that made it impossible to mount /usr read-only.
The /home filesystem contains the users' home directories, i.e., all the real data on the system. Separating
home directories to their own directory tree or filesystem makes backups easier; the other parts often do
not have to be backed up, or at least not as often, since they seldom change. A big /home might have to be
broken across several filesystems, which requires adding an extra naming level below /home, for example
/home/students and /home/staff.
Although the different parts have been called filesystems above, there is no requirement that they
actually be on separate filesystems. They could easily be kept in a single one if the system is a small
single-user system and the user wants to keep things simple. The directory tree might also be divided
into filesystems differently, depending on how large the disks are, and how space is allocated for various
purposes. The important part, though, is that all the standard names work; even if, say, /var and /usr are
actually on the same partition, the names /usr/lib/libc.a and /var/log/messages must work, for example by
moving files below /var into /usr/var, and making /var a symlink to /usr/var.
The Linux filesystem structure groups files according to purpose, i.e., all commands are in one place, all
data files in another, documentation in a third, and so on. An alternative would be to group files
according to the program they belong to, i.e., all Emacs files would be in one directory, all TeX in another,
and so on. The problem with the latter approach is that it makes it difficult to share files (the program
directory often contains both static and sharable files, and changing and non-sharable files), and sometimes to
even find the files (e.g., manual pages in a huge number of places, and making the manual page
programs find all of them is a maintenance nightmare).
The root filesystem should generally be small, since it contains very critical files and a small, infrequently
modified filesystem has a better chance of not getting corrupted. A corrupted root filesystem will
generally mean that the system becomes unbootable except with special measures (e.g., from a floppy), so
you don't want to risk it.
The root directory generally doesn't contain any files, except perhaps on older systems where the standard boot image for the system, usually called /vmlinuz, was kept there. (Most distributions have moved those files to the /boot directory.)
1. / – Root
Every single file and directory starts from the root directory.
Only root user has write privilege under this directory.
Please note that /root is the root user's home directory, which is not the same as /.
/usr – User Programs
Contains binaries, libraries, documentation, and source code for second-level programs.
/usr/bin contains binary files for user programs. If you can't find a user binary under /bin, look under /usr/bin. For example: at, awk, cc, less, scp
/usr/sbin contains binary files for system administrators. If you can't find a system binary under /sbin, look under /usr/sbin. For example: atd, cron, sshd, useradd, userdel
/usr/lib contains libraries for /usr/bin and /usr/sbin
/usr/local contains user programs that you install from source. For example, when you install apache from source, it goes under /usr/local/apache2
/lib – System Libraries
Contains library files that support the binaries located under /bin and /sbin
Library filenames are either ld* or lib*.so.*
For example: ld-2.11.1.so, libncurses.so.5.7
LINUX permits file names to use most characters, but avoid spaces, tabs and characters that have
a special meaning to the shell, such as:
& ; ( ) | ? \ ' " ` [ ] { } < > $ - ! *
Case Sensitivity: uppercase and lowercase are not the same! For example, these are three different files:
NOVEMBER November november
Hidden Files: have names that begin with a dot (.) For example:
.cshrc .login .mailrc .mwmrc
Uniqueness: as children in a family, no two files with the same parent directory can have the
same name. Files located in separate directories can have identical names.
Reserved Filenames:
/ - the root directory (slash)
. - current directory (period)
.. - parent directory (double period)
~ - your home directory (tilde)
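A few quick examples of how these reserved names are used in practice:

ls .      # list the contents of the current directory
cd ..     # move up to the parent directory
cd ~      # return to your home directory from anywhere
ls ~/     # list the contents of your home directory
cd /      # go to the root directory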
Password Standards
When your account is issued, you will be given an initial password. It is important for system and
personal security that the password for your account be changed to something of your choosing. The
command for changing a password is "passwd". You will be asked both for your old password and to
type your new selected password twice. If you mistype your old password or do not type your new
password the same way twice, the system will indicate that the password has not been changed.
Some system administrators have installed programs that check for appropriateness of password (is it
cryptic enough for reasonable system security). A password change may be rejected by this program.
When choosing a password, it is important that it be something that could not be guessed -- either by
somebody unknown to you trying to break in, or by an acquaintance who knows you. Suggestions for
choosing and using a password follow:
Don't
use a word (or words) in any language
use a proper name
use information that can be found in your wallet
use information commonly known about you (car license, pet name, etc)
use control characters. Some systems can't handle them
write your password anywhere
ever give your password to *anybody*
Do
use a mixture of character types (alphabetic, numeric, special)
use a mixture of upper case and lower case
use at least 6 characters
choose a password you can remember
change your password often
make sure nobody is looking over your shoulder when you are entering your password
Change Password in LINUX
To modify a user's password or your own password in LINUX, use the passwd command. Open the terminal and type the passwd command, then enter the new password when prompted. The characters entered are not displayed on the screen, to avoid the password being seen by a passer-by. The passwd command prompts for the new password twice in order to detect any typing errors. The encrypted password is stored in the /etc/shadow file.
When run by root (for example, passwd username to reset another user's password), no current password is requested. Sample outputs:
Enter new LINUX password:
Retype new LINUX password:
passwd: password updated successfully
When changing your own password, you are asked for your current password first. Sample outputs:
(current) LINUX password:
Enter new LINUX password:
Retype new LINUX password:
passwd: password updated successfully
Difference between locate and find command in Linux
Two popular commands for locating files on Linux are find and locate. Depending on the size of your file system and the depth of your search, the find command can sometimes take a long time to scan all of the data. For example, to search your entire filesystem for files named data.txt:
# find / -name data.txt
More likely than not, this will take on the order of minutes, if not longer, to return. A quicker method is to use the locate command:
# locate data.txt
However, this efficiency comes at a cost: the data reported in the output of locate isn't as fresh as the data reported by the find command. By default, the system runs updatedb, which takes a snapshot of the system files, once a day; locate uses this snapshot to quickly report what files are where. However, recent file additions or removals (within the last 24 hours) are not recorded in the snapshot and are unknown to locate.
The find command has a number of options and is very configurable. There are many ways to reduce
the depth and breadth of your search and make it more efficient.
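For example, a search can be narrowed by starting point, depth and modification time (the path and filenames are placeholders):

find /home -maxdepth 2 -name "data.txt"   # look only under /home, at most two directory levels deep
find /home -name "*.txt" -mtime -1        # .txt files modified within the last 24 hours
find . -type d -name "project*"           # only directories under the current directory

All of the options used here (-maxdepth, -name, -mtime, -type) are standard GNU find options.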
locate uses a previously built database; if the database is not up to date, locate will not show current output. To sync the database, you must execute the updatedb command:
# updatedb
How to Use Wildcards
A wildcard is a character that can be used as a substitute for any of a class of characters in a search,
thereby greatly increasing the flexibility and efficiency of searches.
Wildcards are commonly used in shell commands in Linux and other Unix-like operating systems. A
shell is a program that provides a text-only user interface and whose main function is to execute
commands typed in by users and display their results.
Wildcards are also used in regular expressions and programming languages. Regular expressions are a
pattern matching system that uses strings (i.e., sequences of characters) constructed according to pre-
defined syntax rules to find desired strings in text.
The term wildcard or wild card was originally used in card games to describe a card that can be assigned
any value that its holder desires. However, its usage has spread so that it is now used to describe an
unknown or unpredictable factor in a variety of fields.
Star Wildcard
Three types of wildcards are used with Linux commands. The most frequently employed and usually the
most useful is the star wildcard, which is the same as an asterisk (*). The star wildcard has the broadest
meaning of any of the wildcards, as it can represent zero characters, all single characters or any string.
As an example, the file command provides information about any filesystem object (i.e., file, directory or
link) that is provided to it as an argument (i.e., input). Because the star wildcard represents every string,
it can be used as the argument for file to return information about every object in the specified directory.
Thus, the following would display information about every object in the current directory (i.e., the
directory in which the user is currently working):
file *
If there are no matches, an error message is returned, such as *: can't stat `*' (No such file or directory). In
the case of this example, the only way that there would be no matches is if the directory were empty.
Wildcards can be combined with other characters to represent parts of strings. For example, to represent
any filesystem object that has a .jpg filename extension, *.jpg would be used. Likewise, a* would
represent all objects that begin with a lower case (i.e., small) letter a.
As another example, the following would tell the ls command (which is used to list files) to provide the
names of all files in the current directory that have an .html or a .txt extension:
ls *.html *.txt
Likewise, the following would tell the rm command (which is used to remove files and directories) to
delete all files in the current directory that have the string xxx in their name:
rm *xxx*
Question Mark Wildcard
The question mark (?) is used as a wildcard character in shell commands to represent exactly one
character, which can be any single character. Thus, two question marks in succession would represent
any two characters in succession, and three question marks in succession would represent any string
consisting of three characters.
Thus, for example, the following would return data on all objects in the current directory whose names,
inclusive of any extensions, are exactly three characters in length:
file ???
And the following would provide data on all objects whose names are one, two or three characters in
length:
file ? ?? ???
As is the case with the star wildcard, the question mark wildcard can be used in combination with other
characters. For example, the following would provide information about all objects in the current
directory that begin with the letter a and are five characters in length:
file a????
The question mark wildcard can also be used in combination with other wildcards when separated by
some other character. For example, the following would return a list of all files in the current directory
that have a three-character filename extension:
ls *.???
Square Brackets Wildcard
The third type of wildcard in shell commands is a pair of square brackets, which can represent any of the
characters enclosed in the brackets. Thus, for example, the following would provide information about all
objects in the current directory that have an x, y and/or z in them:
file *[xyz]*
And the following would list all files that had an extension that begins with x, y or z:
ls *.[xyz]*
The same results can be achieved by merely using the star and question mark wildcards. However, it is
clearly more efficient to use the bracket wildcard.
When a hyphen is used between two characters in the square brackets wildcard, it indicates a range
inclusive of those two characters. For example, the following would provide information about all of the
objects in the current directory that begin with any letter from a through f:
file [a-f]*
And the following would provide information about every object in the current directory whose name
includes at least one numeral:
file *[0-9]*
The use of the square brackets to indicate a range can be combined with its use to indicate a list. Thus, for
example, the following would provide information about all filesystem objects whose names begin with
any letter from a through c or begin with s or t:
file [a-cst]*
Likewise, multiple sets of ranges can be specified. Thus, for instance, the following would return
information about all objects whose names begin with the first three or the final three lower case letters of
the alphabet:
file [a-cx-z]*
Sometimes it can be useful to have a succession of square bracket wildcards. For example, the following
would display all filenames in the current directory that consist of jones followed by a three-digit
number:
ls jones[0-9][0-9][0-9]
\ (backslash) = is used as an "escape" character, i.e. to protect a subsequent special character. Thus, "\\"
searches for a backslash. Note you may need to use quotation marks and backslash(es).
^ (caret) = means "the beginning of the line". So "^a" means find a line starting with an "a".
$ (dollar sign) = means "the end of the line". So "a$" means find a line ending with an "a".
For example, this command searches the file myfile for lines starting with an "s" and ending with an "n", and prints them to the standard output (screen):
grep '^s.*n$' myfile
Soft Links and Hard Links
Example:
Create two files:
$ touch blah1
$ touch blah2
Enter some data into them:
$ echo "Cat" > blah1
$ echo "Dog" > blah2
Now create a hard link to blah1 and a soft (symbolic) link to blah2:
$ ln blah1 blah1-hard
$ ln -s blah2 blah2-soft
$ ls -l
blah1
blah1-hard
blah2
blah2-soft -> blah2
Now rename the file that the hard link was made from:
$ mv blah1 blah1-new
$ cat blah1-hard
Cat
blah1-hard points to the inode, the contents, of the file - that wasn't changed.
$ mv blah2 blah2-new
$ ls blah2-soft
blah2-soft
$ cat blah2-soft
cat: blah2-soft: No such file or directory
The contents of the file could not be found because the soft link points to the name, that was changed,
and not to the contents.
Similarly, If blah1 is deleted, blah1-hard still holds the contents; if blah2 is deleted, blah2-soft is just a link
to a non-existing file.
List folders and files in a directory
Written By: Alexandros Mavridis

The command: ls - list directory contents

Information Commands:
ls --version
ls --help
info ls
man ls

Options Used In This Document:
-r, --reverse                 reverse order while sorting
-l                            use a long listing format
-t                            sort by modification time, newest first
-i, --inode                   print the index number of each file
-a, --all                     do not ignore entries starting with .
-d, --directory               list directories themselves, not their contents
-p, --indicator-style=slash   append / indicator to directories
--group-directories-first     group directories before files

Contents: Listing Folders (non hidden; hidden; non hidden and hidden), Listing Files (non hidden; hidden; non hidden and hidden), Listing Folders and Files (non hidden; hidden; non hidden and hidden), Sources.

A. Listing Folders

ls -d */ | wc -l
    Counts the non hidden folders in the current working directory.

B. Listing Files

Non Hidden Files
ls -pltr | grep -v /
ls -ltr | grep -v ^d
ls -lt | grep '^\-'
    Prints in detail all non hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -pi | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -pri | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -pli | grep -v /
ls -li | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -plri | grep -v /
ls -lri | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -pti | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -ptri | grep -v /
    Prints all non hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -plti | grep -v /
ls -lti | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -pltri | grep -v /
ls -ltri | grep -v ^d
    Prints in detail all non hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.
Non Hidden And Hidden Files
ls -pa | grep -v /
    Prints all non hidden and hidden files in the current working directory in alphabetical order.

ls -pra | grep -v /
    Prints all non hidden and hidden files in the current working directory in reverse alphabetical order.

ls -pla | grep -v /
ls -la | grep -v ^d
ls -la | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in alphabetical order.

ls -prla | grep -v /
ls -rla | grep -v ^d
ls -lra | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in reverse alphabetical order.

ls -pltra | grep -v /
ls -ltra | grep -v ^d
ls -ltra | grep '^\-'
    Prints in detail all non hidden and hidden files in the current working directory in reverse chronological order, going from oldest to newest.

ls -pai | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -prai | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -plai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in alphabetical order.

ls -prlai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in reverse alphabetical order.

ls -ptia | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -ptrai | grep -v /
    Prints all non hidden and hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.

ls -pltai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in chronological order, going from newest to oldest.

ls -pltrai | grep -v /
    Prints in detail all non hidden and hidden files in the current working directory, including inode numbers, in reverse chronological order, going from oldest to newest.
_____________________
Sources:
https://stackoverflow.com/questions/14352290/listing-only-directories-using-ls-in-bash-an-examination
https://serverfault.com/questions/368370/how-do-i-exclude-directories-when-listing-files
https://www.cyberciti.biz/faq/bash-shell-display-only-hidden-dot-files/
https://askubuntu.com/questions/468901/how-to-show-only-hidden-files-in-terminal
A command is a program that tells the Linux system to do something. It has the form:
command [options] [arguments]
where an argument indicates on what the command is to perform its action, usually a file or series of
files. An option modifies the command, changing the way it performs. Commands are case sensitive.
command and Commands are not the same.
Options are generally preceded by a hyphen (-), and for most commands, more than one option can be
strung together, in the form:
command -[option][option][option]
e.g.:
ls -alR = will perform a long list on all files in the current directory and recursively
perform the list through all sub-directories.
For most commands you can separate the options, preceding each with a hyphen, e.g.:
command -option1 -option2 -option3
as in: ls -a -l -R
Some commands have options that require parameters. Options requiring parameters are usually
specified separately, e.g.:
lpr -Pprinter3 -# 2 file
will send 2 copies of file to printer3.
These are the standard conventions for commands. However, not all Linux commands will follow the
standard. Some don’t require the hyphen before options and some won’t let you group options
together, i.e. they may require that each option be preceded by a hyphen and separated by whitespace
from other options and arguments.
Options and syntax for a command are listed in the man page for the command.
File Permissions
• UNIX is a multi-user system. Every file and directory in your account can be protected from or
made accessible to other users by changing its access permissions. Every user has responsibility
for controlling access to their files.
Example:
A permission of 4 or r would specify read permissions. If the permissions desired are read and write,
the 4 (representing read) and the 2 (representing write) are added together to make a permission of 6.
Therefore, a permission setting of 6 would allow read and write permissions.
Common Options
-f force (no error message is generated if the change is unsuccessful)
-R recursively descend through the directory structure and change the modes
Examples
If the permission desired for file1 is user: read, write, execute; group: read, execute; other: read, execute, the command to use would be
chmod 755 file1 or chmod u=rwx,go=rx file1
Reminder: When giving permissions to group and other to use a file, it is necessary to allow at least execute permission to the directories for the path in which the file is located. The easiest way to do this is to be in the directory for which permissions need to be granted and run:
chmod go+x .
File Ownership
Syntax
chown [options] user[:group] file (SVR4)
chown [options] user[.group] file (BSD)
Common Options
-R recursively descend through the directory structure
-f force, and don’t report any errors
Examples
# chown new_owner file
Syntax
chgrp [options] group file
Common Options
-R recursively descend through the directory structure
-f force, and don’t report any errors
Examples
% chgrp new_group file
Getting Help
o The "man" command gives you access to an on-line manual which potentially contains a complete description of every command available on the system. In practice, the manual usually contains a subset of all commands.
o man can also provide you with one line descriptions of commands which match a specified
keyword
o The online manual is divided into sections:
Section Description
------- -----------
1 User Commands
2 System Commands
3 Subroutines
4 Devices
5 File Formats
6 Games
7 Miscellaneous
8 System Administration
l Local Commands
n New Commands
By default, the man page in section 1 is displayed if multiple sections exist. You can access a
different section by specifying the section. For example:
man 8 telnetd
Keyword searching: use the -k option followed by the keyword. Two examples appear below.
man -k mail
man -k 'copy files'
Command-Line Completion
The following example shows how command-line completion works in Bash. Other command-line shells may perform slightly differently.
fir
Then we press Tab ↹ and because the only command in our system that starts with "fir" is "firefox", it
will be completed to:
firefox
Now we want to open the file introduction-to-command-line-completion.html, so we type i:
firefox i
and press Tab ↹ again.
But this time introduction-to-command-line-completion.html is not the only file in the current directory
that starts with "i". The directory also contains files introduction-to-bash.html and introduction-to-
firefox.html. The system can't decide which of these filenames we wanted to type, but it does know that
the file must begin with "introduction-to-", so the command will be completed to:
firefox introduction-to-
We then type c, so that the command line reads:
firefox introduction-to-c
and press Tab ↹ once more, and the command is completed to:
firefox introduction-to-command-line-completion.html
In short we typed:
fir Tab ↹ i Tab ↹ c Tab ↹
This is just eight keystrokes, which is considerably less than the 52 keystrokes we would have needed to type without using command-line completion.
Rotating completion
The following example shows how command-line completion works with rotating completion, such as
Windows's CMD uses.
We type:
firefox i
and press Tab ↹, which completes to the first matching filename:
firefox introduction-to-bash.html
Pressing Tab ↹ again rotates to the next match:
firefox introduction-to-command-line-completion.html
With rotating completion, only a few keystrokes are needed instead of typing the full filename.
echo command
echo “Your text goes here” > filename (To add text and create a new file)
echo “Additional text” >> filename (To append to an existing file)
cp command
cp existing-file new-filename (To copy an existing file to a new file)
cat existing-file > new-filename (cat the content of an existing file and add to new file. This
command does the same as above)
vi command
vi filename (Create a new file and enter text using vi insert mode)
Pipes
A pipe is used by the shell to connect the stdout of one command directly to the stdin of another
command.
The symbol for a pipe is the vertical bar ( | ). The command syntax is:
command1 [options] [arguments] | command2 [options] [arguments]
Pipes accomplish with one command what otherwise would take intermediate files and multiple
commands. For example, operation 1 and operation 2 are equivalent:
Operation 1
who > temp
sort temp
Operation 2
who | sort
ls -al | more - view a long listing of the current directory, one screen at a time
who | more - view the list of logged-in users, one screen at a time
ps ug | grep myuserid - display only the process information for myuserid
who | grep kelly - check whether the user kelly is currently logged in
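Pipes can also be chained through several commands. A couple of illustrative combinations:

who | sort | more            # sorted list of logged-in users, one screen at a time
who | wc -l                  # count the current login sessions
ls -l | grep "^d" | wc -l    # count the subdirectories listed in the current directory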
File Maintenance Commands
cp
Copies files. Will overwrite unless otherwise specified. Must also have write permission in the
destination directory.
Example:
cp sample.f sample2.f - copies sample.f to sample2.f
cp -R dir1 dir2 - copies contents of directory dir1 to dir2
cp -i file.1 file.new - prompts if file.new will be overwritten
cp *.txt chapt1 - copies all files with .txt suffix to directory
chapt1
cp /usr/doc/README ~ - copies file to your home directory
cp ~betty/index . - copies the file "index" from user betty's
home directory to current directory
rm
Deletes/removes files or directories if file permissions permit
Example:
rm sample.f - deletes sample.f
rm chap?.txt - deletes all files with chap as the first four
characters of their name and with .txt as the last
four characters of their name
rm -i * - deletes all files in current directory but asks
first for each file
rm -r /olddir - recursively removes all files in the directory
olddir, including the directory itself
mv
Moves files. It will overwrite unless otherwise specified. Must also have write permission in the
destination directory.
Example:
mv sample.f sample2.f - moves sample.f to sample2.f
mv dir1 newdir/dir2 - moves contents of directory dir1 to
newdir/dir2
mv -i file.1 file.new - prompts if file.new will be overwritten
mv *.txt chapt1 - moves all files with .txt suffix to
directory chapt1
mkdir
Make directory. Will create the new directory in your working directory by default.
Example:
mkdir /u/training/data
mkdir data2
rmdir
Remove directory. Directories must be empty before you remove them.
rmdir project1
To recursively remove nested directories, use the rm command with the -r option:
rm -r directory_name
chgrp
Changes the group ownership of a file or directory.
Syntax
chgrp [ -f ] [ -h ] [-R ] Group { File ... | Directory ... }
chgrp -R [ -f ] [ -H | -L | -P ] Group { File... | Directory... }
Description
The chgrp command changes the group of the file or directory specified by the File or Directory parameter
to the group specified by the Group parameter. The value of the Group parameter can be a group name
from the group database or a numeric group ID. When a symbolic link is encountered and you have not
specified the -h or -P flags, the chgrp command changes the group ownership of the file or directory
pointed to by the link and not the group ownership of the link itself.
chown
The chown command is used to change the owner and group of files, directories and links. By default,
the owner of a filesystem object is the user that created it. The group is a set of users that share the same
access permissions (i.e., read, write and execute) for that object. The basic syntax for using chown to
change owners is
chown new_owner object(s)
new_owner is the user name or the numeric user ID (UID) of the new owner, and object is the name of
the target file, directory or link. The ownership of any number of objects can be changed simultaneously.
For example, the following would transfer the ownership of a file named file1 and a directory named dir1
to a new owner named alice:
chown alice file1 dir1
In order to perform the above command, most systems are configured by default to require access to the
root (i.e., system administrator) account, which can be obtained on a personal computer by using the su
(i.e., substitute user) command. An error message will be returned in the event that the user does not
have the proper permissions or that the specified new owner or target(s) does not exist (or is spelled
incorrectly).
The ownership and group of a filesystem object can be confirmed by using the ls command with its -l (i.e.,
long) option. The owner is shown in the third column and the group in the fourth. Thus, for example, the
owner and group of file1 can be seen by using the following:
ls -l file1
chown can also be used to change only the group of a filesystem object, with the new group preceded by a colon or a dot:
chown :group2 file1
or
chown .group2 file1
The only difference between the two versions is that the name or numeric ID of the new group is
preceded directly by a colon in the former and by a dot in the latter; there is no functional difference. In
this case, chown performs the same function as the chgrp (i.e., change group) command.
The owner and group can be changed simultaneously by combining the syntaxes for changing owner and
group. That is, the name or UID of the new owner is followed directly (i.e., with no intervening spaces)
by a period or colon, which is followed directly by the name or numeric ID of the new group, which, in
turn, is followed by a space and then by the names of the target files, directories and/or links.
Thus, for example, the following would change the owner of a file named file2 to the user with the user
name bob and change its group to group2:
chown bob:group2 file2
If a user name or UID is followed directly by a colon or dot but no group name is provided, then the
group is changed to that user's login group. Thus, for example, the following would change the
ownership of file3 to cathy and would also change that file's group to the login group of the new owner
(which by default is usually the same as the new owner):
chown cathy: file3
Among chown's few options is -R, which operates on filesystem objects recursively. That is, when used
on a directory, it can change the ownership and/or group of all objects within the directory tree beginning
with that directory rather than just the ownership of the directory itself.
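For example (alice, group2 and dir1 are placeholder names), the following would recursively change both the owner and the group of dir1 and everything inside it:

chown -R alice:group2 dir1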
The -v (verbose) option provides information about every object processed. The -c option is similar, but reports only when a change is made. The --help option displays brief usage information, and the --version option outputs version information.
chmod
Change access permissions, change mode.
Syntax
chmod [Options]... Mode [,Mode]... file...
chmod [Options]... Numeric_Mode file...
chmod [Options]... --reference=RFile file...
Options
-f, --silent, --quiet suppress most error messages
-v, --verbose output a diagnostic for every file processed
-c, --changes like verbose but report only when a change is made
--reference=RFile use RFile's mode instead of MODE values
-R, --recursive change files and directories recursively
--help display help and exit
--version output version information and exit
chmod changes the permissions of each given file according to mode, where mode describes the
permissions to modify. Mode can be specified with octal numbers or with letters. Using letters is easier to
understand for most people.
Permissions:
Numeric mode:
From one to four octal digits
Any omitted digits are assumed to be leading zeros.
The first digit = selects attributes for the set user ID (4) and set group ID (2) and save text image (1)
The second digit = permissions for the user who owns the file: read (4), write (2), and execute (1)
The third digit = permissions for other users in the file's group: read (4), write (2), and execute (1)
The fourth digit = permissions for other users NOT in the file's group: read (4), write (2), and execute (1)
The octal (0-7) value is calculated by adding up the values for each digit
User (rwx) = 4+2+1 = 7
Group(rx) = 4+1 = 5
World (rx) = 4+1 = 5
chmod mode = 0755
Examples
chmod 400 file - Read by owner
chmod 040 file - Read by group
chmod 004 file - Read by world
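Putting the digits together (file is a placeholder name):

chmod 755 file     # rwxr-xr-x : owner read/write/execute, group and world read/execute
chmod 644 file     # rw-r--r-- : owner read/write, group and world read only
chmod 700 file     # rwx------ : owner full access, no access for anyone else
ls -l file         # the first column of the listing shows the resulting permission string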
Symbolic Mode
The format of a symbolic mode is a combination of the letters +-= rwxXstugoa
Multiple symbolic operations can be given, separated by commas.
The full syntax is [ugoa...][[+-=][rwxXstugo...]...][,...] but this is explained below.
A combination of the letters ugoa controls which users' access to the file will be changed:
User letter
The user who owns it u
Other users in the file's Group g
Other users not in the file's group o
All users a
If none of these are given, the effect is as if 'a' were given, but bits that are set in the umask are not affected.
All users a is effectively user + group + others
The operator '+' causes the permissions selected to be added to the existing permissions of each file; '-'
causes them to be removed; and '=' causes them to be the only permissions that the file has.
The letters 'rwxXstugo' select the new permissions for the affected users:
Permission letter
Read r
Write w
Execute (or access for directories) x
Execute only if the file is a directory
(or already has execute permission for some user) X
Set user or group ID on execution s
Save program text on swap device t
Examples
Deny execute permission to everyone:
chmod a-x file
Allow everyone to read, write, and execute the file and turn on the set group-ID:
chmod =rwx,g+s file
Notes:
When chmod is applied to a directory:
read = list files in the directory
write = add new files to the directory
execute = access files in the directory
chmod never changes the permissions of symbolic links. This is not a problem since the permissions of
symbolic links are never used. However, for each symbolic link listed on the command line, chmod
changes the permissions of the pointed-to file. In contrast, chmod ignores symbolic links encountered
during recursive directory traversals.
Syntax
cat [options] [file]
Common Options
-n precede each line with a line number
-v display non-printing characters, except tabs, new-lines, and form-feeds
-e display $ at the end of each line (prior to new-line) (when used with -v option)
Examples
% cat filename
You can list a series of files on the command line, and cat will concatenate them, starting each in turn,
immediately after completing the previous one, e.g.:
% cat file1 file2 file3
Syntax
more [options] [+/pattern] [filename]
less [options] [+/pattern] [filename]
pg [options] [+/pattern] [filename]
Options
more less pg Action
-c -c -c clear display before displaying
-i ignore case
-w default default don’t exit at end of input, but prompt and wait
-lines -lines # of lines/screenful
+/pattern +/pattern +/pattern search for the pattern
Internal Controls
more displays (one screen at a time) the file requested
<space bar> to view next screen
<return> or <CR> to view one more line
q to quit viewing the file
h help
b go back up one screenful
/word search for word in the remainder of the file
See the man page for additional options
less similar to more; see the man page for options
pg the SVR4 equivalent of more (page)
-------------------------------------------------------------------------------
Syntax
echo [string]
Common Options
-n don’t print <new-line> (BSD, shell built-in)
\c don’t print <new-line> (SVR4)
\0n where n is the 8-bit ASCII character code (SVR4)
\t tab (SVR4)
\f form-feed (SVR4)
\n new-line (SVR4)
\v vertical tab (SVR4)
Examples
% echo Hello Class or echo "Hello Class"
To prevent the line feed:
% echo -n Hello Class or echo "Hello Class \c"
where the style to use in the last example depends on the echo command in use.
The \x options must be within pairs of single or double quotes, with or without other string characters.
-------------------------------------------------------------------------------
Syntax
head [options] file
Common Options
-n number number of lines to display, counting from the top of the file
-number same as above
Examples
By default head displays the first 10 lines. You can display more with the "-n number", or
"-number" options, e.g., to display the first 40 lines:
% head -40 filename or head -n 40 filename
-------------------------------------------------------------------------------
more
Browses/displays files one screen at a time.
Example:
more sample.f
-------------------------------------------------------------------------------
Syntax
tail [options] file
Common Options
-number number of lines to display, counting from the bottom of the file
Examples
The default is to display the last 10 lines, but you can specify different line or byte numbers, or a
different starting point within the file. To display the last 30 lines of a file use the -number style:
% tail -30 filename
Filter / Text Processing Commands
grep
The grep utility is used to search files for lines matching a regular expression. Regular
expressions, such as those shown above, are best enclosed in single quotes when given to the
grep utility. The egrep utility provides searching capability using an extended set of
meta-characters. The syntax of the grep utility, some of the available options, and a few examples are
shown below.
Syntax
grep [options] regexp [file[s]]
Common Options
-i ignore case
-c report only a count of the number of lines containing matches, not the matches
themselves
-v invert the search, displaying only lines that do not match
-n display the line number along with the line on which a match was found
-s work silently, reporting only the final status:
0, for match(es) found
1, for no matches
2, for errors
-l list filenames, but not lines, in which matches were found
Examples
Consider the following file:
cat num.list
 1 15 fifteen
 2 14 fourteen
 3 13 thirteen
 4 12 twelve
 5 11 eleven
 6 10 ten
 7 9 nine
 8 8 eight
 9 7 seven
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one
Here are some grep examples using this file. In the first we’ll search for the number 15:
> grep '15' num.list
1 15 fifteen
15 1 one
Now we’ll use the "-c" option to count the number of lines matching the search criterion:
> grep -c '15' num.list
2
Here we’ll be a little more general in our search, selecting for all lines containing the character 1
followed by either of 1, 2 or 5:
> grep '1[125]' num.list
1 15 fifteen
4 12 twelve
5 11 eleven
11 5 five
12 4 four
15 1 one
Now we’ll search for all lines that begin with a space:
> grep '^ ' num.list
1 15 fifteen
2 14 fourteen
3 13 thirteen
4 12 twelve
5 11 eleven
6 10 ten
7 9 nine
8 8 eight
9 7 seven
The latter could also be done by using the -v option with the original search string, e.g.:
> grep -v '^ ' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one
Here we search for all lines that begin with the characters 1 through 9:
> grep '^[1-9]' num.list
10 6 six
11 5 five
12 4 four
13 3 three
14 2 two
15 1 one
This example will search for any instances of t followed by zero or more occurrences of e:
> grep 'te*' num.list
1 15 fifteen
2 14 fourteen
3 13 thirteen
4 12 twelve
6 10 ten
8 8 eight
13 3 three
14 2 two
This example will search for any instances of t followed by one or more occurrences of e:
> grep 'tee*' num.list
1 15 fifteen
2 14 fourteen
3 13 thirteen
6 10 ten
We can also take our input from a program, rather than a file. Here we report on any lines output by
the who program that begin with the letter l.
> who | grep '^l'
lcondron ttyp0 Dec 1 02:41 (lcondron-pc.acs.)
sed
The non-interactive, stream editor, sed, edits the input stream, line by line, making the specified
changes, and sends the result to standard output.
Syntax
sed [options] edit_command [file]
The format for the editing commands are:
[address1[,address2]][function][arguments]
where the addresses are optional and can be separated from the function by spaces or tabs. The
function is required. The arguments may be optional or required, depending on the function in use.
Line-number Addresses are decimal line numbers, starting from the first input line and incremented
by one for each. If multiple input files are given the counter continues cumulatively through the files.
The last input line can be specified with the "$" character.
Context Addresses are the regular expression patterns enclosed in slashes (/).
The substitution function has the form s/pattern/replacement/[flags] and should be quoted with
single quotes (') if additional options or functions are specified. The search patterns are identical
to context addresses, except that while they are normally enclosed in slashes (/), any normal
character other than <space> and <newline> is allowed to function as the delimiter.
The replacement string is not a regular expression pattern; characters do not have special meanings
here, except for & (the entire string matched by the pattern) and \n (the nth grouped sub-expression).
These special characters can be escaped with a backslash (\) to remove their special meaning.
Common Options
-e script edit script
-n don’t print the default output, but only those lines specified by p or s///p functions
-f script_file take the edit scripts from the file, script_file
Examples
This example changes all incidents of a comma (,) into a comma followed by a space (, ) when doing
output:
% cat filey | sed s/,/,\ /g
The following example removes all incidents of Jr preceded by a space ( Jr) in filey:
% cat filey | sed s/\ Jr//g
To perform multiple operations on the input precede each operation with the -e (edit) option and
quote the strings. For example, to filter for lines containing "Date: " and "From: " and replace these
without the colon (:), try:
sed -e 's/Date: /Date /' -e 's/From: /From /'
To print only those lines of the file from the one beginning with "Date:" up to, and including, the one
beginning with "Name:" try:
sed -n '/^Date:/,/^Name:/p'
To print only the first 10 lines of the input (a replacement for head):
sed -n 1,10p
awk searches its input for patterns and performs the specified operation on each line, or fields of the
line, that contain those patterns. You can specify the pattern matching statements for awk either on
the command line, or by putting them in a file and using the -f program_file option.
Syntax
awk program [file]
where program is composed of one or more:
pattern { action }
fields. Each input line is checked for a pattern match with the indicated action being taken on a
match. This continues through the full sequence of patterns, then the next line of input is checked.
Input is divided into records and fields. The default record separator is <newline>, and the variable
NR keeps the record count. The default field separator is whitespace, spaces and tabs, and the
variable NF keeps the field count. Input field, FS, and record, RS, separators can be set at any time to
match any single character. Output field, OFS, and record, ORS, separators can also be changed to
any single character, as desired. $n, where n is an integer, is used to represent the nth field of the
input record, while $0 represents the entire input record.
BEGIN and END are special patterns matching the beginning of input, before the first field is read,
and the end of input, after the last field is read, respectively.
Printing is allowed through the print, and formatted print, printf, statements.
Comma separated patterns define the range for which the pattern is applicable, e.g.:
/first/,/last/
selects all lines starting with the one containing first, and continuing inclusively, through the one
containing last.
Regular expressions must be enclosed with slashes (/) and meta-characters can be escaped with the
backslash (\). Regular expressions can be grouped with the operators:
| or, to separate alternatives
+ one or more
? zero or one
So the program:
$1 ~ /[Ff]rank/
is true if the first field, $1, contains "Frank" or "frank" anywhere within the field. To match a field
identical to "Frank" or "frank" use:
$1 ~ /^[Ff]rank$/
Awk does not declare variables as strings or numbers in advance. If neither operand is known to be
numeric, then string comparisons are performed; otherwise, a numeric comparison is done. In the
absence of any information to the contrary, a string comparison is done, so that:
$1 > $2
will compare the string values. To ensure a numerical comparison do something similar to:
( $1 + 0 ) > $2
The mathematical functions: exp, log and sqrt are built-in
Flow control statements using if-else, while, and for are allowed with C type syntax:
for (i=1; i <= NF; i++) {actions}
while (i<=NF) {actions}
if (i<NF) {actions}
Common Options
-f program_file read the commands from program_file
-Fc use character c as the field separator character
Examples
% cat filex | tr a-z A-Z | awk -F: '{printf ("7R %-6s %-9s %-24s \n",$1,$2,$3)}'>upload.file
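Two simpler sketches of the pattern { action } form (the files and fields used here are only illustrations):
% awk -F: '{ print $1, $3 }' /etc/passwd (print the first and third colon-separated fields: login name and UID)
% awk 'END { print NR, "lines" }' filex (print the total number of input records)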
The cut command allows a portion of a file to be extracted for another use.
Syntax
cut [options] [file(s)]
Common Options
-c character_list character positions to select (first character is 1)
-d delimiter field delimiter (defaults to <TAB>)
-f field_list fields to select (first field is 1)
Both the character and field lists may contain comma-separated or blank-character-separated
numbers (in increasing order), and may contain a hyphen (-) to indicate a range. A number omitted
before the hyphen (e.g. -5) indicates a range starting with the first character or field, and a number
omitted after the hyphen (e.g. 5-) indicates a range ending with the last character or field.
Blank-character-separated lists must be enclosed in quotes. The field delimiter should be enclosed in
quotes if it has special meaning to the shell, e.g. when specifying a <space> or <TAB> character.
Examples
In these examples we will use the file users:
If you only wanted the username and the user's real name, the cut command could be used to get only
that information:
The cut command can also be used with other options. The -c option allows characters to be the
selected cut. To select the first 4 characters:
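The commands themselves would look something like the following (a sketch, assuming the username is field 1 and the real name is field 2, separated by tabs as in the users file shown later):
% cut -f1,2 users (username and real name only)
% cut -c1-4 users (first 4 characters of each line)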
The paste command allows two files to be combined side-by-side. The default delimiter between the
columns in a paste is a tab, but options allow other delimiters to be used.
Syntax
paste [options] file1 file2
Common Options
-d list list of delimiting characters
-s concatenate lines
The list of delimiters may include a single character such as a comma; a quoted string, such as a
space; or any of the following escape sequences:
\n <newline> character
\t <tab> character
\\ backslash character
\0 empty string (non-null character)
Examples
Given the file users:
jdoe John Doe 4/15/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
and the file phone:
John Doe 555-6634
Laura Smith 555-3382
Paul Chen 555-0987
Jake Hsu 555-1235
Sue Phillip 555-7623
the paste command can be used in conjunction with the cut command to create a new file, listing, that
includes the username, real name, last login, and phone number of all the users. First, extract the phone
numbers into a temporary file, temp.file:
% cut -f2 phone > temp.file
555-6634
555-3382
555-0987
555-1235
555-7623
The result can then be pasted to the end of each line in users and directed to the new file, listing:
% paste users temp.file > listing
jdoe John Doe 4/15/96 555-6634
lsmith Laura Smith 3/12/96 555-3382
pchen Paul Chen 1/5/96 555-0987
jhsu Jake Hsu 4/17/96 555-1235
sphilip Sue Phillip 4/2/96 555-7623
This could also have been done on one line without the temporary file as:
% cut -f2 phone | paste users - > listing
with the same results. In this case the hyphen (-) is acting as a placeholder for an input field (namely,
the output of the cut command).
The sort command is used to order the lines of a file. Various options can be used to choose the order as
well as the field on which a file is sorted. Without any options, the sort compares entire lines in the file
and outputs them in ASCII order (numbers first, upper case letters, then lower case letters).
Syntax
sort [options] [+pos1 [ -pos2 ]] file
Common Options
-b ignore leading blanks (<space> & <tab>) when determining starting and
ending characters for the sort key
-d dictionary order, only letters, digits, <space> and <tab> are significant
-f fold upper case to lower case
-k keydef sort on the defined keys (not available on all systems)
-i ignore non-printable characters
-n numeric sort
-o outfile output file
-r reverse the sort
-t char use char as the field separator character
-u unique; omit multiple copies of the same line (after the sort)
+pos1 [-pos2] (old style) provides functionality similar to the "-k keydef" option.
For the +/-position entries pos1 is the starting word number, beginning with 0 and pos2 is the ending
word number. When -pos2 is omitted the sort field continues through the end of the line. Both pos1 and
pos2 can be written in the form w.c, where w is the word number and c is the character within the word.
For c, 0 specifies the delimiter preceding the first character, and 1 is the first character of the word. These
entries can be followed by type modifiers, e.g. n for numeric, b to skip blanks, etc.
The keydef field of the "-k" option has the syntax:
start_field[type][,end_field[type]]
where:
start_field, end_field define the keys to restrict the sort to a portion of the line
type modifies the sort; valid modifiers are given as the single characters (bdfiMnr)
from the similar sort options, e.g. a type b is equivalent to "-b", but applies
only to the specified field
Examples
In the file users:
jdoe John Doe 4/15/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
sort users yields the following:
jdoe John Doe 4/15/96
jhsu Jake Hsu 4/17/96
lsmith Laura Smith 3/12/96
pchen Paul Chen 1/5/96
sphilip Sue Phillip 4/2/96
If, however, a listing sorted by last name is desired, use the +pos option to specify which field to sort on
(fields are numbered starting at 0):
% sort +2 users
pchen Paul Chen 1/5/96
jdoe John Doe 4/15/96
jhsu Jake Hsu 4/17/96
sphilip Sue Phillip 4/2/96
lsmith Laura Smith 3/12/96
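On systems that support the newer "-k keydef" style, where fields are numbered starting at 1, the equivalent command would be:
% sort -k3 users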
A particularly useful sort option is the -u option, which eliminates any duplicate entries in a file while
ordering the file. For example, the file todays.logins:
sphillip
jchen
jdoe
lkeres
jmarsch
ageorge
lkeres
proy
jchen
shows a listing of each username that logged into the system today. If we want to know how many
unique users logged into the system today, using sort with the -u option will list each user only once.
(The command can then be piped into "wc -l" to get a number):
% sort -u todays.logins
ageorge
jchen
jdoe
jmarsch
lkeres
proy
sphillip
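Piping the result into "wc -l", as mentioned above, gives the count of unique users directly:
% sort -u todays.logins | wc -l
7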
Syntax
uniq [options] [+|-n] file [file.new]
Common Options
-d one copy of only the repeated lines
-u select only the lines not repeated
+n ignore the first n characters
-s n same as above (SVR4 only)
-n skip the first n fields, including any blanks (<space> & <tab>)
-f fields same as above (SVR4 only)
Examples
Consider the following file and example, in which uniq removes the 4th line from file and places the
result in a file called file.new.
$ cat file
1 2 3 6
4 5 3 6
7 8 9 0
7 8 9 0
$ uniq file file.new
$ cat file.new
1 2 3 6
4 5 3 6
7 8 9 0
Below, the -n option of the uniq command is used to skip the first 2 fields in file, and filter out lines
which are duplicates from the 3rd field onward.
$ uniq -2 file
1 2 3 6
7 8 9 0
Syntax
tee [options] [file[s]]
Common Options
-a append the output to the files
-i ignore interrupts
Examples
In this first example the output of who is displayed on the screen and stored in the file users.file:
> who | tee users.file
condron ttyp0 Apr 22 14:10 (lcondron-pc.acs.)
frank ttyp1 Apr 22 16:19 (nyssa)
condron ttyp9 Apr 22 15:52 (lcondron-mac.acs)
In this next example the output of who is sent to the files users.a and users.b. It is also piped to the
wc command, which reports the line count.
> who | tee users.a users.b | wc -l
3
In the following example a long directory listing is sent to the file files.long. It is also piped to the
grep command which reports which files were last modified in August.
> ls -l | tee files.long |grep Aug
1 drwxr-sr-x 2 condron 512 Aug 8 1995 News/
2 -rw-r--r-- 1 condron 1076 Aug 8 1995 magnus.cshrc
2 -rw-r--r-- 1 condron 1252 Aug 8 1995 magnus.login
uname -a
cat /etc/redhat-release
dmidecode
uname:
Sometimes it is required to quickly determine details like the kernel name, version, hostname, etc. of the
Linux box you are using.
Even though you can find all these details in the respective files present under the proc filesystem, it is
easier to use the uname utility to get this information quickly.
uname [OPTION]...
Now let's look at some examples that demonstrate the usage of the 'uname' command.
uname without any option
When the 'uname' command is run without any option, it prints just the kernel name. So the output
below shows that it is the 'Linux' kernel that is used by this system.
$ uname
Linux
You can also use uname -s, which also displays the kernel name.
$ uname -s
Linux
Use uname -n option to fetch the network node host name of your Linux box.
$ uname -n
dev-server
The output above will be the same as the output of the hostname command.
Get kernel release using -r option
uname command can also be used to fetch the kernel release information. The option -r can be used for
this purpose.
$ uname -r
2.6.32-100.28.5.el6.x86_64
uname command can also be used to fetch the kernel version information. The option -v can be used for
this purpose.
$ uname -v
#1 SMP Wed Feb 2 18:40:23 EST 2011
uname command can also be used to fetch the machine hardware name. The option -m can be used for
this purpose. This indicates that it is a 64-bit system.
$ uname -m
x86_64
uname command can also be used to fetch the processor type information. The option -p can be used for
this purpose. If uname is not able to fetch the processor type, it prints 'unknown' in the output.
$ uname -p
x86_64
Get the hardware platform using -i option
uname command can also be used to fetch the hardware platform information. The option -i can be used
for this purpose. If uname is not able to fetch the hardware platform, it prints 'unknown' in the output.
$ uname -i
x86_64
Get the operating system name using the -o option
uname command can also be used to fetch the operating system name. The option -o can be used for this
purpose.
For example :
$ uname -o
GNU/Linux
cat /etc/redhat-release:
This file provides information about your system distribution and its version.
On systems that are not CentOS or Red Hat you can check the similar release files with: cat /etc/*release*
Dmidecode:
dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-
readable format. This table contains a description of the system's hardware components, as well as other
useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can
retrieve this information without having to probe for the actual hardware.
Take a look at
man dmidecode
to find out all options. The most common option is the --type switch which takes one or more of the
following keywords:
Type Information
----------------------------------------
0 BIOS
1 System
2 Base Board
3 Chassis
4 Processor
5 Memory Controller
6 Memory Module
7 Cache
8 Port Connector
9 System Slots
10 On Board Devices
11 OEM Strings
12 System Configuration Options
13 BIOS Language
14 Group Associations
15 System Event Log
16 Physical Memory Array
17 Memory Device
18 32-bit Memory Error
19 Memory Array Mapped Address
20 Memory Device Mapped Address
21 Built-in Pointing Device
22 Portable Battery
23 System Reset
24 Hardware Security
25 System Power Controls
26 Voltage Probe
27 Cooling Device
28 Temperature Probe
29 Electrical Current Probe
30 Out-of-band Remote Access
31 Boot Integrity Services
32 System Boot
33 64-bit Memory Error
34 Management Device
35 Management Device Component
36 Management Device Threshold Data
37 Memory Channel
38 IPMI Device
39 Power Supply
Keyword Types
------------------------------
bios 0, 13
system 1, 12, 15, 23, 32
baseboard 2, 10
chassis 3
processor 4
memory 5, 6, 16, 17
cache 7
connector 8
slot 9
dmidecode --type slot
Permissions
Permissions on Unix and other systems like it are split into three classes:
User
Group
Other
If a user is not the owner, nor a member of the group, then they are classified as other.
Changing permissions
In order to change permissions, we need to first understand the two notations of permissions.
1. Symbolic notation
2. Octal notation
Symbolic notation
Symbolic notation is what you'd see on the left-hand side if you ran a command like ls -l in a terminal.
The first character in symbolic notation indicates the file type and isn't related to permissions in any way. The
remaining characters are in sets of three, each representing a class of permissions.
The first class is the user class. The second class is the group class. The third class is the other class.
Each of the three characters for a class represents the read, write and execute permissions.
Octal notation
Octal (base-8) notation consists of at least three digits (sometimes four; the extra left-most digit
represents the setuid bit, the setgid bit, and the sticky bit).
Each of the three right-most digits is the sum of its component bits: read = 4, write = 2, execute = 1.
So what number would you use if you wanted to set a permission to read and write? 4 + 2 = 6.
Let's use the examples from the symbolic notation section and show how they convert to octal notation:
Symbolic notation Octal notation Plain English
-rwxr--r-- 0744 user class can read/write/execute; group class can read; other class can read
-rw-rw-r-- 0664 user class can read/write; group class can read/write; other class can read
CHMOD commands
Now that we have a better understanding of permissions and what all of these letters and numbers mean,
let's take a look at how we can use the chmod command in our terminal to change permissions to anything
we'd like!
Permission (symbolic notation) CHMOD command Description
-rwxrwxrwx chmod 0777 filename; chmod -R 0777 dir all classes can read/write/execute
-rw-r--r-- chmod 0644 filename; chmod -R 0644 dir user class can read/write; all others can read
-rw-rw-rw- chmod 0666 filename; chmod -R 0666 dir all classes can read/write
These are just some examples. Using your new-found knowledge, you can set any permissions you'd like! Just be
careful and make sure you don't break your system.
Access Control Lists(ACL) in Linux
What is ACL ?
Access control lists (ACLs) provide an additional, more flexible permission mechanism for file systems.
They are designed to supplement standard UNIX file permissions, and allow you to grant permissions on
any disk resource to any user or group.
Use of ACL :
Think of a scenario in which a particular user is not a member of a group created by you, but you still
want to give that user some read or write access. How can you do that without making the user a member
of the group? This is where Access Control Lists come into the picture: ACLs let us do this trick.
setfacl and getfacl are used for setting up ACL and showing ACL respectively.
For example :
getfacl test/seinfeld.txt
Output:
# file: test/seinfeld.txt
# owner: iafzal
# group: iafzal
user::rw-
group::rw-
other::r--
To add permissions for a user (user is either the user name or ID):
# setfacl -m "u:user:permissions" /path/to/file
To add permissions for a group (group is either the group name or ID):
# setfacl -m "g:group:permissions" /path/to/file
To allow all files or directories to inherit ACL entries from the directory they are within:
# setfacl -dm "entry" /path/to/dir
Example :
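For instance, to give a single user read and write access to the file shown above (the user name someuser is just a placeholder):
# setfacl -m "u:someuser:rw-" test/seinfeld.txt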
View ACL :
To show permissions :
# getfacl filename
Observe the difference between output of getfacl command before and after setting up ACL permissions
using setfacl command.
Remove ACL :
If you want to remove the set ACL permissions, use setfacl command with -b option.
For example :
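A minimal sketch (the file name is the one used in the earlier example):
# setfacl -b test/seinfeld.txt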
You can also check whether any extra ACL permissions are set by using the ls -l command.
An extra "+" sign after the permission bits, such as -rw-rwxr--+, indicates that extra ACL permissions
are set, which you can then inspect with the getfacl command.
vi Commands
Entering vi
Exiting vi
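A few of the most common ways to enter and leave vi (not an exhaustive list):
vi filename - edit filename (creating it if it does not exist)
:wq or ZZ - save changes and exit
:q! - quit without saving changes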
By character
left arrow - left one character
right arrow - right one character
backspace - left one character
space - right one character
h - left one character
l - right one character
By word
w - beginning of next word
nw - beginning of nth next word
b - back to previous word
nb - back to nth previous word
e - end of next word
ne - end of nth next word
By line
down arrow - down one line
up arrow - up one line
j - down one line
k - up one line
+ - beginning of next line down
- - beginning of previous line up
0 - first column of current line (zero)
^ - first character of current line
$ - last character of current line
By block
( - beginning of sentence
) - end of sentence
{ - beginning of paragraph
} - end of paragraph
By screen
CTRL-f - forward 1 screen
CTRL-b - backward 1 screen
CTRL-d - down 1/2 screen
CTRL-u - up 1/2 screen
H - top line on screen
M - mid-screen
L - last line on screen
Within file
nG - line n within file
1G - first line in file
G - last line in file
Inserting text
Deleting text
Changing text
Searching / Substituting
Miscellaneous commands
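A few of the most common commands for these categories (a minimal selection, not an exhaustive reference):
i / a / o - insert before the cursor / append after the cursor / open a new line below
x - delete the character under the cursor
dd - delete the current line
cw - change (replace) the current word
r - replace the single character under the cursor
/pattern - search forward for pattern (n repeats the search)
:s/old/new/g - substitute new for old on the current line
u - undo the last change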
--------------------------------------------------------------------------------
vi Options
You can change the way vi operates by changing the value of certain options which control
specific parts of the vi environment.
To set an option during a vi session, use one of the commands below as required by the option:
:set option_name
:set option_name=value
Options can be set permanently by putting them in a file called .exrc in your home directory. A
sample .exrc file appears below. Note that you do not need the colon (:) as part of the option
specification when you put the commands in a .exrc file. Also note that you can put them all on
one line.
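A minimal sample .exrc (the particular options chosen here are just an illustration):
set nu ignorecase wrapmargin=10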
useradd
To create a new user in Linux. Different options can be used to modify the user ID, home directory,
etc.
userdel
This command is used to delete a user. Please note that this command alone will not delete the user's
home directory; you will have to use the -r option to delete the home directory as well.
groupadd
Creates a new group
groupdel
Removes an existing group
usermod
Modify user attributes such as user home directory, user group, user ID etc.
User Files
/etc/passwd = This file has all user’s attributes
/etc/shadow = This file contains encrypted user password and password policy
/etc/group = All group and user group information
Creating User Accounts in Linux:
When we run the 'useradd' command in a Linux terminal, it performs the following major things:
It edits the /etc/passwd, /etc/shadow, /etc/group and /etc/gshadow files for the newly created user account.
Creates and populates a home directory for the new user.
Sets permissions and ownership on the home directory.
Below are the 15 most used useradd commands with practical examples in Linux, divided into two parts
from basic to advanced usage.
To add/create a new user, run the 'useradd' (or 'adduser') command followed by a username. The
username is the login name the user uses to log into the system.
Only one user can be added at a time, and the username must be unique (different from any username
that already exists on the system).
For example, to add a new user called ‘solider‘, use the following command.
When we add a new user in Linux with ‘useradd‘ command it gets created in locked state and to unlock
that user account, we need to set a password for that account with ‘passwd‘ command.
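The commands would look like this (a sketch following the description above):
# useradd solider
# passwd solider
A line similar to the following is then added to /etc/passwd: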
solider:x:504:504:solider:/home/solider:/bin/bash
The above entry contains a set of seven colon-separated fields, each of which has its own meaning. Let's
see what these fields are:
Username: User login name used to log into the system. It should be between 1 and 32 characters long.
Password: User password (or an x character), stored in the /etc/shadow file in encrypted format.
User ID (UID): Every user must have a User ID (UID), the User Identification Number. By default UID 0 is
reserved for the root user and UIDs ranging from 1-99 are reserved for other predefined accounts. UIDs
ranging from 100-999 are reserved for system accounts and groups.
Group ID (GID): The primary Group ID (GID) Group Identification Number stored in /etc/group file.
User Info: This field is optional and allows you to define extra information about the user, for example
the user's full name. This field is used by the 'finger' command.
Home Directory: The absolute location of user’s home directory.
Shell: The absolute location of a user’s shell i.e. /bin/bash.
By default the 'useradd' command creates a user's home directory under the /home directory, named after
the username. Thus, for example, we've seen above that the default home directory for the user 'solider' is
'/home/solider'.
However, this can be changed by using the '-d' option along with the location of the new home directory
(e.g. /home/newusers). For example, the following command will create a user 'solider' with the home
directory '/home/newusers'.
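A sketch of that command:
# useradd -d /home/newusers solider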
You can see the user home directory and other user related information like user id, group id, shell and
comments.
In Linux, every user has its own UID (Unique Identification Number). By default, whenever we create a
new user account in Linux, it is assigned a userid such as 500, 501, 502 and so on...
But we can create users with a custom userid using the '-u' option. For example, the following command
will create a user 'navin' with the custom userid '999'.
Now, let's verify that the user was created with the defined userid (999) using the following command.
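For example (verification shown by searching /etc/passwd):
# useradd -u 999 navin
# grep navin /etc/passwd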
NOTE: Make sure the value of a user ID must be unique from any other already created users on the
system.
4. Create a User with Specific Group ID
Similarly, every user has its own GID (Group Identification Number). We can create users with a specific
group ID as well, using the -g option.
Here in this example, we will add a user 'tarunika' with a specific UID and GID simultaneously with the
help of the '-u' and '-g' options.
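A sketch of such a command (the UID 1000 and GID 500 shown here are purely illustrative, and the group must already exist):
# useradd -u 1000 -g 500 tarunika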
The ‘-G‘ option is used to add a user to additional groups. Each group name is separated by a comma,
with no intervening spaces.
Here in this example, we are adding a user ‘solider‘ into multiple groups like admins, webadmin and
developer.
Next, verify the multiple groups assigned to the user with the id command.
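For example (assuming the groups admins, webadmin and developer already exist):
# useradd -G admins,webadmin,developer solider
# id solider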
In some situations we don't want to assign home directories to users, for security reasons. In such a
situation, when a user logs into a system that has just restarted, its home directory will be root. When
such a user uses the su command, its login directory will be the previous user's home directory.
To create users without their home directories, the '-M' option is used. For example, the following
command will create a user 'shilpi' without a home directory.
Now, let’s verify that the user is created without home directory, using ls command.
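A sketch of those commands (ls should then report that no /home/shilpi directory exists):
# useradd -M shilpi
# ls -ld /home/shilpi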
By default, when we add users with the 'useradd' command, the account never expires, i.e. the expiry
date is set to 0 (meaning never expired).
However, we can set an expiry date using the '-e' option, which takes a date in YYYY-MM-DD format. This is
helpful for creating temporary accounts for a specific period of time.
Here in this example, we create a user ‘aparna‘ with account expiry date i.e. 27th April 2014 in YYYY-
MM-DD format.
Next, verify the age of account and password with ‘chage‘ command for user ‘aparna‘ after setting
account expiry date.
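For example:
# useradd -e 2014-04-27 aparna
# chage -l aparna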
Here in this example, we will set an account password expiry of 45 days on the user 'solider' using the
'-e' and '-f' options.
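A sketch of such a command (the account expiry date is illustrative; -f 45 sets the number of inactive days allowed after the password expires):
# useradd -e 2014-04-27 -f 45 solider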
The '-c' option allows you to add custom comments, such as the user's full name, phone number, etc., to
the /etc/passwd file. The comment should be a single line; quote it if it contains spaces.
For example, the following command will add a user ‘mansi‘ and would insert that user’s full name,
Manis Khurana, into the comment field.
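For example:
# useradd -c "Manis Khurana" mansi
# grep mansi /etc/passwd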
Sometimes we add users which have nothing to do with a login shell, or we need to assign different shells
to our users. We can assign a different login shell to each user with the '-s' option.
Here in this example, we will add a user 'solider' without a login shell, i.e. with the '/sbin/nologin' shell.
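For example:
# useradd -s /sbin/nologin solider
# grep solider /etc/passwd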
The following command will create a user 'ravi' with the home directory '/var/www/ravi', the default shell
/bin/bash, and some extra information about the user.
[root@localhost ~]# useradd -m -d /var/www/ravi -s /bin/bash -c "Solider Owner" -U ravi
In the above command the '-m -d' options create a user with the specified home directory and the '-s'
option sets the user's default shell, i.e. /bin/bash. The '-c' option adds extra information about the user
and the '-U' argument creates/adds a group with the same name as the user.
12. Add a User with Home Directory, Custom Shell, Custom Comment and UID/GID
The command is very similar to the one above, but here we define the shell as '/bin/zsh' and a custom UID
and GID for the user 'tarunika', where '-u' defines the new user's UID (i.e. 1000) and '-g' defines the GID (i.e. 1000).
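A sketch of such a command (the home directory path and comment are assumptions, and a group with GID 1000 must already exist):
# useradd -m -d /home/tarunika -s /bin/zsh -c "Tarunika" -u 1000 -g 1000 tarunika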
13. Add a User with Home Directory, No Shell, Custom Comment and User ID
The following command is very similar to the above two commands; the only difference here is that we
disable the login shell for a user called 'avishek' with a custom user ID (i.e. 1019).
Here the '-s' option, which normally sets a login shell such as /bin/bash, is instead set to
'/usr/sbin/nologin'. That means the user 'avishek' will not be able to log into the system.
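A sketch of such a command (the home directory path and comment are assumptions):
# useradd -m -d /home/avishek -s /usr/sbin/nologin -c "Avishek" -u 1019 avishek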
14. Add a User with Home Directory, Shell, Custom Skell/Comment and User ID
The only change in this command is that we use the '-k' option to set a custom skeleton directory, i.e.
/etc/custom.skell, instead of the default /etc/skel. We also use the '-s' option to define a different shell, i.e.
/bin/tcsh, for the user 'navin'.
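A sketch of such a command (add -u <uid> to also set a custom user ID; the home directory path is an assumption):
# useradd -m -d /home/navin -k /etc/custom.skell -s /bin/tcsh navin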
15. Add a User without Home Directory, No Shell, No Group and Custom Comment
This following command is very different from the other commands explained above. Here we use the '-M'
option to create the user without a home directory, the '-N' argument to tell the system to create only the
username (without a user-private group), and the '-r' argument to create a system account.
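A sketch of such a command (USERNAME and the comment text are placeholders):
# useradd -M -N -r -c "System account" USERNAME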
For more information and options about useradd, run ‘useradd‘ command on the terminal to see
available options.
Switch Users and Sudo Access:
Switch Users:
Following is the user switch command that can be used to switch from one user to another
su - username
su - invokes a login shell after switching the user. A login shell resets most environment variables,
providing a clean base.
su username
just switches the user, providing a normal shell with an environment nearly the same as with the old user
Sudo Access:
sudo command-name
The above command "sudo command-name" runs the given command with root privileges, as long as the
invoking user is authorized to do so in the /etc/sudoers file.
1. Create a new user account using the useradd command.
# useradd USERNAME
2. Set a password for the new user using the passwd command.
# passwd USERNAME
Changing password for user USERNAME.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
3. Run visudo to edit the /etc/sudoers file. This file defines the policies applied by the
sudo command.
# visudo
4. Find the lines in the file that grant sudo access to users in the group wheel when enabled.
## Allows people in group wheel to run all commands
# %wheel ALL=(ALL) ALL
5. Remove the comment character (#) at the start of the second line. This enables the
configuration option.
6. Save your changes and exit the editor.
7. Add the user you created to the wheel group using the usermod command.
# usermod -aG wheel USERNAME
8. Test that the updated configuration allows the user you created to run commands using
sudo.
a. Use su to switch to the new user account that you created.
# su - USERNAME
b. Use the groups command to verify that the user is in the wheel group.
$ groups
USERNAME wheel
c. Use the sudo command to run the whoami command. As this is the first time you
have run a command using sudo from this user account, the banner message will
be displayed. You will also be prompted to enter the password for the user
account.
$ sudo whoami
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for USERNAME:
root
The last line of the output is the user name returned by the whoami command. If
sudo is configured correctly this value will be root.
You have successfully configured a user with sudo access. You can now log in to this user
account and use sudo to run commands as if you were logged in to the account of the root user.
Linux Editors
who
last
w
id
who
As a Linux user, sometimes you need to know some basic information, such as who is logged in, which
terminal they are on, and when they logged in.
Though this type of information can be obtained from various files in the Linux system, there is a
command line utility, 'who', that does exactly this for you. In this section, we will discuss the capabilities
and features provided by the 'who' command.
The basic usage is simply running the 'who' command (without any options). Consider the following example:
$ who
iafzal tty7 2012-08-07 05:33 (:0)
iafzal pts/0 2012-08-07 06:47 (:0.0)
iafzal pts/1 2012-08-07 07:58 (:0.0)
last command:
To find out when a particular user last logged in to the Linux or Unix server.
Syntax
last
last [userNameHere]
last [tty]
last [options] [userNameHere]
If no options are provided, the last command displays a list of all users logged in (and out). You can
filter the results by supplying the names of users or a terminal to show only those entries matching the
username/tty.
To find out who has recently logged in and out on your server, type:
$ last
Sample outputs:
The last command searches back through the /var/log/wtmp file and the output may go back
several months. Just use the less command or more command as follows to display output one
screen at a time:
$ last | more
last | less
Sample outputs:
Hide hostnames (Linux only) by passing the -R option:
$ last -R
Display full login and logout times and dates
By default the year is not displayed by the last command. You can force the last command to display full
login and logout times and dates by passing the -F option:
$ last -F
Sample outputs:
Display full user/domain names
$ last -w
The pseudo user reboot logs in each time the system is rebooted. Thus the following command will show a
log of all reboots since the log file was created:
$ last reboot
$ last -x reboot
Find out the system shutdown entries and run level changes:
$ last -x
$ last -x shutdown
The syntax is as follows to see the state of logins as of the specified time:
$ last -t YYYYMMDDHHMMSS
$ last -t YYYYMMDDHHMMSS userNameHere
w command:
Show who is logged on and what they are doing.
Options:
-h, --no-header do not print header
-u, --no-current ignore current process username
-s, --short short format
-f, --from show remote hostname field
-o, --old-style old style output
-i, --ip-addr display IP address instead of hostname (if possible)
id command:
Print user and group information for the specified USER,
or (when USER omitted) for the current user.
-a ignore, for compatibility with other versions
-Z, --context print only the security context of the current user
-g, --group print only the effective group ID
-G, --groups print all group IDs
-n, --name print a name instead of a number, for -ugG
-r, --real print the real ID instead of the effective ID, with -ugG
-u, --user print only the effective user ID
-z, --zero delimit entries with NUL characters, not whitespace;
not permitted in default format
--help display this help and exit
--version output version information and exit
System Utility Commands:
date
uptime
hostname
uname
which
cal
bc
date
Print or set the system date and time
Mandatory arguments to long options are mandatory for short options too.
-d, --date=STRING display time described by STRING, not 'now'
-f, --file=DATEFILE like --date once for each line of DATEFILE
-I[TIMESPEC], --iso-8601[=TIMESPEC] output date/time in ISO 8601 format.
TIMESPEC='date' for date only (the default),
'hours', 'minutes', 'seconds', or 'ns' for date
and time to the indicated precision.
-r, --reference=FILE display the last modification time of FILE
-R, --rfc-2822 output date and time in RFC 2822 format.
Example: Mon, 07 Aug 2006 12:34:56 -0600
--rfc-3339=TIMESPEC output date and time in RFC 3339 format.
TIMESPEC='date', 'seconds', or 'ns' for
date and time to the indicated precision.
Date and time components are separated by
a single space: 2006-08-07 12:34:56-06:00
-s, --set=STRING set time described by STRING
-u, --utc, --universal print or set Coordinated Universal Time (UTC)
--help display this help and exit
--version output version information and exit
uptime:
Tell how long the system has been running
uptime gives a one line display of the following information. The current time, how long the system has
been running, how many users are currently logged on, and the system load averages for the past 1, 5,
and 15 minutes
Options:
-p, --pretty show uptime in pretty format
-h, --help display this help and exit
-s, --since system up since
-V, --version output version information and exit
hostname
Show or set the system's host name
Program options:
-a, --alias alias names
-A, --all-fqdns all long host names (FQDNs)
-b, --boot set default hostname if none available
-d, --domain DNS domain name
-f, --fqdn, --long long host name (FQDN)
-F, --file read host name or NIS domain name from given file
-i, --ip-address addresses for the host name
-I, --all-ip-addresses all addresses for the host
-s, --short short host name
-y, --yp, --nis NIS/YP domain name
Description:
This command can get or set the host name or the NIS domain name. You can
also get the DNS domain or the FQDN (fully qualified domain name).
Unless you are using bind or NIS for host lookups you can change the
FQDN (Fully Qualified Domain Name) and the DNS domain name (which is
part of the FQDN) in the /etc/hosts file
uname
This command will give you system information. It is one of the important command that should be
used every time you login to a Linux/Unix machine.
which
Shows the full path of (shell) commands
cal and bc
The cal command displays a calendar and bc is a command-line calculator
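For example:
$ cal (calendar for the current month)
$ cal 2024 (calendar for an entire year)
$ echo "5 * 4" | bc (evaluate an expression; prints 20)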
Processes
Whenever you enter a command at the shell prompt, it invokes a program. While this
program is running it is called a process. Your login shell is also a process, created for
you upon logging in and existing until you logout.
LINUX is a multi-tasking operating system. Any user can have multiple processes
running simultaneously, including multiple login sessions. As you do your work within
the login shell, each command creates at least one new process while it executes.
Process id: every process in a LINUX system has a unique PID - process identifier.
ps - displays information about processes. Note that the ps command differs between
different LINUX systems - see the local ps man page for details.
To see your current shell's processes:
% ps
PID TTY TIME CMD
26450 pts/9 0:00 ps
66801 pts/9 0:00 -csh
To see a detailed list of all of your processes on a machine (current shell and all other
shells):
% ps uc
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
jsmith 26451 0.0 0.0 120 232 pts/9 R 21:01:14 0:00 ps
jsmith 43520 0.0 1.0 300 660 pts/76 S 19:18:31 0:00 elm
jsmith 66801 0.0 1.0 348 640 pts/9 S 20:49:20 0:00 csh
jsmith 112453 0.0 0.0 340 432 pts/76 S Mar 03 0:00 csh
% ps ug
USER PID %CPU %MEM SZ RSS TTY STAT STIME TIME COMMAND
root 0 0.0 0.0 8 8 - S Feb 08 32:57 swapper
root 1 0.1 0.0 252 188 - S Feb 08 39:16 /etc/init
root 514 72.6 0.0 12 8 - R Feb 08 28984:05 kproc
root 771 0.2 0.0 16 16 - S Feb 08 65:14 kproc
root 1028 0.0 0.0 16 16 - S Feb 08 0:00 kproc
{ lines deleted }
root 60010 0.0 0.0 1296 536 - S Mar 07 0:00 -ncd19:0
kdr 60647 0.0 0.0 288 392 pts/87 S Mar 06 0:00 -ksh
manfield 60968 0.0 0.0 268 200 - S 10:12:52 0:00 mwm
kelly 61334 0.0 0.0 424 640 - S 08:18:10 0:00 twm
sjw 61925 0.0 0.0 552 376 - S Mar 06 0:00 rlogin kanaha
mkm 62357 0.0 0.0 460 240 - S Feb 08 0:00 xterm
ishley 62637 0.0 0.0 324 152 pts/106 S Mar 06 0:00 xedit march2
tusciora 62998 0.0 0.0 340 448 - S Mar 06 0:05 xterm -e
dilfeath 63564 0.0 0.0 200 268 - S 07:32:45 0:00 xclock
tusciora 63878 0.0 0.0 548 412 - S Mar 06 0:41 twm
kill - use the kill command to send a signal to a process. In most cases, this will be a kill
signal, hence the command name. However, other types of signals are usually
supported. Note that you can only kill processes which you own. The command syntax
is:
kill [-signal] process_identifier(PID)
Examples:
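For example (the PID 15599 is only an illustration; use ps to find the real one):
kill 15599 (send the default TERM signal to process 15599)
kill -9 15599 (send SIGKILL to force the process to stop)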
You can also use CTRL-C to kill the currently running process.
Suspend a process: Use CTRL-Z.
Background a process: Normally, commands operate in the foreground - you can not do
additional work until the command completes. Backgrounding a command allows you
to continue working at the shell prompt.
To start a job in the background, use an ampersand (&) when you invoke the command:
myprog &
To put an already running job in the background, first suspend it with CTRL-Z and then
use the "bg" command:
Foreground a process: To move a background job to the foreground, find its "job"
number and then use the "fg" command. In this example, the jobs command shows that
two processes are running in the background. The fg command is used to bring the
second job (%2) to the foreground.
jobs
[1] + Running xcalc
[2] Running find / -name core -print
fg %2
Stop a job running in the background: Use the jobs command to find its job number, and
then use the stop command. You can then bring it to the foreground or restart execution
later.
jobs
[1] + Running xcalc
[2] Running find / -name core -print
stop %2
To kill a job running in the background, use the jobs command to find its job number, and
then use the kill command. Note that you can also use the ps and kill commands to
accomplish the same task.
jobs
[1] + Running xcalc
[2] Running find / -name core -print
kill %2
A program, or command, interacts with the kernel to provide the environment and perform the
functions called for by the user. A program can be: an executable shell file, known as a shell script; a
built-in shell command; or a source compiled, object code file.
The shell is a command line interpreter. The user interacts with the kernel through the shell. You can
write ASCII (text) scripts to be acted upon by a shell.
System programs are usually binary, having been compiled from C source code. These are located in
places like /bin, /usr/bin, /usr/local/bin, /usr/ucb, etc. They provide the functions that you normally
think of when you think of Linux. Some of these are sh, csh, date, who, more, and there are many
others.
crontab – Quick Reference
crontab is used to schedule task/jobs
cron meaning – There is no definitive explanation, but the most accepted answer, reportedly from
Ken Thompson (author of the Unix cron), is that the name cron comes from chronos, the Greek
word for 'time'.
What is cron? – Cron is a daemon started at system boot from the /etc/init.d scripts. If needed it can be
stopped/started/restarted using its init script or with a command such as service crond start on Linux
systems.
This document covers the following aspects of Unix/Linux cron jobs to help you understand
and implement cron jobs successfully
1. What is crontab?
2. What is a cron job or cron schedule?
3. Crontab Restrictions
4. Crontab Commands
5. Crontab file – syntax
6. Crontab Example
7. Crontab Environment
8. Disable Email
9. Generate log file for crontab activity
10. Crontab file location
1. What is crontab?
Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at
specified times. The file location varies by operating system; see Crontab file location at the end of
this document.
2. What is a cron job or cron schedule?
A cron job or cron schedule is a specific set of execution instructions specifying the day, time and
command to execute. A crontab can have multiple execution statements.
3. Crontab Restrictions
You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does
not exist, you can use
crontab if your name does not appear in the file /usr/lib/cron/cron.deny.
If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root
user can use crontab. The allow/deny files consist of one user name per line.
4. Crontab Commands
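The most commonly used crontab commands are:
crontab -e edit (or create) your crontab file
crontab -l display your crontab file
crontab -r remove your crontab file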
5. Crontab file
Crontab syntax :
A crontab file has five fields for specifying day , date and time followed by the command to be
run at that interval.
* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
* in the value field above means all legal values as in braces for that column.
The value column can have a * or a list of elements separated by commas. An element is either a
number in the ranges shown above or two numbers in the range separated by a hyphen (meaning
an inclusive range).
Notes
A.) A repeat pattern like */2 for every 2 minutes or */10 for every 10 minutes is not supported by all
operating systems. If you try to use it and crontab complains, it is probably not supported.
B.) The specification of days can be made in two fields: day of month and day of week. If both are
specified in an entry, they are cumulative, meaning both of the entries will get executed.
6. Crontab Examples
A line in crontab file like below removes the tmp files from /home/someuser/tmp each day at
6:30 PM.
30 18 * * * rm /home/someuser/tmp/*
Changing the parameter values as below will cause this command to run on a different
schedule:
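For instance (illustrative variations of the same entry):
0 18 * * * rm /home/someuser/tmp/* (every day at 6:00 PM)
30 18 * * 5 rm /home/someuser/tmp/* (every Friday at 6:30 PM)
30 18 1 * * rm /home/someuser/tmp/* (on the 1st of every month at 6:30 PM)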
Note : If you inadvertently enter the crontab command with no argument(s), do not attempt to
get out with Control-d. This removes all entries in your crontab file. Instead, exit with Control-c.
7. Crontab Environment
cron invokes the command from the user’s HOME directory with the shell, (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a
script called by the entry.
8. Disable Email
By default cron sends an email to the user account executing the cron job. If this is not needed,
put the following at the end of the cron job line:
>/dev/null 2>&1
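For example, combined with the earlier entry:
30 18 * * * rm /home/someuser/tmp/* >/dev/null 2>&1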
10. Crontab file location
Mac OS X
/usr/lib/cron/tabs/
BSD Unix
/var/cron/tabs/
Solaris, HP-UX, Debian, Ubuntu
/var/spool/cron/crontabs/
AIX, Red Hat Linux, CentOS, Fedora
/var/spool/cron/
System Resources Commands:
Syntax
df [options] [resource]
Common Options
-l local file systems only (SVR4)
-k report in kilobytes (SVR4)
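For example, to report the space used on the file system containing the current directory, in kilobytes:
% df -k .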
Syntax
du [options] [directory or file]
Common Options
-a display disk usage for each file, not just subdirectories
-s display a summary total only
-k report in kilobytes (SVR4)
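For example, to get a summary, in kilobytes, of the space used by a directory:
% du -sk /home/someuser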
Syntax
who [am i]
Examples
> who
wmtell ttyp1 Apr 21 20:15 (apple.acs.ohio-s)
fbwalk ttyp2 Apr 21 23:21 (worf.acs.ohio-st)
stwang ttyp3 Apr 21 23:22 (127.99.25.8)
Syntax
whereis [options] command(s)
Common Options
-b report binary files only
-m report manual sections only
-s report source files only
Examples
> whereis Mail
Mail: /usr/ucb/Mail /usr/lib/Mail.help /usr/lib/Mail.rc /usr/man/man1/Mail.1
> whereis -b Mail
Mail: /usr/ucb/Mail /usr/lib/Mail.help /usr/lib/Mail.rc
> whereis -m Mail
Mail: /usr/man/man1/Mail.1
Syntax
which command(s)
Examples
> which Mail
/usr/ucb/Mail
uname has additional options to print information about system hardware type and software version.
date - current date and time
date displays the current date and time. A superuser can set the date and time.
Syntax
date [options] [+format]
Common Options
-u use Universal Time (or Greenwich Mean Time)
+format specify the output format
%a weekday abbreviation, Sun to Sat
%h month abbreviation, Jan to Dec
%j day of year, 001 to 366
%n <new-line>
%t <TAB>
%y last 2 digits of year, 00 to 99
%D MM/DD/YY date
%H hour, 00 to 23
%M minute, 00 to 59
%S second, 00 to 59
%T HH:MM:SS time
Examples
> date
Mon Jun 10 09:01:05 EDT 1996
> date -u
Mon Jun 10 13:01:33 GMT 1996
> date +%a%t%D
Mon 06/10/96
> date '+%y:%j'
96:162
Terminal Control Keys
Several key combinations on your keyboard usually have a special effect on the
terminal.
These "control" (CTRL) keys are accomplished by holding the CTRL key while typing
the second key. For example, CTRL-c means to hold the CTRL key while you type the
letter "c".
The most common control keys are listed below:
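These are typical defaults; your terminal settings (see stty -a) may differ.
CTRL-c interrupt (kill) the current foreground process
CTRL-z suspend the current foreground process
CTRL-d end of input (EOF); logs you out if typed at the shell prompt
CTRL-u erase the current input line
CTRL-s stop output to the screen
CTRL-q resume output to the screen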
Knowing what is happening in "real time" on your systems is the basis for using and
optimizing your OS. The top command can help us here: it is a very useful system monitor that is
really easy to use, and it also allows us to understand why our OS suffers and which
processes use the most resources. The command to be run on the terminal is:
$ top
Let’s see now every single row of this output to explain all the information found within the
screen.
Row 1 – top
Row 2 – tasks
Row 3 – cpu
The third line indicates how the cpu is used. If you sum up all the percentages the total will be
100% of the cpu. Let’s see what these values indicate in order:
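The values shown on that line are typically:
us – user: CPU time spent running user processes
sy – system: CPU time spent running the kernel
ni – nice: CPU time spent on processes with a modified (nice) priority
id – idle: time the CPU spends doing nothing
wa – iowait: time spent waiting for I/O to complete
hi – time spent servicing hardware interrupts
si – time spent servicing software interrupts
st – steal: time taken by the hypervisor from this virtual machine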
The fourth and fifth rows respectively indicate the use of physical memory (RAM) and swap. In
this order: Total memory in use, free, buffers cached.
And as last thing ordered by CPU usage (as default) there are the processes currently in use.
Let’s see what information we can get in the different columns:
PID – the ID of the process (4522)
USER – the user that owns the process (root)
PR – priority of the process (15)
NI – the "NICE" value of the process (0)
VIRT – virtual memory used by the process (132m)
RES – physical memory used by the process (14m)
SHR – shared memory of the process (3204)
S – status of the process: S=sleep R=running Z=zombie (S)
%CPU – the percentage of CPU used by this process (0.3)
%MEM – the percentage of RAM used by the process (0.7)
TIME+ – the total CPU time used by this process (0:17.75)
COMMAND – the name of the process (bb_monitor.pl)
Conclusions
Now that we have seen in detail all the information that the top command returns, it will be
easier to understand the reason for excessive load and/or slowing of the system
Recover/Reset Root Password
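A typical sequence on RHEL/CentOS 7 style systems (a sketch of the rd.break method; details vary by distribution) leading up to the two steps below:
1 - Reboot and, at the GRUB menu, press e to edit the boot entry
2 - Append rd.break to the end of the line starting with linux16 (or linux), then press Ctrl-x to boot
3 - Remount the system root read-write: mount -o remount,rw /sysroot
4 - Switch into it: chroot /sysroot
5 - Set the new root password: passwd root
6 - If SELinux is enabled, request a relabel on the next boot: touch /.autorelabel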
7 – Exit chroot
exit
8 - Reboot your system
reboot
SIGHUP - The SIGHUP signal disconnects a process from the parent process. It can also be
used to restart processes. For example, "killall -SIGHUP compiz" will restart Compiz. This is useful
for daemons with memory leaks.
SIGINT - This signal is the same as pressing ctrl-c. On some systems, "delete" + "break" sends the
same signal to the process. The process is interrupted and stopped. However, the process can ignore
this signal.
SIGQUIT - This is like SIGINT with the ability to make the process produce a core dump.
SIGILL - When a process performs a faulty, forbidden, or unknown function, the system sends the
SIGILL signal to the process. This is the ILLegal SIGnal.
SIGTRAP - This signal is used for debugging purposes. When a process has performed an action or
a condition is met that a debugger is waiting for, this signal will be sent to the process.
SIGABRT - This kill signal is the abort signal. Typically, a process will initiate this kill signal on
itself.
SIGBUS - When a process is sent the SIGBUS signal, it is because the process caused a bus error.
Commonly, these bus errors are due to a process trying to use fake physical addresses or the process
has its memory alignment set incorrectly.
SIGFPE - Processes that divide by zero are killed using SIGFPE. Imagine if humans got the death
penalty for such math. NOTE: The author of this article was recently dragged out to the street and shot
for dividing by zero.
SIGKILL - The SIGKILL signal forces the process to stop executing immediately. The program
cannot ignore this signal. This process does not get to clean-up either.
SIGUSR1 - This indicates a user-defined condition. This signal can be set by the user by
programming the commands in sigusr1.c. This requires the programmer to know C/C++.
SIGSEGV - When an application has a segmentation violation, this signal is sent to the process.
SIGPIPE - When a process tries to write to a pipe that lacks an end connected to a reader, this
signal is sent to the process. A reader is a process that reads data at the end of a pipe.
SIGALRM - SIGALRM is sent when the real time or clock time timer expires.
SIGTERM - This signal requests a process to stop running. This signal can be ignored. The process
is given time to gracefully shutdown. When a program gracefully shuts down, that means it is given
time to save its progress and release resources. In other words, it is not forced to stop. SIGINT is
very similar to SIGTERM.
SIGCHLD - When a child process terminates or stops, the parent process is sent the SIGCHLD
signal. This lets the parent clean up resources used by the child process. In computers, a child process
is a process started by another process, known as the parent.
SIGCONT - To make processes continue executing after being paused by the SIGTSTP or
SIGSTOP signal, send the SIGCONT signal to the paused process. This is the CONTinue SIGnal.
This signal is beneficial to Unix job control (executing background tasks).
SIGSTOP - This signal makes the operating system pause a process's execution. The process cannot
ignore the signal.
SIGTSTP - This signal is like pressing ctrl-z. This makes a request to the terminal containing the
process to ask the process to stop temporarily. The process can ignore the request.
SIGTTIN - When a background process attempts to read from a tty (computer terminal), the process
receives this signal.
SIGTTOU - When a background process attempts to write to a tty (computer terminal), the process
receives this signal.
SIGURG - When a process has urgent data to be read or the data is very large, the SIGURG signal
is sent to the process.
SIGXCPU - When a process uses the CPU past the allotted time, the system sends the process this
signal. SIGXCPU acts like a warning; the process has time to save the progress (if possible) and
close before the system kills the process with SIGKILL.
SIGXFSZ - Filesystems have a limit to how large a file can be made. When a program tries to
violate this limit, the system will send that process the SIGXFSZ signal.
SIGVTALRM - SIGVTALRM is sent when CPU time used by the process elapses.
SIGPROF - SIGPROF is sent when CPU time used by the process and by the system on behalf of
the process elapses.
SIGWINCH - When a process is in a terminal that changes its size, the process receives this signal.
SIGIO - Alias to SIGPOLL or at least behaves much like SIGPOLL.
SIGPWR - Power failures will cause the system to send this signal to processes (if the system is still
on).
SIGSYS - Processes that give a system call an invalid parameter will receive this signal.
SIGRTMIN* - This is a set of signals that varies between systems. They are labeled
SIGRTMIN+1, SIGRTMIN+2, SIGRTMIN+3, ......., and so on (usually up to 15). These are user-
defined signals; they must be programmed in the Linux kernel's source code. That would require the
user to know C/C++.
SIGRTMAX* - This is a set of signals that varies between systems. They are labeled SIGRTMAX-
1, SIGRTMAX-2, SIGRTMAX-3, ......., and so on (usually up to 14). These are user-defined signals;
they must be programmed in the Linux kernel's source code. That would require the user to know
C/C++.
SIGINFO - Terminals may sometimes send status requests to processes. When this happens,
processes will also receive this signal.
SIGLOST - Processes trying to access locked files will get this signal.
SIGPOLL - When a process causes an asynchronous I/O event, that process is sent the SIGPOLL
signal.
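As a quick illustration of sending some of these signals from the shell (the PID 1234 and the process name firefox are placeholders):
$ kill -l                    # list the signal names the system supports
$ kill -SIGTERM 1234         # politely ask process 1234 to shut down
$ kill -SIGKILL 1234         # force it to stop immediately (cannot be ignored)
$ killall -SIGHUP firefox    # send SIGHUP to every process named firefox
$ kill -SIGSTOP 1234         # pause the process
$ kill -SIGCONT 1234         # resume it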
UNIX Kernel:
Technically speaking, the UNIX kernel "is" the operating system. It provides the basic full-time software
connection to the hardware. By full time, we mean that the kernel is always running while the computer
is turned on. When a system boots up, the kernel is loaded. Likewise, the kernel is only exited when the
computer is turned off.
The UNIX kernel is built specifically for a machine when it is installed. It has a record of all the pieces of
hardware it needs to talk to and knows what languages they speak (how to turn switches on and off to
get a desired result). Thus, a kernel is not easily ported to another computer. Each individual computer
will have its own tailor-made kernel. If the computer's hardware configuration changes during its life,
the kernel must be "rebuilt" (told about the new pieces of hardware).
However, though the connection between the kernel and the hardware is "hardcoded" to a specific
machine, the connection between the user and the kernel is generic. That is the beauty of the UNIX
kernel. From your perspective, regardless of how the kernel interacts with the hardware, no matter which
UNIX computer you use, you will have the same kernel interface to work with. That is because the
hardware is "hidden" by the kernel.
The kernel also handles memory management, input and output requests, and process scheduling for
time-shared operations (we'll talk more about what this means later).
To help it with its work, the kernel also executes daemon programs which stay alive as long as the
machine is turned on and help perform tasks such as printing or serving web documents.
However, the task of hiding the hardware is a pretty much full time job for the kernel. As such, it does
not have too much time to provide for a fancy user-friendly interface. Thus, though the kernel is much
easier to talk to than the hardware, the language of the kernel is still pretty cryptic.
Fortunately, the UNIX operating system has built in "shells" which wrap around the kernel and provide a
much more user-friendly interface. Let's take a look at shells.
Shells
The shell sits between you and the kernel, acting as a command interpreter. It reads your terminal input
and translates the commands into actions taken by the system. The shell is analogous to the command
interpreter (COMMAND.COM) in DOS. When you log into the system you are given a default shell. When the shell starts up it reads its
startup files and may set environment variables, command search paths, and command aliases, and
executes any commands specified in these files.
The original shell was the Bourne shell, sh. Every Linux platform will either have the Bourne shell, or a
Bourne compatible shell available. It has very good features for controlling input and output, but is not
well suited for the interactive user. To meet the latter need the C shell, csh, was written and is now found
on most, but not all, Linux systems. It uses C type syntax, the language Unix is written in, but has a more
awkward input/output implementation. It has job control, so that you can reattach a job running in the
background to the foreground. It also provides a history feature which allows you to modify and repeat
previously executed commands.
The default prompt for the Bourne shell is $ (or #, for the root user). The default prompt for C shell is %.
Numerous other shells are available from the network. Almost all of them are based on either sh or csh
with extensions to provide job control to sh, allow in-line editing of commands, page through previously
executed commands, provide command name completion and custom prompt, etc. Some of the more
well known of these may be on your favorite Linux system: the Korn shell, ksh, by David Korn and the
Bourne Again Shell, bash, from the Free Software Foundation's GNU project, both based on sh, the T-C
shell, tcsh, and the extended C shell, cshe, both based on csh. Below we will describe some of the features
of sh and csh so that you can get started.
Built-in Commands
The shells have a number of built-in, or native commands. These commands are executed directly in the
shell and don’t have to call another program to be run. These built-in commands are different for the
different shells.
sh
For the Bourne shell some of the more commonly used built-in commands are:
: null command
. source (read and execute) commands from a file
case case conditional statement
cd change the working directory (default is $HOME)
echo write a string to standard output
eval evaluate the given arguments and feed the result back to the shell
exec execute the given command, replacing the current shell
exit exit the current shell
export share the specified environment variable with subsequent shells
for for loop
if if conditional statement
pwd print the current working directory
read read a line of input from stdin
set set variables for the shell
test evaluate an expression as true or false
trap trap for a typed signal and execute commands
umask set a default file permission mask for new files
unset unset shell variables
wait wait for a specified process to terminate
while while loop
csh
For the C shell the more commonly used built-in functions are:
alias assign a name to a function
bg put a job into the background
cd change the current working directory
echo write a string to stdout
eval evaluate the given arguments and feed the result back to the shell
exec execute the given command, replacing the current shell
exit exit the current shell
fg bring a job to the foreground
foreach loop over a list of words
glob do filename expansion on the list, but no "\" escapes are honored
history print the command history of the shell
if if conditional statement
jobs list or control active jobs
kill kill the specified process
limit set limits on system resources
logout terminate the login shell
nice command lower the scheduling priority of the process, command
nohup command do not terminate command when the shell exits
set set a shell variable
setenv set an environment variable for this and subsequent shells
stop stop the specified background job
umask set a default file permission mask for new files
unalias remove the specified alias name
unset unset shell variables
while while loop
Environment Variables
Environmental variables are used to provide information to the programs you use. You can have both
global environment and local shell variables. Global environment variables are set by your login
shell and new programs and shells inherit the environment of their parent shell. Local shell variables
are used only by that shell and are not passed on to other processes. A child process cannot pass a
variable back to its parent process.
The current environment variables are displayed with the "env" or "printenv" commands. Some
common ones are HOME (your home directory), PATH (the command search path), SHELL (your login
shell), TERM (your terminal type), USER (your username), and DISPLAY (the X display to use).
Many environment variables will be set automatically when you login. You can modify them or define
others with entries in your startup files or at any time within the shell. Some variables you might want
to change are PATH and DISPLAY. The PATH variable specifies the directories to be automatically
searched for the command you specify. Examples of this are in the shell startup scripts below.
You set a global environment variable with a command similar to the following for the C shell:
% setenv NAME value
and for Bourne shell:
$ NAME=value; export NAME
You can list your global environmental variables with the env or printenv commands. You unset them
with the unsetenv (C shell) or unset (Bourne shell) commands.
To set a local shell variable use the set command with the syntax below for C shell. Without options
set displays all the local variables.
% set name=value
For the Bourne shell set the variable with the syntax:
$ name=value
The current value of the variable is accessed via the "$name", or "${name}", notation.
Each shell also includes its own programming language. Command files, called "shell scripts",
are used to accomplish a series of tasks.
Once you have created a script (here called simple), make it executable:
$ chmod +x simple
After that, you can execute the script by specifying the filename as an argument to the bash
command:
$ bash simple
You can also execute a script by just typing its name alone. However, for that method to work,
the directory containing the script must be listed in your PATH variable. When looking at
your .profile earlier in the course, you may have noticed that the PATH=$PATH:$HOME
definition was already in place. This enables you to run scripts located in your home directory
($HOME) without naming the shell explicitly. For instance, because of that pre-defined PATH
variable, the simple script can be run from the command line like this:
$ simple
(For the purposes of this course, we'll simplify things by running all scripts by their script name
only, not as an argument to the shell command.)
You can also invoke the script from your current shell by opening a background subprocess - or
subshell - where the actual command processing will occur. You won't see it running, but it will
free up your existing shell so you can continue working. This is really only necessary when
running long, processing-intensive scripts that would otherwise take over your current shell
until they complete.
To run the script you created in the background, invoke it this way:
$ simple &
When the script completes, you'll see a job-completion notification (something like "[1] + Done simple &") in the current shell.
It is important to understand that Korn shell scripts run in a somewhat different way than they
would in other shells. Specifically, variables defined in the Korn shell aren't understood outside
of the defining - or parent - shell. They must be explicitly exported from the parent shell to work
in a subsequent script or subshell. If you use the export or typeset -x commands to make the
variable known outside the parent shell, any subshell will automatically inherit the values you
defined for the parent shell.
For example, here's a script named lookmeup that does nothing more than print a line to
standard output using the myaddress (defined as 123 Anystreet USA) variable:
$ cat lookmeup
print "I live at $myaddress"
If you open a new shell (using the ksh command) from the parent shell and run the script, you
see that myaddress is undefined:
$ ksh
$ lookmeup
I live at
$ exit
If you then export the variable from the parent shell:
$ export myaddress
and then open a new shell and run the lookmeup script again, the variable is now defined:
$ ksh
$ lookmeup
I live at 123 Anystreet USA
To illustrate further how the parent shell takes processing precedence, let's change the value of
myaddress in the subshell:
$ myaddress='Houston, Texas'
$ print $myaddress
Houston, Texas
Now, if you exit the new shell and go back to the parent shell and type the same command:
$ exit
$ print $myaddress
123 Anystreet USA
you see that the original value in the parent shell was not affected by what you did in the
subshell.
A way to export variables automatically is to use the set -o allexport command. This command
cannot export variables to the parent shell from a subshell, but can export variables created in
the parent shell to all subshells created after the command is run. Likewise, it can automatically
export variables created in subshells to new subshells created after running the command. set -o
allexport is a handy command to place in your .kshrc file.
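As a rough sketch of that behavior (the variable name is just an example):
$ set -o allexport          # newly created variables are now exported automatically
$ myaddress='123 Anystreet USA'
$ ksh                       # start a subshell
$ print $myaddress          # the subshell inherited the variable
123 Anystreet USA
$ exit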
Any Korn shell script should contain this line at the very beginning:
#!/usr/bin/ksh
As you probably already know, the # sign marks anything that follows it on the line as a
comment - anything coming after it won't be interpreted or processed as part of the script. But,
when the # character is followed by a ! (commonly called "bang"), the meaning changes. The line
above specifies that the Korn shell will be (or should be) executing the script. If nothing is
specified, the system will attempt to execute the script using whatever its default shell type is,
not necessarily a Korn shell. Since the Korn shell supports some commands that other shells do
not, this can sometimes cause a problem. To be valid, this line must be on the very first line of
the script.
Shell scripts are often used to automate day-to-day tasks. For example, a system administrator
might use the following script, named diskuse here, to keep track of disk space usage:
#!/usr/bin/ksh
# diskuse
# Shows disk usage in blocks for /home
cd /var/log
cp disk.log disk.log.0
cd /home
du -sk * > /var/log/disk.log
cat /var/log/disk.log
Shown again - but this time with annotation - the script's processing steps are clear:
#!/usr/bin/ksh
# SCRIPT NAME: diskuse
# SCRIPT PURPOSE: Shows disk usage in blocks for /home
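The remaining lines of the script, annotated in the same spirit (the wording of these annotations is mine, added for clarity):
cd /var/log                   # move to the directory that holds the log files
cp disk.log disk.log.0        # keep a copy of the previous report
cd /home                      # move to the directory whose usage we measure
du -sk * > /var/log/disk.log  # record per-directory usage, in KB, to disk.log
cat /var/log/disk.log         # display the new report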
It's not a good idea to hard-code pathnames into your scripts like we did in the previous
example. We specified /var/log as the target directory several times, but what if the location of
the files changed? In a short script like this one, the impact is not great. However, some scripts
can be hundreds of lines long, creating a maintenance headache if files are moved. A way
around this is to create a variable to take the place of the full pathname, such as:
LOGDIR=/var/log
Then a line such as:
cp disk.log disk.log.0
would become:
cp ${LOGDIR}/disk.log ${LOGDIR}/disk.log.0
Then, if the location of disk.log changes in the future, you would only have to change the
variable definition to update the script. Also note that since you are defining the pathname with
the LOGDIR variable, the cd /var/log line in the script is unnecessary. Likewise, the du -sk * >
/var/log/disk.log and cat /var/log/disk.log lines can substitute ${LOGDIR} for /var/log, as in the sketch below.
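Putting those substitutions together, a revised diskuse might look like this (a sketch based on the changes just described):
#!/usr/bin/ksh
# diskuse - shows disk usage in blocks for /home
LOGDIR=/var/log                              # single place to change if the log files move
cp ${LOGDIR}/disk.log ${LOGDIR}/disk.log.0   # keep the previous report
cd /home
du -sk * > ${LOGDIR}/disk.log                # record current usage
cat ${LOGDIR}/disk.log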
Basic Shell Scripts:
Output to screen
#!/bin/bash
# Simple output script
echo "Hello World"     # example text; echo writes its arguments to the screen
Defining Tasks
#!/bin/bash
# Define small tasks
whoami
echo
pwd
echo
hostname
echo
ls -ltr
echo
Defining variables
#!/bin/bash
# Example of defining variables
a=Imran
b=Afzal
c='Linux class'
Read Input
#!/bin/bash
# Read user input
echo "Enter your first and last name:"
read a b
echo Hello $a $b
#!/bin/bash
# Script to run commands within
clear
echo "Hello `whoami`"
echo
echo "Today is `date`"
echo
echo "Number of user login: `who | wc -l `"
echo
#!/bin/bash
# This script will rename a file
echo "Enter the file to be renamed:"
read oldfilename
echo "Enter the new file name:"
read newfilename
mv $oldfilename $newfilename
echo The file has been renamed as $newfilename
for loop Scripts:
#!/bin/bash
for i in 1 2 3 4 5
do
echo "Welcome $i times"
done
#!/bin/bash
for i in {1..5}
do
touch $i
done
#!/bin/bash
for i in {1..5}
do
rm $i
done
#!/bin/bash
i=1
for day in Mon Tue Wed Thu Fri
do
echo "Weekday $((i++)) : $day"
done
#!/bin/bash
i=1
for username in `awk -F: '{print $1}' /etc/passwd`
do
echo "Username $((i++)) : $username"
done
while loop Scripts
#!/bin/bash
c=1
while [ $c -le 5 ]
do
echo "Welcone $c times"
(( c++ ))
done
#!/bin/bash
count=0
num=10
while [ $count -lt 10 ]
do
echo
echo $num seconds left to stop this process $1
echo
sleep 1
num=`expr $num - 1`
count=`expr $count + 1`
done
echo
echo $1 process is stopped!!!
echo
If-then Scripts:
#!/bin/bash
count=100
if [ $count -eq 100 ]
then
echo Count is 100
else
echo Count is not 100
fi
#!/bin/bash
clear
if [ -e /home/iafzal/error.txt ]
then
echo "File exist"
else
echo "File does not exist"
fi
#!/bin/bash
a=$(date +%a)    # set a to the day-of-week abbreviation, e.g. Mon (assumed; needed for the test below)
if [ "$a" == Mon ]
then
echo Today is $a
else
echo Today is not Monday
fi
Check the response and then output
#!/bin/bash
clear
echo
echo "What is your name?"
echo
read a
echo
echo "Hello $a, do you like IT? (y/n)"    # this prompt and the next read are assumed; $Like must be set for the test below
read Like
echo
if [ "$Like" == y ]
then
echo You are cool
elif [ "$Like" == n ]
then
echo You should try IT, it's a good field
echo
fi
Other If statements
Test if the error.txt file exists and its size is greater than zero
if test -s error.txt
Comparisons:
-eq equal to (numbers)
== equal to (strings)
-ne not equal to (numbers)
!= not equal to (strings)
-lt less than
-le less than or equal to
-gt greater than
-ge greater than or equal to
File Operations (see the example script after this list):
-s file exists and is not empty
-f file exists and is not a directory
-d directory exists
-x file is executable
-w file is writable
-r file is readable
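Here is a small sketch combining a few of the file operators above (the filename error.txt follows the earlier example):
#!/bin/bash
# Check error.txt before processing it
if [ -f error.txt ] && [ -s error.txt ]
then
echo "error.txt exists and is not empty"
elif [ -f error.txt ]
then
echo "error.txt exists but is empty"
else
echo "error.txt does not exist"
fi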
case Scripts:
#!/bin/bash
echo
echo Please choose one of the options below
echo
echo 'a = Display Date and Time'
echo 'b = List file and directories'
echo 'c = List users logged in'
echo 'd = Check System uptime'
echo
read choices
case $choices in
a) date;;
b) ls;;
c) who;;
d) uptime;;
*) echo Invalid choice - Bye.
esac
This script will look at your current day and tell you the state of the
backup
#!/bin/bash
NOW=$(date +"%a")
case $NOW in
Mon)
echo "Full backup";;
Tue|Wed|Thu|Fri)
echo "Partial backup";;
Sat|Sun)
echo "No backup";;
*) ;;
esac
Aliases
The alias command allows you to define new commands. It is useful for creating shortcuts
for longer commands. The syntax is:
alias alias-name=executed_command
Some examples:
alias m=more
alias rm="rm -i"
alias h="history -r | more"
A system can be booted into (i.e., started up into) any of several runlevels, each of which is
represented by a single digit integer. Each runlevel designates a different system configuration
and allows access to a different combination of processes (i.e., instances of executing programs).
There are differences in the runlevels according to the operating system. Seven runlevels are
supported in the standard Linux kernel (i.e., core of the operating system). They are:
0 - Halt (shut the system down)
1 - Single-user mode (for administrative and rescue tasks)
2 - Multiple users, without networked services such as NFS (on some distributions, user-definable)
3 - Multiple users, command line (i.e., all-text mode) interface; the standard runlevel for most
Linux-based server hardware.
4 - User-definable
5 - Multiple users, GUI (graphical user interface); the standard runlevel for most Linux-based
desktop systems.
6 - Reboot
By default Linux boots either to runlevel 3 or to runlevel 5. The former permits the system to
run all services except for a GUI. The latter allows all services including a GUI.
In addition to the standard runlevels, users can modify the preset runlevels or even create new
ones if desired. Runlevels 2 and 4 are usually used for user defined runlevels.
The program responsible for altering the runlevel is init, and it can be called using the telinit
command. For example, changing from runlevel 3 to runlevel 5, which allows the GUI to be
started, can be accomplished by the root (i.e., administrative) user by issuing the following
command:
telinit 5
Booting into a different runlevel can help solve certain problems. For example, if a change made
in the X Window System configuration on a machine that has been set up to boot into a GUI has
rendered the system unusable, it is possible to temporarily boot into a console (i.e., all-text
mode) runlevel (i.e., runlevels 3 or 1) in order to repair the error and then reboot into the GUI.
The X Window System is a widely used system for managing GUIs on single computers and on
networks of computers.
Likewise, if a machine will not boot due to a damaged configuration file or will not allow
logging in because of a corrupted /etc/passwd file (which stores user names and other data
about users) or because of a forgotten password, the problem can be solved by first booting into
single-user mode (i.e. runlevel 1).
The runlevel command can be used to find both the current runlevel and the previous runlevel
by merely typing the following and pressing the Enter key:
/sbin/runlevel
The runlevel executable file (i.e., the ready-to-run form of the program) is typically located in
the /sbin directory, which contains mostly administrative tools and which by default is not in
the user's PATH (i.e., the list of directories in which the system searches for programs). Thus, it
is usually necessary to type the full path of the command as shown above rather than just the
name of the command itself.
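For example (illustrative output; the second field is the current runlevel, and N means there was no previous runlevel):
$ /sbin/runlevel
N 3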
The default runlevel for a system is specified in the /etc/inittab file, which will contain an entry
such as id:3:initdefault: if the system starts in runlevel 3, or id:5:initdefault: if it starts in runlevel
5. This file can be easily (and safely) read with a command such as cat, i.e.,
cat /etc/inittab
As an alternative to telinit, the runlevel into which the system boots can be changed by
modifying /etc/inittab manually with a text editor. However, it is generally easier and safer (i.e.,
less chance of accidental damage to the file) to use telinit. It is always wise to make a backup
copy of /etc/inittab or any other configuration file before attempting to modify it manually.
Partitioning a Disk
Linux
# fdisk /dev/emcpowerp OR fdisk /dev/sdb
m (help), n (new partition), p (primary), 1 (partition number), Enter, Enter (accept the default first and last sectors), w (write the changes)
e.g:
# mkdir /rocket
# cd /rocket
# mkdir IFMX_ROCKET
# mkdir ROCKET_DATA
Add these entries to /etc/fstab file so the system can mount on boot up
# cp /etc/fstab /etc/fstab.bak
Verify = df -h
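After creating the partition, a typical sequence to put a filesystem on it and mount it looks like this (a sketch; the partition name /dev/sdb1 and the ext4 filesystem type are assumptions):
# mkfs.ext4 /dev/sdb1                                            # create the filesystem
# mount /dev/sdb1 /rocket                                        # mount it on the new directory
# echo "/dev/sdb1  /rocket  ext4  defaults  0 0" >> /etc/fstab   # mount it automatically at boot
# df -h                                                          # verify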
To extend filesystem of a Linux VM using LVM
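A typical sequence for growing an LVM-backed filesystem might look like this (a sketch; the new disk /dev/sdc, the volume group rhel, and the logical volume root are assumptions, and the last command depends on the filesystem type):
# pvcreate /dev/sdc                      # initialize the new disk for LVM
# vgextend rhel /dev/sdc                 # add it to the volume group
# lvextend -l +100%FREE /dev/rhel/root   # grow the logical volume
# xfs_growfs /                           # grow an XFS filesystem (use resize2fs for ext4)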
A common rule of thumb for sizing swap (M = amount of RAM in GB, S = swap size in GB):
If M < 2
then S = M * 2
Else S = M + 2
Commands
dd
mkswap
swapon or swapoff
The following dd command example creates a swap file with the name “newswap” under /
directory with a size of 1024MB (1.0GB).
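The commands matching that description would look something like this (a sketch):
# dd if=/dev/zero of=/newswap bs=1M count=1024    # create a 1024 MB file of zeros
# mkswap /newswap                                 # format it as swap space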
Change the permission of the swap file so that only root can access it.
# chmod go-r /newswap OR
# chmod 0600 /newswap
To make this swap file available as a swap area even after the reboot, add the following line to
the /etc/fstab file.
# cat /etc/fstab
/newswap swap swap defaults 0 0
Verify whether the newly created swap area is available for your use.
# swapon -s
# free -h
If you don't want to reboot to verify whether the system takes all the swap space mentioned in
/etc/fstab, you can do the following, which will disable and then re-enable all the swap areas
mentioned in /etc/fstab:
# swapoff -a
# swapon -a
Overview of systemd for RHEL 7
The systemd system and service manager is responsible for controlling how services are started,
stopped and otherwise managed on Red Hat Enterprise Linux 7 systems. By offering on-demand
service start-up and better transactional dependency controls, systemd dramatically reduces start
up times. As a systemd user, you can prioritize critical services over less important services.
Although the systemd process replaces the init process (quite literally, /sbin/init is now a
symbolic link to /usr/lib/systemd/systemd) for starting services at boot time and changing
runlevels, systemd provides much more control than the init process does while still supporting
existing init scripts. Here are some examples of the features of systemd:
Logging: From the moment that the initial RAM disk is mounted to start the Linux kernel
to final shutdown of the system, all log messages are stored by the new systemd journal.
Before the systemd journal existed, initial boot messages were lost, requiring that you try
to watch the screen as messages scrolled by to debug boot problems.
Now, all system messages come in on a single stream and are stored in the /run directory.
Messages can then be consumed by the rsyslog facility (and redirected to traditional log
files in the /var/log directory or to remote log servers) or displayed using the journalctl
command across a variety of attributes.
Dependencies: With systemd, an explicit set of dependencies can be defined for each
service, instead of being implied by boot order. This allows a service to start at any point
that its dependencies are met. In this way, many services can start at the same time,
making the boot process faster. Likewise, complex sets of dependencies can be set up, so
the exact requirements of a service (such as storage availability or file system checking)
can be met before a service starts.
Cgroups: Services are identified by Cgroups, which allow every component of a service
to be managed. For example, the older System V init scripts would start a service by
launching a process which itself might start other child processes. When the service was
killed, it was hoped that the parent process would do the right thing and kill its children.
By using Cgroups, all components of a service have a tag that can be used to make sure
that all of those components are properly started or stopped.
Activating services: Services don't just have to be always running or not running based
on runlevel, as they were previous to systemd. Services can now be activated based on
path, socket, bus, timer, or hardware activation. Likewise, because systemd can set up
sockets, if a process handling communications goes away, the process that starts up in its
place can pick up the next message from the socket. To the clients using the service, it
can look as though the service continued without interruption.
More than services: Instead of just managing services, systemd can manage several
different unit types. These unit types include:
o Devices: Create and use devices.
o Mounts and automounts: Mount file systems upon request or automount a file
system based on a request for a file or directory within that file system.
o Paths: Check the existence of files or directories or create them as needed.
o Services: Start a service, which often means launching a service daemon and
related components.
o Slices: Divide up computer resources (such as CPU and memory) and apply them
to selected units.
o Snapshots: Take snapshots of the current state of the system.
o Sockets: Set up sockets to allow communication paths to processes that can
remain in place, even if the underlying process needs to restart.
o Swaps: Create and use swap files or swap partitions.
o Targets: Manage a set of services under a single unit, represented by a target
name rather than a runlevel number.
o Timers: Trigger actions based on a timer.
Resource management
o The fact that each systemd unit is always associated with its own cgroup lets you
control the amount of resources each service can use. For example, you can set a
percent of CPU usage by service which can put a cap on the total amount of CPU
that service can use -- in other words, spinning off more processes won't allow
more resources to be consumed by the service. Prior to systemd, nice levels were
often used to prevent processes from hogging precious CPU time. With systemd's
use of cgroups, precise limits can be set on CPU and memory usage, as well as
other resources.
o A feature called slices lets you slice up many different types of system resources
and assign them to users, services, virtual machines, and other units. Accounting
is also done on these resources, which can allow you to charge customers for their
resource usage.
Although there is not a strict order in which services are started when a RHEL 7 (systemd)
system is booted, there is a structure to the boot process. The direction that the systemd process
takes at boot time depends on the default.target file. A long listing of the default.target file
shows you which target starts when the system boots:
# cd /etc/systemd/system
# ls -l default.target
lrwxrwxrwx. 1 root root 16 Aug 23 19:18 default.target -> /lib/systemd/system/graphical.target
You can see here that the graphical.target (common for desktop systems or servers with
graphical interfaces) is set as the default.target (via a symbolic link). To understand what
targets, services and other units start up with the graphical target, it helps to work backwards, as
systemd does, to build the dependency tree. Here's what to look for:
graphical.target: The /lib/systemd/system/graphical.target file includes these lines:
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
This tells systemd to start everything in the multi-user.target before starting the graphical
target. Once that's done, the "Wants" entry tells systemd to start the display-manager.service
service (/etc/systemd/system/display-manager.service), which runs the GNOME display
manager (/usr/sbin/gdm).
multi-user.target: The /lib/systemd/system/multi-user.target file includes the line
Requires=basic.target
The other units it pulls in can be seen in its "wants" directory:
# cd /etc/systemd/system/multi-user.target.wants
# ls
abrt-ccpp.service hypervkvpd.service postfix.service
abrtd.service hypervvssd.service remote-fs.target
abrt-oops.service irqbalance.service rhsmcertd.service
abrt-vmcore.service ksm.service rngd.service
abrt-xorg.service ksmtuned.service rpcbind.service
atd.service libstoragemgmt.service rsyslog.service
auditd.service libvirtd.service smartd.service
avahi-daemon.service mdmonitor.service sshd.service
chronyd.service ModemManager.service sysstat.service
crond.service netcf-transaction.service tuned.service
cups.path nfs.target vmtoolsd.service
basic.target and sysinit.target: The basic.target (which multi-user.target requires) in turn contains
Requires=sysinit.target
and the sysinit.target contains
Wants=local-fs.target swap.target
Besides mounting file systems and enabling swap devices, the sysinit.target starts targets,
services, and mounts based on units contained in the
/usr/lib/systemd/system/sysinit.target.wants directory. These units enable logging, set
kernel options, start the udevd daemon to detect hardware, and allow file system
decryption, among other things. The /etc/systemd/system/sysinit.target.wants directory
contains services that start iSCSI, multipath, LVM monitoring and RAID services.
local-fs.target: The local-fs.target is set to run after the local-fs-pre.target target, based
on this line:
After=local-fs-pre.target
There are no services associated with the local-fs-pre.target target (you could add some to
a "wants" directory if you like). However, units in the /usr/lib/systemd/system/local-
fs.target.wants directory import the network configuration from the initramfs, run a file
system check (fsck) on the root file system when necessary, and remount the root file
system (and special kernel file systems) based on the contents of the /etc/fstab file.
Although the boot process is built by systemd in the order just shown, it actually runs, in general,
in the opposite order. As a rule, a target on which another target is dependent must be running
before the units in the first target can start. To see more details about the boot process, see the
bootup man page (man 7 bootup).
Checking service status: To check the status of a service (for example, nfs-server.service), type the following:
# systemctl status nfs-server.service
nfs-server.service - NFS Server
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
Active: active (exited) since Wed 2014-03-19 10:29:40 MDT; 57s ago
Process: 5206 ExecStartPost=/usr/libexec/nfs-utils/scripts/nfs-server.postconfig (code=exited, status=0/SUCCESS)
Process: 5191 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS $RPCNFSDCOUNT (code=exited, status=0/SUCCESS)
Process: 5188 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 5187 ExecStartPre=/usr/libexec/nfs-utils/scripts/nfs-server.preconfig (code=exited, status=0/SUCCESS)
Main PID: 5191 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
Mar 19 10:29:40 localhost.localdomain systemd[1]: Starting NFS Server...
Mar 19 10:29:40 localhost.localdomain systemd[1]: Started NFS Server.
Stopping a service: To stop a service, use the stop option as follows:
# systemctl stop nfs-server.service
Starting a service: To start a service, use the start option as follows:
# systemctl start nfs-server.service
Enabling a service: To enable a service so it starts automatically at boot time, type the
following:
# systemctl enable nfs-server.service
Disable a service: To disable a service so it doesn't start automatically at boot time, type
the following:
# systemctl disable nfs-server.service
Listing dependencies: To see dependencies of a service, use the list-dependencies
option, as follows:
# systemctl list-dependencies nfs-server.service
nfs-server.service
├─nfs-idmap.service
├─nfs-mountd.service
├─nfs-rquotad.service
├─proc-fs-nfsd.mount
├─rpcbind.service
├─system.slice
├─var-lib-nfs-rpc_pipefs.mount
└─basic.target
├─alsa-restore.service
├─alsa-state.service
...
Listing units in targets: To see what services and other units (service, mount, path,
socket, and so on) are associated with a particular target, type the following:
# systemctl list-dependencies multi-user.target
multi-user.target
├─abrt-ccpp.service
├─abrt-oops.service
├─abrt-vmcore.service
├─abrt-xorg.service
├─abrtd.service
├─atd.service
├─auditd.service
├─avahi-daemon.service
├─brandbot.path
├─chronyd.service
├─crond.service
...
List specific types of units: Use the following command to list specific types of units (in
these examples, service and mount unit types):
# systemctl list-units --type service
UNIT LOAD ACTIVE SUB DESCRIPTION
abrt-ccpp.service loaded active exited Install ABRT coredump hook
abrt-oops.service loaded active running ABRT kernel log watcher
abrt-xorg.service loaded active running ABRT Xorg log watcher
abrtd.service loaded active running ABRT Automated Bug Reporting
accounts-daemon.service loaded active running Accounts Service
...
# systemctl list-units --type mount
UNIT LOAD ACTIVE SUB DESCRIPTION
-.mount loaded active mounted /
boot.mount loaded active mounted /boot
dev-hugepages.mount loaded active mounted Huge Pages File System
dev-mqueue.mount loaded active mounted POSIX Message Queue File Syst
mnt-repo.mount loaded active mounted /mnt/repo
proc-fs-nfsd.mount loaded active mounted RPC Pipe File System
run-user-1000-gvfs.mount loaded active mounted /run/user/1000/gvfs
...
Listing all units: To list all units installed on the system, along with their current states,
type the following:
# systemctl list-unit-files
UNIT FILE STATE
proc-sys-fs-binfmt_misc.automount static
dev-hugepages.mount static
dev-mqueue.mount static
proc-sys-fs-binfmt_misc.mount static
...
arp-ethers.service disabled
atd.service enabled
auditd.service enabled
...
View service processes with systemd-cgtop: To view processes associated with a
particular service (cgroup), you can use the systemd-cgtop command. Like the top
command (which sorts processes by such things as CPU and memory usage), systemd-
cgtop lists running processes based on their service (cgroup label). Once systemd-cgtop
is running, you can press keys to sort by memory (m), CPU (c), task (t), path (p), or I/O
load (i). Here is an example:
# systemd-cgtop
Recursively view cgroup contents: To output a recursive list of cgroup content, use the
systemd-cgls command:
# systemd-cgls
├─user.slice
│ ├─user-1000.slice
│ │ ├─session-5.scope
│ │ │ ├─2661 gdm-session-worker [pam/gdm-password]
│ │ │ ├─2672 /usr/bin/gnome-keyring-daemon --daemonize --login
│ │ │ ├─2674 gnome-session --session gnome-classic
│ │ │ ├─2682 dbus-launch --sh-syntax --exit-with-session
│ │ │ ├─2683 /bin/dbus-daemon --fork --print-pid 4 --print-address 6 --session
│ │ │ ├─2748 /usr/libexec/gvfsd
...
View journal (log) files: Using the journalctl command you can view messages from
the systemd journal. Using different options you can select which group of messages to
display. The journalctl command also supports tab completion to fill in fields for which
to search. Here are some examples:
# journalctl -h View help for the command
# journalctl -k View kernel messages from current boot
# journalctl -f Follow journal messages (like tail -f)
# journalctl -u NetworkManager View messages for a specific unit (supports tab completion)
Here are some details of how systemd compares to pre-RHEL 7 init and related commands:
System startup: The systemd process is the first process ID (PID 1) to run on RHEL 7
system. It initializes the system and launches all the services that were once started by the
traditional init process.
Managing system services: For RHEL 7, the systemctl command replaces service and
chkconfig. Prior to RHEL 7, once RHEL was up and running, the service command was
used to start and stop services immediately. The chkconfig command was used to
identify at which run levels a service would start or stop automatically.
Although you can still use the service and chkconfig commands to start/stop and
enable/disable services, respectively, they are not 100% compatible with the RHEL 7
systemctl command. For example, non-standard service options, such as those that start
databases or check configuration files, may not be supported in the same way for RHEL 7
services.
Changing runlevels: Prior to RHEL 7, runlevels were used to identify a set of services
that would start or stop when that runlevel was requested. Instead of runlevels, systemd
uses the concept of targets to group together sets of services that are started or stopped. A
target can also include other targets (for example, the multi-user target includes an nfs
target).
There are systemd targets that align with the earlier runlevels. However the point of
targets is not to necessarily imply a level of activity (for example, runlevel 3 implied
more services were active than runlevel 1). Instead targets just represent a group of
services, so it's appropriate that there are many more targets available than there are
runlevels. The following list shows how systemd targets align with traditional runlevels:
Traditional runlevel New target name Symbolically linked to...
Runlevel 0 | runlevel0.target -> poweroff.target
Runlevel 1 | runlevel1.target -> rescue.target
Runlevel 2 | runlevel2.target -> multi-user.target
Runlevel 3 | runlevel3.target -> multi-user.target
Runlevel 4 | runlevel4.target -> multi-user.target
Runlevel 5 | runlevel5.target -> graphical.target
Runlevel 6 | runlevel6.target -> reboot.target
Default runlevel: The default runlevel (previously set in the /etc/inittab file) is now
replaced by a default target. The location of the default target is
/etc/systemd/system/default.target, which by default is linked to the multi-user target.
Location of services: Before systemd, services were stored as scripts in the /etc/init.d
directory, then linked to different runlevel directories (such as /etc/rc3.d, /etc/rc5.d, and
so on). Services with systemd are named something.service, such as firewalld.service,
and are stored in /lib/systemd/system and /etc/systemd/system directories. Think of the
/lib files as being more permanent and the /etc files as the place you can modify
configurations as needed.
When you enable a service in RHEL 7, the service file is linked to a file in the
/etc/systemd/system/multi-user.target.wants directory. For example, if you run
systemctl enable fcoe.service a symbolic link is created from
/etc/systemd/system/multi-user.target.wants/fcoe.service that points to
/lib/systemd/system/fcoe.service to cause the fcoe.service to start at boot time.
Also, the older System V init scripts were actual shell scripts. The systemd files tasked to
do the same job are more like .ini files that contain the information needed to launch a
service.
Configuration files: The /etc/inittab file was used by the init process in RHEL 6 and
earlier to point to the initialization files (such as /etc/rc.sysinit) and runlevel service
directories (such as /etc/rc5.d) needed to start up the system. Changes to those services
were done in files (usually named after the service) in the /etc/sysconfig directory. For
systemd in RHEL 7, there are still files in /etc/sysconfig used to modify how services
behave. However, services can be modified by adding files to the /etc/systemd directory
to override the permanent service files in the /lib/systemd directories.
Transitioning to systemd
If you are used to using the init process and System V init scripts prior to RHEL 7, there are a
few things you should know about transitioning to systemd:
Using RHEL 6 commands: For the time being, you can use commands such as service,
chkconfig, runlevel, and init as you did in RHEL 6. They will cause appropriate systemd
commands to run, with similar, if not exactly the same, results. Here are some examples:
# service cups restart
Redirecting to /bin/systemctl restart cups.service
# chkconfig cups on
Note: Forwarding request to 'systemctl enable cups.service'.
System V init Scripts: Although not encouraged, System V init scripts are still
supported. There are still some services in RHEL 7 that are implemented in System V init
scripts. To see System V init scripts that are available on your system and the runlevels
on which they start, use the chkconfig command as follows:
# chkconfig --list
...
iprdump 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprinit 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iprupdate 0:off 1:off 2:on 3:on 4:on 5:on 6:off
netconsole 0:off 1:off 2:off 3:off 4:off 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
rhnsd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
...
Using chkconfig, however, will not show you the whole list of services on your system. To see
the systemd-specific services, run the systemctl list-unit-files command, as described earlier.
Customizing motd
You can have the MOTD (message of the day) display messages that may be unique to the machine. One way
to do this is to create a script that runs when a user logs on to the system.
First, create a script in /etc/profile.d = touch /etc/profile.d/motd.sh
Make it executable = chmod a+x /etc/profile.d/motd.sh (make sure the file has the .sh extension)
#!/bin/bash
#
echo -e "
##################################
#
# Welcome to `hostname`
# This system is running `cat /etc/redhat-release`
# kernel is `uname -r`
#
# You are logged in as `whoami`
#
##################################
"
Then, so that sshd does not also print the default /etc/motd, set the following in /etc/ssh/sshd_config (and restart sshd):
PrintMotd no
After logging in again, you will see a banner similar to this:
#####################################
#
# Welcome to MyFirstLinuxVM
# This system is running CentOS Linux release 7.5.1804 (Core)
# kernel is 3.10.0-862.el7.x86_64
#
# You are logged in as iafzal
#
#####################################
Steps for NFS Server Configuration
• Once the packages are installed, enable and start NFS services
# mkdir /mypretzels
# exportfs -rv
• Once the packages are installed enable and start rpcbind service
# mkdir /mnt/kramer
# df -h
• To unmount
# umount /mnt/kramer
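The steps above are abbreviated; a fuller sequence might look like this (a sketch; the package name nfs-utils, the export options, and the server hostname nfs-server are assumptions about a typical RHEL 7 setup):
On the server:
# yum install nfs-utils
# systemctl enable nfs-server && systemctl start nfs-server
# mkdir /mypretzels
# echo "/mypretzels *(rw,sync,no_root_squash)" >> /etc/exports
# exportfs -rv
On the client:
# yum install nfs-utils
# systemctl enable rpcbind && systemctl start rpcbind
# mkdir /mnt/kramer
# mount nfs-server:/mypretzels /mnt/kramer
# df -h
# umount /mnt/kramer        (to unmount)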
Red Hat Enterprise Linux 7 Storage Administration Guide
Edited by Marek Suchánek and Apurva Bhide, Red Hat Customer Content Services, with contributions
from Red Hat subject-matter experts covering disk quotas, access control lists, partitions, file systems,
VDO, RAID, the /proc file system, GFS2, LVM/LVM2, online storage, NFS, FS-Cache, storage
configuration during installation, solid-state disks, ext3/ext4/XFS and encrypted file systems, and the
I/O stack and limits.
Copyright © 2018 Red Hat, Inc. This document is licensed by Red Hat under the Creative Commons
Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of
it, you must provide attribution to Red Hat, Inc. and provide a link to the original; if the document is
modified, all Red Hat trademarks must be removed. Red Hat, Red Hat Enterprise Linux, the Shadowman
logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc. Linux® is
the registered trademark of Linus Torvalds. All other trademarks are the property of their respective
owners.
Abstract
This guide provides instructions on how to effectively manage storage devices and file systems on
Red Hat Enterprise Linux 7. It is intended for use by system administrators with basic to
intermediate knowledge of Red Hat Enterprise Linux or Fedora.
Table of Contents
CHAPTER 1. OVERVIEW
1.1. NEW FEATURES AND ENHANCEMENTS IN RED HAT ENTERPRISE LINUX 7
PART I. FILE SYSTEMS
CHAPTER 2. FILE SYSTEM STRUCTURE AND MAINTENANCE
2.1. OVERVIEW OF FILESYSTEM HIERARCHY STANDARD (FHS)
2.2. SPECIAL RED HAT ENTERPRISE LINUX FILE LOCATIONS
2.3. THE /PROC VIRTUAL FILE SYSTEM
2.4. DISCARD UNUSED BLOCKS
CHAPTER 3. THE XFS FILE SYSTEM
3.1. CREATING AN XFS FILE SYSTEM
3.2. MOUNTING AN XFS FILE SYSTEM
3.3. XFS QUOTA MANAGEMENT
3.4. INCREASING THE SIZE OF AN XFS FILE SYSTEM
3.5. REPAIRING AN XFS FILE SYSTEM
3.6. SUSPENDING AN XFS FILE SYSTEM
3.7. BACKING UP AND RESTORING XFS FILE SYSTEMS
3.8. CONFIGURING ERROR BEHAVIOR
3.9. OTHER XFS FILE SYSTEM UTILITIES
3.10. MIGRATING FROM EXT4 TO XFS
CHAPTER 4. THE EXT3 FILE SYSTEM
4.1. CREATING AN EXT3 FILE SYSTEM
4.2. CONVERTING TO AN EXT3 FILE SYSTEM
4.3. REVERTING TO AN EXT2 FILE SYSTEM
CHAPTER 5. THE EXT4 FILE SYSTEM
5.1. CREATING AN EXT4 FILE SYSTEM
5.2. MOUNTING AN EXT4 FILE SYSTEM
5.3. RESIZING AN EXT4 FILE SYSTEM
5.4. BACKING UP EXT2, EXT3, OR EXT4 FILE SYSTEMS
5.5. RESTORING EXT2, EXT3, OR EXT4 FILE SYSTEMS
5.6. OTHER EXT4 FILE SYSTEM UTILITIES
CHAPTER 6. BTRFS (TECHNOLOGY PREVIEW)
6.1. CREATING A BTRFS FILE SYSTEM
6.2. MOUNTING A BTRFS FILE SYSTEM
6.3. RESIZING A BTRFS FILE SYSTEM
6.4. INTEGRATED VOLUME MANAGEMENT OF MULTIPLE DEVICES
6.5. SSD OPTIMIZATION
6.6. BTRFS REFERENCES
CHAPTER 7. GLOBAL FILE SYSTEM 2
CHAPTER 8. NETWORK FILE SYSTEM (NFS)
8.1. INTRODUCTION TO NFS
8.2. PNFS
8.3. CONFIGURING NFS CLIENT
8.4. AUTOFS
8.5. COMMON NFS MOUNT OPTIONS
8.6. STARTING AND STOPPING THE NFS SERVER
8.7. CONFIGURING THE NFS SERVER
CHAPTER 9. SERVER MESSAGE BLOCK (SMB)
9.1. PROVIDING SMB SHARES
9.2. MOUNTING AN SMB SHARE
CHAPTER 10. FS-CACHE
10.1. PERFORMANCE GUARANTEE
10.2. SETTING UP A CACHE
10.3. USING THE CACHE WITH NFS
10.4. SETTING CACHE CULL LIMITS
10.5. STATISTICAL INFORMATION
10.6. FS-CACHE REFERENCES
PART II. STORAGE ADMINISTRATION
CHAPTER 11. STORAGE CONSIDERATIONS DURING INSTALLATION
11.1. SPECIAL CONSIDERATIONS
CHAPTER 12. FILE SYSTEM CHECK
12.1. BEST PRACTICES FOR FSCK
12.2. FILE SYSTEM-SPECIFIC INFORMATION FOR FSCK
CHAPTER 13. PARTITIONS
Manipulating Partitions on Devices in Use
Modifying the Partition Table
13.1. VIEWING THE PARTITION TABLE
13.2. CREATING A PARTITION
13.3. REMOVING A PARTITION
13.4. SETTING A PARTITION TYPE
13.5. RESIZING A PARTITION WITH FDISK
CHAPTER 14. CREATING AND MAINTAINING SNAPSHOTS WITH SNAPPER
14.1. CREATING INITIAL SNAPPER CONFIGURATION
14.2. CREATING A SNAPPER SNAPSHOT
14.3. TRACKING CHANGES BETWEEN SNAPPER SNAPSHOTS
14.4. REVERSING CHANGES IN BETWEEN SNAPSHOTS
14.5. DELETING A SNAPPER SNAPSHOT
CHAPTER 15. SWAP SPACE
15.1. ADDING SWAP SPACE
15.2. REMOVING SWAP SPACE
15.3. MOVING SWAP SPACE
CHAPTER 16. SYSTEM STORAGE MANAGER (SSM)
16.1. SSM BACK ENDS
16.2. COMMON SSM TASKS
16.3. SSM RESOURCES
CHAPTER 17. DISK QUOTAS
17.1. CONFIGURING DISK QUOTAS
17.2. MANAGING DISK QUOTAS
17.3. DISK QUOTA REFERENCES
CHAPTER 18. REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID)
18.1. RAID TYPES
18.2. RAID LEVELS AND LINEAR SUPPORT
18.3. LINUX RAID SUBSYSTEMS
18.4. RAID SUPPORT IN THE ANACONDA INSTALLER
18.5. CONVERTING ROOT DISK TO RAID1 AFTER INSTALLATION
18.6. CONFIGURING RAID SETS
18.7. CREATING ADVANCED RAID DEVICES
CHAPTER 19. USING THE MOUNT COMMAND
19.1. LISTING CURRENTLY MOUNTED FILE SYSTEMS
19.2. MOUNTING A FILE SYSTEM
19.3. UNMOUNTING A FILE SYSTEM
19.4. MOUNT COMMAND REFERENCES
CHAPTER 20. THE VOLUME_KEY FUNCTION
20.1. VOLUME_KEY COMMANDS
20.2. USING VOLUME_KEY AS AN INDIVIDUAL USER
20.3. USING VOLUME_KEY IN A LARGER ORGANIZATION
20.4. VOLUME_KEY REFERENCES
CHAPTER 21. SOLID-STATE DISK DEPLOYMENT GUIDELINES
Deployment Considerations
Performance Tuning Considerations
CHAPTER 22. WRITE BARRIERS
22.1. IMPORTANCE OF WRITE BARRIERS
22.2. ENABLING AND DISABLING WRITE BARRIERS
22.3. WRITE BARRIER CONSIDERATIONS
CHAPTER 23. STORAGE I/O ALIGNMENT AND SIZE
23.1. PARAMETERS FOR STORAGE ACCESS
23.2. USERSPACE ACCESS
23.3. I/O STANDARDS
23.4. STACKING I/O PARAMETERS
23.5. LOGICAL VOLUME MANAGER
23.6. PARTITION AND FILE SYSTEM TOOLS
CHAPTER 24. SETTING UP A REMOTE DISKLESS SYSTEM
24.1. CONFIGURING A TFTP SERVICE FOR DISKLESS CLIENTS
24.2. CONFIGURING DHCP FOR DISKLESS CLIENTS
24.3. CONFIGURING AN EXPORTED FILE SYSTEM FOR DISKLESS CLIENTS
CHAPTER 25. ONLINE STORAGE MANAGEMENT
25.1. TARGET SETUP
25.2. CREATING AN ISCSI INITIATOR
25.3. FIBRE CHANNEL
25.4. CONFIGURING A FIBRE CHANNEL OVER ETHERNET INTERFACE
25.5. CONFIGURING AN FCOE INTERFACE TO AUTOMATICALLY MOUNT AT BOOT 209
25.6. ISCSI 211
25.7. PERSISTENT NAMING 212
25.8. REMOVING A STORAGE DEVICE 216
25.9. REMOVING A PATH TO A STORAGE DEVICE 218
25.10. ADDING A STORAGE DEVICE OR PATH 218
25.11. SCANNING STORAGE INTERCONNECTS 220
3
Storage Administration Guide
. . . . . . . . . .26.
CHAPTER . . .DEVICE
. . . . . . .MAPPER
. . . . . . . . MULTIPATHING
. . . . . . . . . . . . . . AND
. . . . .VIRTUAL
. . . . . . . . STORAGE
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
............
26.1. VIRTUAL STORAGE 240
26.2. DM-MULTIPATH 240
.CHAPTER
. . . . . . . . .27.
. . .EXTERNAL
. . . . . . . . . .ARRAY
. . . . . . .MANAGEMENT
. . . . . . . . . . . . . .(LIBSTORAGEMGMT)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
............
27.1. INTRODUCTION TO LIBSTORAGEMGMT 242
27.2. LIBSTORAGEMGMT TERMINOLOGY 243
27.3. INSTALLING LIBSTORAGEMGMT 245
27.4. USING LIBSTORAGEMGMT 246
. . . . . . . . . .28.
CHAPTER . . .PERSISTENT
. . . . . . . . . . . .MEMORY:
. . . . . . . . .NVDIMMS
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .251
............
NVDIMMs Interleaving 251
Persistent Memory Access Modes 251
28.1. CONFIGURING PERSISTENT MEMORY WITH NDCTL 252
28.2. CONFIGURING PERSISTENT MEMORY FOR USE AS A BLOCK DEVICE (LEGACY MODE) 255
28.3. CONFIGURING PERSISTENT MEMORY FOR FILE SYSTEM DIRECT ACCESS (DAX) 255
28.4. CONFIGURING PERSISTENT MEMORY FOR USE IN DEVICE DAX MODE 256
28.5. TROUBLESHOOTING 257
. . . . . .III.
PART . . DATA
. . . . . .DEDUPLICATION
. . . . . . . . . . . . . . . AND
. . . . .COMPRESSION
. . . . . . . . . . . . . .WITH
. . . . .VDO
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258
............
. . . . . . . . . .29.
CHAPTER . . .VDO
. . . . INTEGRATION
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259
............
29.1. THEORETICAL OVERVIEW OF VDO 259
29.2. SYSTEM REQUIREMENTS 262
29.3. GETTING STARTED WITH VDO 265
29.4. ADMINISTERING VDO 269
29.5. DEPLOYMENT SCENARIOS 278
29.6. TUNING VDO 279
29.7. VDO COMMANDS 285
29.8. STATISTICS FILES IN /SYS 303
. . . . . . . . . .30.
CHAPTER . . .VDO
. . . . EVALUATION
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .305
............
30.1. INTRODUCTION 305
30.2. TEST ENVIRONMENT PREPARATIONS 305
30.3. DATA EFFICIENCY TESTING PROCEDURES 309
30.4. PERFORMANCE TESTING PROCEDURES 317
30.5. ISSUE REPORTING 322
30.6. CONCLUSION 323
. . . . . . . . . . A.
APPENDIX . . .RED
. . . .HAT
. . . . CUSTOMER
. . . . . . . . . . .PORTAL
. . . . . . . .LABS
. . . . . RELEVANT
. . . . . . . . . . TO
. . . STORAGE
. . . . . . . . . .ADMINISTRATION
. . . . . . . . . . . . . . . . . . . . . . .324
............
SCSI DECODER 324
FILE SYSTEM LAYOUT CALCULATOR 324
LVM RAID CALCULATOR 324
ISCSI HELPER 324
4
Table of Contents
. . . . . . . . . . B.
APPENDIX . . .REVISION
. . . . . . . . .HISTORY
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .326
............
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
INDEX ............
5
Storage Administration Guide
6
CHAPTER 1. OVERVIEW
The Storage Administration Guide contains extensive information on supported file systems and data
storage features in Red Hat Enterprise Linux 7. This book is intended as a quick reference for
administrators managing single-node (that is, non-clustered) storage solutions.
The Storage Administration Guide is split into the following parts: File Systems, Storage Administration,
and Data Deduplication and Compression with VDO.
The File Systems part details the various file systems Red Hat Enterprise Linux 7 supports. It describes
them and explains how best to utilize them.
The Storage Administration part details the various tools and storage administration tasks Red Hat
Enterprise Linux 7 supports. It describes them and explains how best to utilize them.
The Data Deduplication and Compression with VDO part describes the Virtual Data Optimizer (VDO). It
explains how to use VDO to reduce your storage requirements.
Snapper
Red Hat Enterprise Linux 7 introduces a new tool called Snapper that allows for the easy creation and
management of snapshots for LVM and Btrfs. For more information, see Chapter 14, Creating and
Maintaining Snapshots with Snapper.
NOTE
Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.
Btrfs is a local file system that aims to provide better performance and scalability, including integrated
LVM operations. This file system is not fully supported by Red Hat and as such is a technology preview.
For more information on Btrfs, see Chapter 6, Btrfs (Technology Preview).
PART I. FILE SYSTEMS
NOTE
Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.
For an overview of Red Hat Enterprise Linux file systems and storage limits, see Red Hat
Enterprise Linux technology capabilities and limits at Red Hat Knowledgebase.
XFS is the default file system in Red Hat Enterprise Linux 7, and Red Hat recommends using XFS unless you have a strong reason to use another file system. For general information on common file systems and their properties, see the following Red Hat Knowledgebase article: How to Choose your Red Hat Enterprise Linux File System.
CHAPTER 2. FILE SYSTEM STRUCTURE AND MAINTENANCE
Categorizing files in this manner helps correlate the function of each file with the permissions assigned
to the directories which hold them. How the operating system and its users interact with a file determines
the directory in which it is placed, whether that directory is mounted with read-only or read and write
permissions, and the level of access each user has to that file. The top level of this organization is crucial; access to the underlying directories can be restricted, and security problems could arise if, from the top level down, the layout does not adhere to a rigid structure.
The FHS document is the authoritative reference to any FHS-compliant file system, but the standard
leaves many areas undefined or extensible. This section is an overview of the standard and a description
of the parts of the file system not covered by the standard.
One benefit of this structure is the ability to mount a /usr/ partition as read-only. This is crucial, since /usr/ contains
common executables and should not be changed by users. In addition, since /usr/ is mounted
as read-only, it should be mountable from the CD-ROM drive or from another machine via a
read-only NFS mount.
NOTE
The directories that are available depend on what is installed on any given system. The
following lists are only an example of what may be found.
df Command
The df command reports the system's disk space usage. Its output looks similar to the following:
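(The example below is illustrative only; device names, sizes, and mount points are hypothetical and will differ on your system.)
# df
Filesystem                      1K-blocks     Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  11675568  6272120   4810348  57% /
/dev/sda1                          100691    18233     77259  20% /boot
tmpfs                              322856        0    322856   0% /dev/shm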
By default, df shows the partition size in 1 kilobyte blocks and the amount of used and available disk
space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The
-h argument stands for "human-readable" format. The output for df -h looks similar to the following:
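(Again, the values shown are illustrative only.)
# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   12G  6.0G  4.6G  57% /
/dev/sda1                         99M   18M   76M  20% /boot
tmpfs                            316M     0  316M   0% /dev/shm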
NOTE
In the given examples, the mounted partition /dev/shm represents the system's virtual
memory file system.
du Command
The du command displays the estimated amount of space being used by files in a directory, displaying
the disk usage of each subdirectory. The last line in the output of du shows the total disk usage of the
directory. To see only the total disk usage of a directory in human-readable format, use du -hs. For
more options, see man du.
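For example, to check the total usage of a hypothetical /home/user directory (the path and the reported size are placeholders):
# du -hs /home/user
1.3G    /home/user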
To view the system's partitions and disk space usage in a graphical format, use the Gnome System Monitor by clicking on Applications → System Tools → System Monitor or using the command gnome-system-monitor. Select the File Systems tab to view the system's partitions.
The /boot/ directory contains static files required to boot the system, for example, the Linux kernel.
These files are essential for the system to boot properly.
WARNING
Do not remove the /boot/ directory. Doing so renders the system unbootable.
The /dev/ directory contains device nodes that represent the following device types:
These device nodes are essential for the system to function properly. The udevd daemon creates and
removes device nodes in /dev/ as needed.
Devices in the /dev/ directory and subdirectories are defined as either character (providing only a serial
stream of input and output, for example, mouse or keyboard) or block (accessible randomly, such as a
hard drive or a floppy drive). If GNOME or KDE is installed, some storage devices are automatically
detected when connected (such as with USB) or inserted (such as a CD or DVD drive), and a pop-up
window displaying the contents appears.
Examples of common files in the /dev/ directory include the following:
Mapped device
A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02.
Static device
A traditional storage volume, for example, /dev/sdbX, where sdb is a storage device name and X is the partition number. /dev/sdbX can also be /dev/disk/by-id/WWID or /dev/disk/by-uuid/UUID. For more information, see Section 25.7, “Persistent Naming”.
The /etc/ directory is reserved for configuration files that are local to the machine. It should not contain
any binaries; if there are any binaries, move them to /usr/bin/ or /usr/sbin/.
For example, the /etc/skel/ directory stores "skeleton" user files, which are used to populate a home
directory when a user is first created. Applications also store their configuration files in this directory and
may reference them when executed. The /etc/exports file controls which file systems are exported to remote hosts.
The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts.
For all removable storage media, use the /media/ directory. Automatically detected removable media is
mounted in the /media directory.
The /opt/ directory is normally reserved for software and add-on packages that are not part of the
default installation. A package that installs to /opt/ creates a directory bearing its name, for example,
/opt/packagename/. In most cases, such packages follow a predictable subdirectory structure; most
store their binaries in /opt/packagename/bin/ and their man pages in /opt/packagename/man/.
The /proc/ directory contains special files that either extract information from the kernel or send
information to it. Examples of such information include system memory, CPU information, and hardware
configuration. For more information about /proc/, see Section 2.3, “The /proc Virtual File System”.
The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This
directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data
that only pertains to a specific user should go in the /home/ directory.
The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel. With the increased
support for hot plug hardware devices in the kernel, the /sys/ directory contains information similar to
that held by /proc/, but displays a hierarchical view of device information specific to hot plug devices.
The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is
often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following
subdirectories:
/usr/bin
This directory is used for binaries.
/usr/etc
This directory is used for system-wide configuration files.
/usr/games
This directory stores games.
/usr/include
This directory is used for C header files.
/usr/kerberos
This directory is used for Kerberos-related binaries and files.
/usr/lib
This directory is used for object files and libraries that are not designed to be directly utilized by shell
scripts or users.
As of Red Hat Enterprise Linux 7.0, the /lib/ directory has been merged with /usr/lib. Now it
also contains libraries needed to execute the binaries in /usr/bin/ and /usr/sbin/. These
shared library images are used to boot the system or execute commands within the root file system.
/usr/libexec
This directory contains small helper programs called by other programs.
/usr/sbin
As of Red Hat Enterprise Linux 7.0, /sbin has been moved to /usr/sbin. This means that it
contains all system administration binaries, including those essential for booting, restoring,
recovering, or repairing the system. The binaries in /usr/sbin/ require root privileges to use.
/usr/share
This directory stores files that are not architecture-specific.
/usr/src
This directory stores source code.
The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is
used by the system administrator when installing software locally, and should be safe from being
overwritten during system updates. The /usr/local directory has a structure similar to /usr/, and
contains the following subdirectories:
/usr/local/bin
/usr/local/etc
/usr/local/games
/usr/local/include
/usr/local/lib
/usr/local/libexec
/usr/local/sbin
/usr/local/share
/usr/local/src
Red Hat Enterprise Linux's usage of /usr/local/ differs slightly from the FHS. The FHS states that
/usr/local/ should be used to store software that should remain safe from system software
upgrades. Since the RPM Package Manager can perform software upgrades safely, it is not necessary to protect files by placing them in /usr/local/.
Instead, Red Hat Enterprise Linux uses /usr/local/ for software local to the machine. For instance, if
the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install
a package or program under the /usr/local/ directory.
Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need
spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for
variable data, which includes spool directories and files, logging data, transient and temporary files.
Following are some of the directories found within the /var/ directory:
/var/account/
/var/arpwatch/
/var/cache/
/var/crash/
/var/db/
/var/empty/
/var/ftp/
/var/gdm/
/var/kerberos/
/var/lib/
/var/local/
/var/lock/
/var/log/
/var/mailman/
/var/named/
/var/nis/
/var/opt/
/var/preserve/
/var/run/
/var/spool/
/var/tmp/
/var/tux/
/var/www/
/var/yp/
IMPORTANT
System log files, such as messages and lastlog, go in the /var/log/ directory. The
/var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory,
usually in directories for the program using the file. The /var/spool/ directory has subdirectories that
store data files for some programs. These subdirectories include:
/var/spool/at/
/var/spool/clientmqueue/
/var/spool/cron/
/var/spool/cups/
/var/spool/exim/
/var/spool/lpd/
/var/spool/mail/
/var/spool/mailman/
/var/spool/mqueue/
/var/spool/news/
/var/spool/postfix/
/var/spool/repackage/
/var/spool/rwho/
/var/spool/samba/
/var/spool/squid/
/var/spool/squirrelmail/
/var/spool/up2date/
/var/spool/uucp/
/var/spool/uucppublic/
/var/spool/vbox/
Most files pertaining to RPM are kept in the /var/lib/rpm/ directory. For more information on RPM,
see man rpm.
The /var/cache/yum/ directory contains files used by the Package Updater, including RPM header
information for the system. This location may also be used to temporarily store RPMs downloaded while
updating the system. For more information about the Red Hat Network, see https://rhn.redhat.com/.
Another location specific to Red Hat Enterprise Linux is the /etc/sysconfig/ directory. This directory
stores a variety of configuration information. Many scripts that run at boot time use the files in this
directory.
The /proc file system is not used for storage. Its main purpose is to provide a file-based interface to
hardware, memory, running processes, and other system components. Real-time information can be
retrieved on many system components by viewing the corresponding /proc file. Some of the files within
/proc can also be manipulated (by both users and applications) to configure the kernel.
The following /proc files are relevant in managing and monitoring system storage:
/proc/devices
Displays various character and block devices that are currently configured.
/proc/filesystems
Lists all file system types currently supported by the kernel.
/proc/mdstat
Contains current information on multiple-disk or RAID configurations on the system, if they exist.
/proc/mounts
Lists all mounts currently used by the system.
/proc/partitions
Contains partition block allocation information.
For more information about the /proc file system, see the Red Hat Enterprise Linux 7 Deployment
Guide.
Batch discard operations are run explicitly by the user with the fstrim command. This
command discards all unused blocks in a file system that match the user's criteria.
Online discard operations are specified at mount time, either with the -o discard option as
part of a mount command or with the discard option in the /etc/fstab file. They run in real
time without user intervention. Online discard operations only discard blocks that are
transitioning from used to free.
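As a sketch, an online discard can be enabled for a hypothetical ext4 file system by adding the discard option to its /etc/fstab entry, for example:
/dev/sda2    /data    ext4    defaults,discard    0 2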
Both operation types are supported for use with ext4 file systems as of Red Hat Enterprise Linux 6.2 and
later and with XFS file systems since Red Hat Enterprise Linux 6.4. Also, the block device underlying the
file system must support physical discard operations. Physical discard operations are supported if the
value stored in the /sys/block/device/queue/discard_max_bytes file is not zero.
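For example, assuming a hypothetical device sdb with a file system mounted at /mnt/data, you can check discard support and then run a batch discard:
# cat /sys/block/sdb/queue/discard_max_bytes
# fstrim -v /mnt/data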
In contrast, batch discard fails on a logical device (LVM or MD) comprised of multiple devices if any one of the devices does not support discard operations:
# fstrim -v /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported
NOTE
The mount command allows you to mount a device that does not support discard
operations with the -o discard option.
Red Hat recommends batch discard operations unless the system's workload is such that batch discard
is not feasible, or online discard operations are necessary to maintain performance.
For more information, see the fstrim(8) and mount(8) man pages.
CHAPTER 3. THE XFS FILE SYSTEM
The XFS file system can be defragmented and enlarged while mounted and active.
In addition, Red Hat Enterprise Linux 7 supports backup and restore utilities specific to XFS.
Allocation Features
XFS features the following allocation schemes:
Extent-based allocation
Delayed allocation
Space pre-allocation
Delayed allocation and other performance optimizations affect XFS the same way that they do ext4.
Namely, a program's writes to an XFS file system are not guaranteed to be on-disk unless the
program issues an fsync() call afterwards.
For more information on the implications of delayed allocation on a file system (ext4 and XFS), see
Allocation Features in Chapter 5, The ext4 File System.
NOTE
Quota journaling
This avoids the need for lengthy quota consistency checks after a crash.
Project/directory quotas
This allows quota restrictions over a directory tree.
Subsecond timestamps
Procedure
# mkfs.xfs block_device
Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.
When using mkfs.xfs on a block device containing an existing file system, add the -f
option to overwrite that file system.
NOTE
After an XFS file system is created, its size cannot be reduced. However, it can still be
enlarged using the xfs_growfs command. For more information, see Section 3.4,
“Increasing the Size of an XFS File System”).
When creating filesystems on LVM or MD volumes, mkfs.xfs chooses an optimal geometry. This may
also be true on some hardware RAIDs that export geometry information to the operating system.
If the device exports stripe geometry information, the mkfs utility (for ext3, ext4, and xfs) automatically uses this geometry. If the mkfs utility does not detect the stripe geometry even though the storage does, in fact, have one, you can specify the geometry manually when creating the file system using the following options:
su=value
Specifies a stripe unit or RAID chunk size. The value must be specified in bytes, with an optional k,
m, or g suffix.
sw=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.
The following example specifies a chunk size of 64k on a RAID device containing 4 stripe units:
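A sketch of that command (the device path is a placeholder) is:
# mkfs.xfs -d su=64k,sw=4 /dev/device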
Additional Resources
For more information about creating XFS file systems, see:
The Red Hat Enterprise Linux Performance Tuning Guide, chapter Tuning XFS
NOTE
Unlike mke2fs, mkfs.xfs does not utilize a configuration file; all options are specified on the command line.
Write Barriers
By default, XFS uses write barriers to ensure file system integrity even when power is lost to a device
with write caches enabled. For devices without write caches, or with battery-backed write caches,
disable the barriers by using the nobarrier option:
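For example (the device and mount point are placeholders):
# mount -o nobarrier /dev/device /mount/point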
For more information about write barriers, see Chapter 22, Write Barriers.
When managing on a per-directory or per-project basis, XFS manages the disk usage of directory
hierarchies associated with a specific project. In doing so, XFS recognizes cross-organizational "group"
boundaries between projects. This provides a level of control that is broader than what is available when
managing quotas for users or groups.
XFS quotas are enabled at mount time, with specific mount options. Each mount option can also be specified as noenforce; this allows usage reporting without enforcing any limits. Valid quota mount options are uquota/uqnoenforce (user quotas), gquota/gqnoenforce (group quotas), and pquota/pqnoenforce (project quotas).
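For example, to mount a hypothetical /dev/sdb1 on /mnt/data with user quota enforcement enabled:
# mount -o uquota /dev/sdb1 /mnt/data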
Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage. By
default, xfs_quota is run interactively, and in basic mode. Basic mode subcommands simply report
usage, and are available to all users. Basic xfs_quota subcommands include:
quota username/userID
Show usage and limits for the given username or numeric userID
df
Shows free and used counts for blocks and inodes.
In contrast, xfs_quota also has an expert mode. The subcommands of this mode allow actual
configuration of limits, and are available only to users with elevated privileges. To use expert mode
subcommands interactively, use the following command:
# xfs_quota -x
report /path
Reports quota information for a specific file system.
limit
Modify quota limits.
For a complete list of subcommands for either basic or expert mode, use the subcommand help.
All subcommands can also be run directly from a command line using the -c option, with -x for expert
subcommands.
For example, to display a sample quota report for /home (on /dev/blockdevice), use the
command xfs_quota -x -c 'report -h' /home. This displays output similar to the following:
To set a soft and hard inode count limit of 500 and 700 respectively for user john, whose home
directory is /home/john, use the following command:
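A sketch of that command is:
# xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/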
In this case, pass the mount point of the mounted XFS file system (here, /home/).
By default, the limit subcommand recognizes targets as users. When configuring the limits for a group, use the -g option (as in the following example). Similarly, use -p for projects.
Soft and hard block limits can also be configured using bsoft or bhard instead of isoft or ihard.
For example, to set a soft and hard block limit of 1000m and 1200m, respectively, to group
accounting on the /target/path file system, use the following command:
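A sketch of that command is:
# xfs_quota -x -c 'limit -g bsoft=1000m bhard=1200m accounting' /target/path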
Quotas for projects with initialized directories can then be configured, with:
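For example, assuming a project named accounting has already been initialized (defined in /etc/projects and /etc/projid), its block limit could be set along these lines:
# xfs_quota -x -c 'limit -p bhard=1g accounting' /target/path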
Generic quota configuration tools (quota, repquota, and edquota for example) may also be used to
manipulate XFS quotas. However, these tools cannot be used with XFS project quotas.
IMPORTANT
Red Hat recommends the use of xfs_quota over all other available tools.
For more information about setting XFS quotas, see man xfs_quota, man projid(5), and man
projects(5).
The -D size option grows the file system to the specified size (expressed in file system blocks).
Without the -D size option, xfs_growfs will grow the file system to the maximum size supported by
the device.
Before growing an XFS file system with -D size, ensure that the underlying block device is of an
appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block
device.
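For example, to grow a file system mounted on a hypothetical /mnt/data to the maximum supported size, or to a specific (illustrative) block count:
# xfs_growfs /mnt/data
# xfs_growfs -D 1986208 /mnt/data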
NOTE
While XFS file systems can be grown while mounted, their size cannot be reduced at all.
For more information about growing a file system, see man xfs_growfs.
# xfs_repair /dev/device
The xfs_repair utility is highly scalable and is designed to repair even very large file systems with
many inodes efficiently. Unlike other Linux file systems, xfs_repair does not run at boot time, even
when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, xfs_repair
simply replays the log at mount time, ensuring a consistent file system.
WARNING
The xfs_repair utility cannot repair an XFS file system with a dirty log. To clear
the log, mount and unmount the XFS file system. If the log is corrupt and cannot be
replayed, use the -L option ("force log zeroing") to clear the log, that is,
xfs_repair -L /dev/device. Be aware that this may result in further
corruption or data loss.
For more information about repairing an XFS file system, see man xfs_repair.
# xfs_freeze -f|-u mount-point
Suspending write activity allows hardware-based device snapshots to be used to capture the file system
in a consistent state.
NOTE
The xfs_freeze utility is provided by the xfsprogs package, which is only available on
x86_64.
# xfs_freeze -f /mount/point
# xfs_freeze -u /mount/point
When taking an LVM snapshot, it is not necessary to use xfs_freeze to suspend the file system first.
Rather, the LVM management tools will automatically suspend the XFS file system before taking the
snapshot.
For more information about freezing and unfreezing an XFS file system, see man xfs_freeze.
Backup
The xfsdump utility also allows you to write multiple backups to the same tape. A backup can
span multiple tapes.
To back up multiple file systems to a single tape device, simply write the backup to a tape that
already contains an XFS backup. This appends the new backup to the previous one. By default,
xfsdump never overwrites existing backups.
The xfsdump utility uses dump levels to determine a base backup to which other backups are
relative. Numbers from 0 to 9 refer to increasing dump levels. An incremental backup only backs
up files that have changed since the last dump of a lower level:
A level 1 dump is the first incremental backup after a full backup. The next incremental
backup would be level 2, which only backs up files that have changed since the last level 1
dump; and so on, to a maximum of level 9.
Exclude files from a backup using size, subtree, or inode flags to filter them.
Restoration
The xfsrestore utility restores file systems from backups produced by xfsdump. The xfsrestore utility
has two modes:
The simple mode enables users to restore an entire file system from a level 0 dump. This is the
default mode.
The cumulative mode enables file system restoration from an incremental backup: that is, level 1
to level 9.
A unique session ID or session label identifies each backup. Restoring a backup from a tape containing
multiple backups requires its corresponding session ID or label.
To extract, add, or delete specific files from a backup, enter the xfsrestore interactive mode. The
interactive mode provides a set of commands to manipulate the backup files.
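The backup procedure uses the xfsdump command; as a sketch, its general form is (the placeholders are explained below):
# xfsdump -l level -f backup-destination path-to-xfs-filesystem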
Replace level with the dump level of your backup. Use 0 to perform a full backup or 1 to 9 to
perform consequent incremental backups.
Replace backup-destination with the path where you want to store your backup. The
destination can be a regular file, a tape drive, or a remote tape device. For example,
/backup-files/Data.xfsdump for a file or /dev/st0 for a tape drive.
Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to back
up. For example, /mnt/data/. The file system must be mounted.
When backing up multiple file systems and saving them on a single tape device, add a
session label to each backup using the -L label option so that it is easier to identify them
when restoring. Replace label with any name for your backup: for example, backup_data.
To back up the content of XFS file systems mounted on the /boot/ and /data/ directories
and save them as files in the /backup-files/ directory:
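For example (the file names under /backup-files/ are illustrative):
# xfsdump -l 0 -f /backup-files/boot.xfsdump /boot
# xfsdump -l 0 -f /backup-files/data.xfsdump /data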
To back up multiple file systems on a single tape device, add a session label to each backup
using the -L label option:
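For example, assuming the tape device /dev/st0:
# xfsdump -l 0 -L backup_boot -f /dev/st0 /boot
# xfsdump -l 0 -L backup_data -f /dev/st0 /data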
Additional Resources
For more information about backing up XFS file systems, see the xfsdump(8) man page.
Prerequisites
You need a file or tape backup of XFS file systems, as described in Section 3.7.2, “Backing Up
an XFS File System”.
The command to restore the backup varies depending on whether you are restoring from a full
backup or an incremental one, or are restoring multiple backups from a single tape device:
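In its simplest form, a restore from a full backup looks like this (placeholders are explained below):
# xfsrestore -f backup-location restoration-path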
Replace backup-location with the location of the backup. This can be a regular file, a tape
drive, or a remote tape device. For example, /backup-files/Data.xfsdump for a file or
/dev/st0 for a tape drive.
Replace restoration-path with the path to the directory where you want to restore the file
system. For example, /mnt/data/.
To restore a file system from an incremental (level 1 to level 9) backup, add the -r option.
To restore a backup from a tape device that contains multiple backups, specify the backup
using the -S or -L options.
The -S lets you choose a backup by its session ID, while the -L lets you choose by the
session label. To obtain the session ID and session labels, use the xfsrestore -I
command.
Replace session-id with the session ID of the backup. For example, b74a3586-e52e-
4a4a-8775-c3334fa8ea2c. Replace session-label with the session label of the backup.
For example, my_backup_session_label.
The interactive dialog begins after xfsrestore finishes reading the specified device.
Available commands in the interactive xfsrestore shell include cd, ls, add, delete, and
extract; for a complete list of commands, use the help command.
To restore the XFS backup files and save their content into directories under /mnt/:
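For example, using the illustrative backup files from the previous section:
# xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/
# xfsrestore -f /backup-files/data.xfsdump /mnt/data/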
To restore from a tape device containing multiple backups, specify each backup by its session label
or session ID:
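For example, to restore the backup labeled my_backup_session_label from the tape device /dev/st0 into /mnt/data/:
# xfsrestore -f /dev/st0 -L my_backup_session_label /mnt/data/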
The informational messages keep appearing until the matching backup is found.
Additional Resources
For more information about restoring XFS file systems, see the xfsrestore(8) man page.
XFS currently recognizes the following error conditions, for which you can configure the desired behavior specifically: EIO (an error occurred while writing to the device), ENOSPC (no space left on the device), and ENODEV (the device can no longer be found).
All other possible error conditions, which do not have specific handlers defined, share a single, global
configuration.
You can set the conditions under which XFS deems the errors permanent, both in the maximum number
of retries and the maximum time in seconds. XFS stops retrying when any one of the conditions is met.
There is also an option to immediately cancel the retries when unmounting the file system, regardless of
any other configuration. This allows the unmount operation to succeed even in case of persistent errors.
/sys/fs/xfs/device/error/metadata/condition/retry_timeout_seconds: the
time limit in seconds after which XFS will stop retrying the operation
All other possible error conditions, apart from those described in the previous section, share a common
configuration in these files:
3.8.2. Setting File System Behavior for Specific and Undefined Conditions
To set the maximum number of retries, write the desired number to the max_retries file.
value is a number between -1 and the maximum possible value of int, the C signed integer type. This
is 2147483647 on 64-bit Linux.
To set the time limit, write the desired number of seconds to the retry_timeout_seconds file.
value is a number between -1 and 86400, which is the number of seconds in a day.
In both the max_retries and retry_timeout_seconds options, -1 means to retry forever and 0 to
stop immediately.
device is the name of the device, as found in the /dev/ directory; for example, sda.
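As a sketch, assuming a device named sda and the EIO condition directory, the limits could be set like this:
# echo 5 > /sys/fs/xfs/sda/error/metadata/EIO/max_retries
# echo 60 > /sys/fs/xfs/sda/error/metadata/EIO/retry_timeout_seconds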
NOTE
The default behavior for each error condition is dependent on the error context. Some errors, like ENODEV, are considered to be fatal and unrecoverable, regardless of the retry count, so their default value is 0.
value is either 1 or 0:
device is the name of the device, as found in the /dev/ directory; for example, sda.
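The unmount-time behavior described above is exposed through a sysfs file assumed here to be named fail_at_unmount; as a sketch for a hypothetical device sda, writing 1 cancels retries when the file system is unmounted:
# echo 1 > /sys/fs/xfs/sda/error/fail_at_unmount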
xfs_fsr
Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr
defragments all regular files in all mounted XFS file systems. This utility also allows users to suspend
a defragmentation at a specified time and resume from where it left off later.
In addition, xfs_fsr allows the defragmentation of only one file, as in xfs_fsr /path/to/file. Red Hat advises against periodically defragmenting an entire file system, because XFS avoids fragmentation by default and system-wide defragmentation could cause the side effect of fragmentation in free space.
xfs_bmap
Prints the map of disk blocks used by files in an XFS filesystem. This map lists each extent used by a
specified file, as well as regions in the file with no corresponding blocks (that is, holes).
xfs_info
Prints XFS file system information.
xfs_admin
Changes the parameters of an XFS file system. The xfs_admin utility can only modify parameters of
unmounted devices or file systems.
xfs_copy
Copies the contents of an entire XFS file system to one or more targets in parallel.
The following utilities are also useful in debugging and analyzing XFS file systems:
xfs_metadump
Copies XFS file system metadata to a file. Red Hat only supports using the xfs_metadump utility to
copy unmounted file systems or read-only mounted file systems; otherwise, generated dumps could
be corrupted or inconsistent.
xfs_mdrestore
Restores an XFS metadump image (generated using xfs_metadump) to a file system image.
xfs_db
Debugs an XFS file system.
For more information about these utilities, see their respective man pages.
The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and can be selected at
installation. While it is possible to migrate from ext4 to XFS, it is not required.
Ext3/4 runs e2fsck in userspace at boot time to recover the journal as needed. XFS, by comparison,
performs journal recovery in kernelspace at mount time. An fsck.xfs shell script is provided but
does not perform any useful action as it is only there to satisfy initscript requirements.
When an XFS file system repair or check is requested, use the xfs_repair command. Use the -n
option for a read-only check.
The xfs_repair command will not operate on a file system with a dirty log. To repair such a file system, first mount and unmount it to replay the log. If the log is corrupt and cannot be replayed, the -L option can be used to zero out the log.
For more information on file system repair of XFS file systems, see Section 12.2.2, “XFS”.
Quotas
XFS quotas are not a remountable option. The -o quota option must be specified on the initial
mount for quotas to be in effect.
While the standard tools in the quota package can perform basic quota administrative tasks (tools
such as setquota and repquota), the xfs_quota tool can be used for XFS-specific features, such as
Project Quota administration.
The quotacheck command has no effect on an XFS file system. The first time quota accounting is
turned on XFS does an automatic quotacheck internally. Because XFS quota metadata is a first-
class, journaled metadata object, the quota system will always be consistent until quotas are
manually turned off.
Inode numbers
For file systems larger than 1 TB with 256-byte inodes, or larger than 2 TB with 512-byte inodes, XFS
inode numbers might exceed 2^32. Such large inode numbers cause 32-bit stat calls to fail with the
EOVERFLOW return value. The described problem might occur when using the default Red Hat
Enterprise Linux 7 configuration: non-striped with four allocation groups. A custom configuration, for
example file system extension or changing XFS file system parameters, might lead to a different
behavior.
Applications usually handle such larger inode numbers correctly. If needed, mount the XFS file
system with the -o inode32 parameter to enforce inode numbers below 2^32. Note that using
inode32 does not affect inodes that are already allocated with 64-bit numbers.
IMPORTANT
Do not use the inode32 option unless it is required by a specific environment. The
inode32 option changes allocation behavior. As a consequence, the ENOSPC error
might occur if no space is available to allocate inodes in the lower disk blocks.
Speculative preallocation
XFS uses speculative preallocation to allocate blocks past EOF as files are written. This avoids file
fragmentation due to concurrent streaming write workloads on NFS servers. By default, this
preallocation increases with the size of the file and will be apparent in "du" output. If a file with
speculative preallocation is not dirtied for five minutes the preallocation will be discarded. If the inode
is cycled out of cache before that time, then the preallocation will be discarded when the inode is
reclaimed.
If premature ENOSPC problems are seen due to speculative preallocation, a fixed preallocation
amount may be specified with the -o allocsize=amount mount option.
Fragmentation-related tools
Fragmentation is rarely a significant issue on XFS file systems due to heuristics and behaviors, such
as delayed allocation and speculative preallocation. However, tools exist for measuring file system
fragmentation as well as defragmenting file systems. Their use is not encouraged.
The xfs_db frag command attempts to distill all file system allocations into a single fragmentation
number, expressed as a percentage. The output of the command requires significant expertise to
understand its meaning. For example, a fragmentation factor of 75% means only an average of 4
extents per file. For this reason the output of xfs_db's frag is not considered useful and more careful
analysis of any fragmentation problems is recommended.
WARNING
The xfs_fsr command may be used to defragment individual files, or all files on a file system. The latter is especially not recommended, as it may destroy locality of files and may fragment free space.
The following table compares common commands used with ext3 and ext4 to their XFS-specific
counterparts.
Table 3.1. Common Commands for ext3 and ext4 Compared to XFS
The following table lists generic tools that also function on XFS file systems, but the XFS versions have more specific functionality and as such are recommended.
More information on many of the listed XFS commands is included in Chapter 3, The XFS File System. You can also consult the manual pages of the listed XFS administration tools for more information.
CHAPTER 4. THE EXT3 FILE SYSTEM
Availability
After an unexpected power failure or system crash (also called an unclean system shutdown), each
mounted ext2 file system on the machine must be checked for consistency by the e2fsck program.
This is a time-consuming process that can delay system boot time significantly, especially with large
volumes containing a large number of files. During this time, any data on the volumes is unreachable.
It is possible to run fsck -n on a live filesystem. However, it will not make any changes and may
give misleading results if partially written metadata is encountered.
If LVM is used in the stack, another option is to take an LVM snapshot of the filesystem and run fsck
on it instead.
Finally, there is the option to remount the filesystem as read only. All pending metadata updates (and
writes) are then forced to the disk prior to the remount. This ensures the filesystem is in a consistent
state, provided there is no previous corruption. It is now possible to run fsck -n.
The journaling provided by the ext3 file system means that this sort of file system check is no longer
necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is
in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file
system after an unclean system shutdown does not depend on the size of the file system or the
number of files; rather, it depends on the size of the journal used to maintain consistency. The default
journal size takes about a second to recover, depending on the speed of the hardware.
NOTE
The only journaling mode in ext3 supported by Red Hat is data=ordered (default).
Data Integrity
The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown
occurs. The ext3 file system allows you to choose the type and level of protection that your data
receives. With regard to the state of the file system, ext3 volumes are configured to keep a high level
of data consistency by default.
Speed
Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2
because ext3's journaling optimizes hard drive head motion. You can choose from three journaling
modes to optimize speed, but doing so means trade-offs in regards to data integrity if the system was
to fail.
NOTE
The only journaling mode in ext3 supported by Red Hat is data=ordered (default).
Easy Transition
It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without
reformatting. For more information on performing this task, see Section 4.2, “Converting to an ext3
File System” .
NOTE
Red Hat Enterprise Linux 7 provides a unified extN driver. It does this by disabling the
ext2 and ext3 configurations and instead uses ext4.ko for these on-disk formats. This
means that kernel messages will always refer to ext4 regardless of the ext file system
used.
Prerequisites
Create or reuse a partition on your disk. For information on creating MBR or GPT partitions, see
Chapter 13, Partitions.
Procedure
1. Format the partition or LVM volume with the ext3 file system using the mkfs.ext3 utility:
# mkfs.ext3 block_device
Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.
Configuring UUID
It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system,
use the -U option:
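The command takes the following form (UUID and device are placeholders described below):
# mkfs.ext3 -U UUID device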
Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-
96d749c02da7.
Replace device with the path to an ext3 file system to have the UUID added to it: for example,
/dev/sda8.
To change the UUID of an existing file system, see Section 25.7.3.2, “Modifying Persistent Naming
Attributes”
Additional Resources
The mkfs.ext3(8) man page
NOTE
To convert ext2 to ext3, always use the e2fsck utility to check your file system before
and after using tune2fs. Before trying to convert ext2 to ext3, back up all file systems in
case any errors occur.
In addition, Red Hat recommends creating a new ext3 file system and migrating data to it,
instead of converting from ext2 to ext3 whenever possible.
To convert an ext2 file system to ext3, log in as root and type the following command in a terminal:
# tune2fs -j block_device
For simplicity, the sample commands in this section use the following value for the block device:
/dev/mapper/VolGroup00-LogVol02
1. Unmount the partition:
# umount /dev/mapper/VolGroup00-LogVol02
2. Change the file system type back to ext2 by removing the journal:
# tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02
3. Check the partition for errors:
# e2fsck -y /dev/mapper/VolGroup00-LogVol02
NOTE
If a .journal file exists at the root level of the partition, delete it.
To permanently change the partition to ext2, remember to update the /etc/fstab file, otherwise it will
revert back after booting.
CHAPTER 5. THE EXT4 FILE SYSTEM
NOTE
As with ext3, an ext4 volume must be unmounted in order to perform an fsck. For more
information, see Chapter 4, The ext3 File System.
Main Features
The ext4 file system uses extents (as opposed to the traditional block mapping scheme used by ext2
and ext3), which improves performance when using large files and reduces metadata overhead for
large files. In addition, ext4 also labels unallocated block groups and inode table sections
accordingly, which allows them to be skipped during a file system check. This makes for quicker file
system checks, which becomes more beneficial as the file system grows in size.
Allocation Features
The ext4 file system features the following allocation schemes:
Persistent pre-allocation
Delayed allocation
Multi-block allocation
Stripe-aware allocation
Because of delayed allocation and other performance optimizations, ext4's behavior of writing files to
disk is different from ext3. In ext4, when a program writes to the file system, it is not guaranteed to be
on-disk unless the program issues an fsync() call afterwards.
By default, ext3 automatically forces newly created files to disk almost immediately even without
fsync(). This behavior hid bugs in programs that did not use fsync() to ensure that written data
was on-disk. The ext4 file system, on the other hand, often waits several seconds to write out
changes to disk, allowing it to combine and reorder writes for better disk performance than ext3.
WARNING
Unlike ext3, the ext4 file system does not force data to disk on transaction
commit. As such, it takes longer for buffered writes to be flushed to disk. As with
any file system, use data integrity calls such as fsync() to ensure that data is
written to permanent storage.
Extended attributes (xattr) — This allows the system to associate several additional name
and value pairs per file.
Quota journaling — This avoids the need for lengthy quota consistency checks after a crash.
Procedure
# mkfs.ext4 block_device
Replace block_device with the path to a partition or a logical volume. For example,
/dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or
/dev/my-volgroup/my-lv.
In general, the default options are optimal for most usage scenarios.
Below is a sample output of this command, which displays the resulting file system geometry and
features:
IMPORTANT
It is possible to use tune2fs to enable certain ext4 features on ext3 file systems.
However, using tune2fs in this way has not been fully tested and is therefore not
supported in Red Hat Enterprise Linux 7. As a result, Red Hat cannot guarantee
consistent performance and predictable behavior for ext3 file systems converted or
mounted by using tune2fs.
When creating file systems on LVM or MD volumes, mkfs.ext4 chooses an optimal geometry. This
may also be true on some hardware RAIDs which export geometry information to the operating system.
To specify stripe geometry, use the -E option of mkfs.ext4 (that is, extended file system options) with
the following sub-options:
stride=value
Specifies the RAID chunk size.
stripe-width=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.
For both sub-options, value must be specified in file system block units. For example, to create a file
system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command:
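A sketch of that command (the device path is a placeholder, and the stripe-width of 64 assumes a four-disk stripe) is:
# mkfs.ext4 -E stride=16,stripe-width=64 /dev/device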
Configuring UUID
It is also possible to set a specific UUID for a file system. To specify a UUID when creating a file system,
use the -U option:
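The command takes the following form (UUID and device are placeholders described below):
# mkfs.ext4 -U UUID device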
Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-
96d749c02da7.
Replace device with the path to an ext4 file system to have the UUID added to it: for example,
/dev/sda8.
To change the UUID of an existing file system, see Section 25.7.3.2, “Modifying Persistent Naming
Attributes”
Additional Resources
For more information about creating ext4 file systems, see:
The ext4 file system also supports several mount options to influence behavior. For example, the acl
parameter enables access control lists, while the user_xattr parameter enables user extended
attributes. To enable both options, use their respective parameters with -o, as in:
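For example (the device and mount point are placeholders):
# mount -o acl,user_xattr /dev/device /mount/point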
As with ext3, the option data_err=abort can be used to abort the journal if an error occurs in file data.
The tune2fs utility also allows administrators to set default mount options in the file system superblock.
For more information on this, refer to man tune2fs.
Write Barriers
By default, ext4 uses write barriers to ensure file system integrity even when power is lost to a device
with write caches enabled. For devices without write caches, or with battery-backed write caches,
disable barriers using the nobarrier option, as in:
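For example (the device and mount point are placeholders):
# mount -o nobarrier /dev/device /mount/point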
For more information about write barriers, refer to Chapter 22, Write Barriers.
Before growing an ext4 file system, ensure that the underlying block device is of an appropriate size to
hold the file system later. Use the appropriate resizing methods for the affected block device.
An ext4 file system can be grown while mounted using the resize2fs command. The same command can also decrease the size of an ext4 file system, provided that the file system is unmounted first.
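Both operations use the same basic form (device and size are placeholders; the size may be omitted when growing to fill the container):
# resize2fs /dev/device size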
When resizing an ext4 file system, the resize2fs utility reads the size in units of file system block size,
unless a suffix indicating a specific unit is used. The following suffixes indicate specific units:
K — kilobytes
M — megabytes
G — gigabytes
NOTE
The size parameter is optional (and often redundant) when expanding. The resize2fs utility automatically expands the file system to fill all available space of the container, usually a logical volume or partition.
For more information about resizing an ext4 file system, refer to man resize2fs.
Prerequisites
If the system has been running for a long time, run the e2fsck utility on the partitions before
backup:
# e2fsck /dev/device
1. Back up configuration information, including the content of the /etc/fstab file and the output of
the fdisk -l command. This is useful for restoring the partitions.
To capture this information, run the sosreport or sysreport utilities. For more information about sosreport, see the What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? Knowledgebase article.
If the partition you are backing up is an operating system partition, boot your system into the
rescue mode. See the Booting to Rescue Mode section of the System Administrator's Guide.
Although it is possible to back up a data partition while it is mounted, the results of backing
up a mounted data partition can be unpredictable.
If you need to back up a mounted file system using the dump utility, do so when the file
system is not under a heavy load. The more activity is happening on the file system when
backing up, the higher the risk of backup corruption is.
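As a sketch, the basic backup command takes the following form (backup-file and device are placeholders described below):
# dump -0uf backup-file /dev/device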
Replace backup-file with a path to a file where you want to store the backup. Replace device
with the name of the ext4 partition you want to back up. Make sure that you are saving the
backup to a directory mounted on a different partition than the partition you are backing up.
To back up the content of the /dev/sda1, /dev/sda2, and /dev/sda3 partitions into
backup files stored in the /backup-files/ directory, use the following commands:
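For example (the backup file names match the restore example later in this chapter):
# dump -0uf /backup-files/sda1.dump /dev/sda1
# dump -0uf /backup-files/sda2.dump /dev/sda2
# dump -0uf /backup-files/sda3.dump /dev/sda3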
To do a remote backup, use the ssh utility or configure a password-less ssh login. For more
information on ssh and password-less login, see the Using the ssh Utility and Using Key-based
Authentication sections of the System Administrator's Guide.
Note that if using standard redirection, you must pass the -f option separately.
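For instance, a sketch that pipes the dump over ssh to an illustrative remote host; note the separate -f - sending the archive to standard output:
# dump -0u -f - /dev/device | ssh root@remoteserver.example.com dd of=backup-file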
Additional Resources
For more information, see the dump(8) man page.
Prerequisites
You need a backup of partitions and their metadata, as described in Section 5.4, “Backing up
ext2, ext3, or ext4 File Systems”.
1. If you are restoring an operating system partition, boot your system into Rescue Mode. See the
Booting to Rescue Mode section of the System Administrator's Guide.
2. Rebuild the partitions you want to restore by using the fdisk or parted utilities.
If the partitions no longer exist, recreate them. The new partitions must be large enough to
contain the restored data. It is important to get the start and end numbers right; these are the
starting and ending sector numbers of the partitions obtained from the fdisk utility when
backing up.
# mkfs.ext4 /dev/device
IMPORTANT
4. If you created new partitions, re-label all the partitions so they match their entries in the
/etc/fstab file:
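For example (the label value is whatever the corresponding /etc/fstab entry expects):
# e2label /dev/device label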
# mkdir /mnt/device
# mount -t ext4 /dev/device /mnt/device
# cd /mnt/device
# restore -rf device-backup-file
If you want to restore on a remote machine or restore from a backup file that is stored on a
remote host, you can use the ssh utility. For more information on ssh, see the Using the ssh
Utility section of the System Administrator's Guide.
Note that you need to configure a password-less login for the following commands. For more
information on setting up a password-less ssh login, see the Using Key-based Authentication
section of the System Administrator's Guide.
To restore a partition on a remote machine from a backup file stored on the same machine:
# ssh remote-address "cd /mnt/local-directory && cat backup-file | /usr/sbin/restore -r -f -"
To restore a partition on a remote machine from a backup file stored on a different remote
machine:
7. Reboot:
# systemctl reboot
To restore the /dev/sda1, /dev/sda2, and /dev/sda3 partitions from Example 5.2, “Backing up
Multiple ext4 Partitions”:
# mkfs.ext4 /dev/sda1
# mkfs.ext4 /dev/sda2
# mkfs.ext4 /dev/sda3
# mkdir /mnt/sda1
# mount -t ext4 /dev/sda1 /mnt/sda1
# mkdir /mnt/sda2
# mount -t ext4 /dev/sda2 /mnt/sda2
# mkdir /mnt/sda3
# mount -t ext4 /dev/sda3 /mnt/sda3
# mkdir /backup-files
# mount -t ext4 /dev/sda6 /backup-files
# cd /mnt/sda1
# restore -rf /backup-files/sda1.dump
# cd /mnt/sda2
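The remaining partitions follow the same pattern as sda1 (a sketch):
# restore -rf /backup-files/sda2.dump
# cd /mnt/sda3
# restore -rf /backup-files/sda3.dump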
6. Reboot:
# systemctl reboot
Additional Resources
For more information, see the restore(8) man page.
e2fsck
Used to repair an ext4 file system. This tool checks and repairs an ext4 file system more efficiently
than it does ext3, thanks to updates in the ext4 disk structure.
e2label
Changes the label on an ext4 file system. This tool also works on ext2 and ext3 file systems.
quota
Controls and reports on disk space (blocks) and file (inode) usage by users and groups on an ext4 file
system. For more information on using quota, refer to man quota and Section 17.1, “Configuring
Disk Quotas”.
fsfreeze
To suspend access to a file system, use the command # fsfreeze -f mount-point to freeze it
and # fsfreeze -u mount-point to unfreeze it. This halts access to the file system and creates
a stable image on disk.
NOTE
As demonstrated in Section 5.2, “Mounting an ext4 File System”, the tune2fs utility can also adjust
configurable file system parameters for ext2, ext3, and ext4 file systems. In addition, the following tools
are also useful in debugging and analyzing ext4 file systems:
debugfs
Debugs ext2, ext3, or ext4 file systems.
e2image
Saves critical ext2, ext3, or ext4 file system metadata to a file.
For more information about these utilities, refer to their respective man pages.
CHAPTER 6. BTRFS (TECHNOLOGY PREVIEW)
NOTE
Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.
Btrfs is a next generation Linux file system that offers advanced management, reliability, and scalability
features. It is unique in offering snapshots, compression, and integrated device management.
# mkfs.btrfs /dev/device
For more information on creating btrfs file systems with added devices and specifying multi-device
profiles for metadata and data, refer to Section 6.4, “Integrated Volume Management of Multiple
Devices”.
device=/dev/name
Appending this option to the mount command tells btrfs to scan the named device for a btrfs volume.
This is used to ensure the mount will succeed as attempting to mount devices that are not btrfs will
cause the mount to fail.
NOTE
This does not mean all devices will be added to the file system; it only scans them.
max_inline=number
Use this option to set the maximum amount of space (in bytes) that can be used to inline data within a
metadata B-tree leaf. The default is 8192 bytes. For 4k pages it is limited to 3900 bytes due to
additional headers that need to fit into the leaf.
alloc_start=number
Use this option to set where in the disk allocations start.
thread_pool=number
Use this option to set the number of worker threads allocated.
discard
Use this option to enable discard/TRIM on freed blocks.
noacl
Use this option to disable the use of ACLs.
space_cache
Use this option to store the free space data on disk to make caching a block group faster. This is a
persistent change and is safe to boot into old kernels.
nospace_cache
Use this option to disable the above space_cache.
clear_cache
Use this option to clear all the free space caches during mount. This is a safe option but will trigger
the space cache to be rebuilt. As such, leave the file system mounted in order to let the rebuild
process finish. This mount option is intended to be used once and only after problems are apparent
with the free space.
enospc_debug
This option is used to debug problems with "no space left".
recovery
Use this option to enable autorecovery upon mount.
NOTE
The unit size is not case sensitive; it accepts both G and g for GiB.
The command does not accept t for terabytes or p for petabytes. It only accepts k, m, and
g.
For example:
To enlarge a multi-device file system, the device to be enlarged must be specified. First, show all devices
that have a btrfs file system at a specified mount point:
For example:
Btrfs v3.16.2
Then, after identifying the devid of the device to be enlarged, use the following command:
For example:
NOTE
The amount can also be max instead of a specified amount. This will use all remaining
free space on the device.
For example:
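A sketch of the whole sequence (the mount point, devid 2, and the sizes are illustrative):
# btrfs filesystem show /mount-point
# btrfs filesystem resize 2:+200M /mount-point
# btrfs filesystem resize 2:max /mount-point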
To shrink a multi-device file system, the device to be shrunk must be specified. First, show all devices
that have a btrfs file system at a specified mount point:
For example:
Btrfs v3.16.2
Then, after identifying the devid of the device to be shrunk, use the following command:
For example:
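Again with an illustrative devid and size:
# btrfs filesystem show /mount-point
# btrfs filesystem resize 2:-200M /mount-point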
To set the file system size of a multi-device file system, the device to be changed must be specified.
First, show all devices that have a btrfs file system at the specified mount point:
For example:
Btrfs v3.16.2
Then, after identifying the devid of the device to be changed, use the following command:
For example:
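For example, to set the portion on devid 2 to an absolute size (illustrative values):
# btrfs filesystem resize 2:20G /mount-point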
raid0
raid1
raid10
dup
single
The -m single option specifies that no duplication of metadata is done. This may be desired when
using hardware RAID.
NOTE
Create a file system across four devices (metadata mirrored, data striped).
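For example (device names are placeholders; with the btrfs-progs releases this chapter targets, multiple devices default to mirrored metadata and striped data):
# mkfs.btrfs /dev/device1 /dev/device2 /dev/device3 /dev/device4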
Use the single option to use the full capacity of each drive when the drives are different sizes.
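For example:
# mkfs.btrfs -d single /dev/device1 /dev/device2 /dev/device3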
To add a new device to an already created multi-device file system, use the following command:
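In sketch form (the file system must be mounted, and the device and mount point are placeholders):
# btrfs device add /dev/device /mount-point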
After rebooting or reloading the btrfs module, use the btrfs device scan command to discover all
multi-device file systems. See Section 6.4.2, “Scanning for btrfs Devices” for more information.
The btrfs device add command is used to add new devices to a mounted file system.
The btrfs filesystem balance command balances (restripes) the allocated extents across all
existing devices.
First, create and mount a btrfs file system. Refer to Section 6.1, “Creating a btrfs File System” for
more information on how to create a btrfs file system, and to Section 6.2, “Mounting a btrfs file
system” for more information on how to mount a btrfs file system.
# mkfs.btrfs /dev/device1
# mount /dev/device1 /mount-point
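Then attach additional devices to the mounted file system (a sketch; the extra devices and mount point are placeholders):
# btrfs device add /dev/device2 /mount-point
# btrfs device add /dev/device3 /mount-point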
The metadata and data on these devices are still stored only on /dev/device1. It must now be
balanced to spread across all devices.
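Using the balance command mentioned above, for example:
# btrfs filesystem balance /mount-point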
Balancing a file system will take some time as it reads all of the file system's data and metadata and
rewrites it across the new device.
To convert an existing single device system, /dev/sdb1 in this case, into a two device, raid1 system
in order to protect against a single disk failure, use the following commands:
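A sketch of the sequence (the second device /dev/sdc1 and the mount point are illustrative):
# mount /dev/sdb1 /mnt
# btrfs device add /dev/sdc1 /mnt
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt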
IMPORTANT
If the metadata is not converted from the single-device default, it remains as DUP. This
does not guarantee that copies of the block are on separate devices. If the data is not
converted, it does not have any redundant copies at all.
The command btrfs device delete missing removes the first device that is described by the file
system metadata but not present when the file system was mounted.
IMPORTANT
It is not possible to go below the minimum number of devices required for the specific RAID
layout, even counting the missing one. It may be necessary to add a new device in order to
remove the failed one.
For example, for a raid1 layout with two devices, if a device fails it is required to:
1. Mount the file system in degraded mode (using the -o degraded mount option),
2. Add a new device, and
3. Remove the missing device.
If you do not have an initrd or it does not perform a btrfs device scan, it is possible to mount a multi-
volume btrfs file system by passing all the devices in the file system explicitly to the mount command.
Note that using universally unique identifiers (UUIDs) also works and is more stable than using device
paths.
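For example (device names are placeholders):
# mount -o device=/dev/sdb1,device=/dev/sdc1,device=/dev/sdd1 /dev/sdb1 /mount-point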
The first way is that mkfs.btrfs turns off metadata duplication on a single device when
/sys/block/device/queue/rotational is zero for the single specified device.
The second way is through a group of SSD mount options: ssd, nossd, and ssd_spread.
NOTE
The ssd mount option only enables the ssd option. Use the nossd option to disable it.
Some SSDs perform best when reusing block numbers often, while others perform much better when
clustering strictly allocates big chunks of unused space. By default, mount -o ssd will find groupings of
blocks where there are several free blocks that might have allocated blocks mixed in. The command
mount -o ssd_spread ensures there are no allocated blocks mixed in. This improves performance
on lower end SSDs.
NOTE
The ssd_spread option enables both the ssd and the ssd_spread options. Use the
nossd option to disable both of these options.
The ssd_spread option is never automatically set if none of the ssd options are provided
and any of the devices are non-rotational.
These options all need to be tested with your specific build to see whether their use improves or reduces
performance, as each combination of SSD firmware and application load is different.
The mkfs.btrfs(8) man page contains information on creating a btrfs file system, including all of the
options regarding it.
See the btrfsck(8) man page for information regarding fsck on btrfs systems.
CHAPTER 7. GLOBAL FILE SYSTEM 2 (GFS2)
GFS2 is based on 64-bit architecture, which can theoretically accommodate an 8 exabyte file system.
However, the current supported maximum size of a GFS2 file system is 100 TB. If a system requires
GFS2 file systems larger than 100 TB, contact your Red Hat service representative.
When determining the size of a file system, consider its recovery needs. Running the fsck command on
a very large file system can take a long time and consume a large amount of memory. Additionally, in the
event of a disk or disk-subsystem failure, recovery time is limited by the speed of backup media.
When configured in a Red Hat Cluster Suite, Red Hat GFS2 nodes can be configured and managed with
Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing
among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system namespace
across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way
that processes on the same node can share files on a local file system, with no discernible difference.
For information about the Red Hat Cluster Suite, see Red Hat's Cluster Administration guide.
A GFS2 file system must be built on a logical volume (created with LVM) that is a linear or mirrored volume. Logical
volumes created with LVM in a Red Hat Cluster suite are managed with CLVM (a cluster-wide
implementation of LVM), enabled by the CLVM daemon clvmd, and running in a Red Hat Cluster Suite
cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster,
allowing all nodes in the cluster to share the logical volumes. For information on the Logical Volume
Manager, see Red Hat's Logical Volume Manager Administration guide.
The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes.
For comprehensive information on the creation and configuration of GFS2 file systems in clustered and
non-clustered storage, see Red Hat's Global File System 2 guide.
CHAPTER 8. NETWORK FILE SYSTEM (NFS)
NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling
than the previous NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access
more than 2 GB of file data.
NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an
rpcbind service, supports ACLs, and utilizes stateful operations.
Red Hat Enterprise Linux fully supports NFS version 4.2 (NFSv4.2) since the Red Hat
Enterprise Linux 7.4 release.
The following are the features of NFSv4.2 in Red Hat Enterprise Linux 7.5:
Server-Side Copy: NFSv4.2 supports the copy_file_range() system call, which allows the NFS
client to efficiently copy data without wasting network resources.
Sparse Files: Enables files to contain one or more holes, which are unallocated or uninitialized
data blocks consisting only of zeroes; this improves storage efficiency. The lseek() operation
in NFSv4.2 supports seek_hole() and seek_data(), which allows applications to map out the
locations of holes in a sparse file.
Space Reservation: Permits storage servers to reserve free space, which prevents servers from
running out of space. NFSv4.2 supports the allocate() operation to reserve space, the
deallocate() operation to unreserve space, and the fallocate() operation to preallocate or
deallocate space in a file.
Labeled NFS: It enforces data access rights and enables SELinux labels between a client and a
server for individual files on an NFS file system.
Layout Enhancements: NFSv4.2 provides a new operation, layoutstats(), which the client can
use to notify the metadata server about its communication with the layout.
Versions of Red Hat Enterprise Linux earlier than 7.4 support NFS up to version 4.1.
The following are the features of NFSv4.1:
Enhances network performance and security, and also includes client-side support for Parallel
NFS (pNFS).
No longer requires a separate TCP connection for callbacks, which allows an NFS server to
grant delegations even when it cannot contact the client. For example, when NAT or a firewall
interferes.
It provides exactly once semantics (except for reboot operations), preventing a previous issue
whereby certain operations could return an inaccurate result if a reply was lost and the operation
was sent twice.
NFS clients attempt to mount using NFSv4.1 by default, and fall back to NFSv4.0 when the server does
not support NFSv4.1. The mount later falls back to NFSv3 when the server does not support NFSv4.0.
NOTE
All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with
NFSv4 requiring it. NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to
provide a stateless network connection between the client and server.
When using NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol
overhead than TCP. This can translate into better performance on very clean, non-congested networks.
However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to
saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire
RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these
reasons, TCP is the preferred protocol when connecting to an NFS server.
The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also
listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind [1],
lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server to set
up the exports, but is not involved in any over-the-wire operations.
NOTE
TCP is the default transport protocol for NFS version 3 under Red Hat Enterprise Linux.
UDP can be used for compatibility purposes as needed, but is not recommended for wide
usage. NFSv4 requires TCP.
All the RPC/NFS daemons have a '-p' command line option that can set the port,
making firewall configuration easier.
After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration
file to determine whether the client is allowed to access any exported file systems. Once verified, all file
and directory operations are available to the user.
IMPORTANT
In order for NFS to work with a default installation of Red Hat Enterprise Linux with a
firewall enabled, configure IPTables with the default TCP port 2049. Without proper
IPTables configuration, NFS will not function properly.
The NFS initialization script and rpc.nfsd process now allow binding to any specified
port during system start up. However, this can be error-prone if the port is unavailable, or
if it conflicts with another daemon.
Red Hat Enterprise Linux 7 uses a combination of kernel-level support and daemon processes to provide
NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers.
RPC services under Red Hat Enterprise Linux 7 are controlled by the rpcbind service. To share or
mount NFS file systems, the following services work together depending on which version of NFS is
implemented:
NOTE
The portmap service was used to map RPC program numbers to IP address port number
combinations in earlier versions of Red Hat Enterprise Linux. This service is now replaced
by rpcbind in Red Hat Enterprise Linux 7 to enable IPv6 support.
nfs
systemctl start nfs starts the NFS server and the appropriate RPC processes to service
requests for shared NFS file systems.
nfslock
systemctl start nfs-lock activates a mandatory service that starts the appropriate RPC
processes allowing NFS clients to lock files on the server.
rpcbind
rpcbind accepts port reservations from local RPC services. These ports are then made available (or
advertised) so the corresponding remote RPC services can access them. rpcbind responds to
requests for RPC services and sets up connections to the requested RPC service. This is not used
with NFSv4.
rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that
the requested NFS share is currently exported by the NFS server, and that the client is allowed to
access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and
provides the File-Handle for this NFS share back to the NFS client.
rpc.nfsd
rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works
with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads
each time an NFS client connects. This process corresponds to the nfs service.
lockd
lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock
Manager (NLM) protocol, which allows NFSv3 clients to lock files on the server. It is started
automatically whenever the NFS server is run and whenever an NFS file system is mounted.
rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients
when an NFS server is restarted without being gracefully brought down. rpc.statd is started
automatically by the nfslock service, and does not require user configuration. This is not used with
NFSv4.
rpc.rquotad
This process provides user quota information for remote users. rpc.rquotad is started
automatically by the nfs service and does not require user configuration.
rpc.idmapd
rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4
names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with
NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the "Domain" parameter
should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the
same as the DNS domain name, this parameter can be skipped. The client and server must agree on
the NFSv4 mapping domain for ID mapping to function properly.
NOTE
In Red Hat Enterprise Linux 7, only the NFSv4 server uses rpc.idmapd. The NFSv4
client uses the keyring-based idmapper nfsidmap. nfsidmap is a stand-alone
program that is called by the kernel on-demand to perform ID mapping; it is not a
daemon. Only if there is a problem with nfsidmap does the client fall back to using
rpc.idmapd. More information regarding nfsidmap can be found on the nfsidmap
man page.
8.2. PNFS
Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat
Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements
to performance. That is, when a server implements pNFS as well, a client is able to access data through
multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.
NOTE
The protocol allows for three possible pNFS layout types: files, objects, and blocks. While
the Red Hat Enterprise Linux 6.4 client only supported the files layout type, Red Hat
Enterprise Linux 7 supports the files layout type, with objects and blocks layout types
being included as a technology preview.
Red Hat Enterprise Linux can mount NFS shares from Flex Files servers since Red Hat Enterprise
Linux 7.4.
To mount an NFS share with the Flex Files feature from a server that supports Flex Files, use
NFS version 4.2 or later:
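For example (server and paths follow the placeholder convention used below):
# mount -t nfs -o vers=4.2 server:/remote/export /local/directory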
Additional Resources
For more information on pNFS, refer to: http://www.pnfs.com.
options
A comma-delimited list of mount options; for more information on valid NFS mount options, see
Section 8.5, “Common NFS Mount Options”.
server
The hostname, IP address, or fully qualified domain name of the server exporting the file system you
wish to mount
/remote/export
The file system or directory being exported from the server, that is, the directory you wish to mount
/local/directory
The client location where /remote/export is mounted
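Putting these together, a manual mount takes roughly the following shape (a sketch):
# mount -t nfs -o options server:/remote/export /local/directory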
The NFS protocol version used in Red Hat Enterprise Linux 7 is identified by the mount options
nfsvers or vers. By default, mount uses NFSv4 with mount -t nfs. If the server does not support
NFSv4, the client automatically steps down to a version supported by the server. If the nfsvers/vers
option is used to pass a particular version not supported by the server, the mount fails. The file system
type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o
nfsvers=4 host:/remote/export /local/directory.
If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red
Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the
/etc/fstab file and the autofs service. For more information, see Section 8.3.1, “Mounting NFS File
Systems Using /etc/fstab” and Section 8.4, “autofs”.
An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file.
The line must state the hostname of the NFS server, the directory on the server being exported, and the
directory on the local machine where the NFS share is to be mounted. You must be root to modify the
/etc/fstab file.
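For example, an entry of the following shape mounts an exported directory onto the local /pub mount point (the server name and export path are illustrative):
server.example.com:/usr/local/pub    /pub    nfs    defaults    0 0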
The mount point /pub must exist on the client machine before this command can be executed. After
adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount
point /pub is mounted from the server.
A valid /etc/fstab entry to mount an NFS export should contain the following information:
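In sketch form, the entry looks like:
server:/remote/export    /local/directory    nfs    options    0 0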
The variables server, /remote/export, /local/directory, and options are the same ones used when
manually mounting an NFS share. For more information, see Section 8.3, “Configuring NFS Client”.
NOTE
The mount point /local/directory must exist on the client before /etc/fstab is read.
Otherwise, the mount fails.
After editing /etc/fstab, regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
Additional Resources
8.4. AUTOFS
One drawback of using /etc/fstab is that, regardless of how infrequently a user accesses the NFS
mounted file system, the system must dedicate resources to keep the mounted file system in place. This
is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at
one time, overall system performance can be affected. An alternative to /etc/fstab is to use the
kernel-based automount utility. An automounter consists of two components:
a kernel module that implements a file system, and
a user-space daemon that performs all of the other functions.
The automount utility can mount and unmount NFS file systems automatically (on-demand mounting),
therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS,
CIFS, and local file systems.
IMPORTANT
The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File
System Client' groups. As such, it is no longer installed by default with the Base group.
Ensure that nfs-utils is installed on the system first before attempting to automount an
NFS share.
autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be
changed to use another supported network source and name using the autofs configuration (in
/etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An
instance of the autofs version 4 daemon was run for each mount point configured in the master map
and so it could be run manually from the command line for any given mount point. This is not possible
with autofs version 5, because it uses a single daemon to manage all configured mount points; as
such, all automounts must be configured in the master map. This is in line with the usual requirements of
other industry standard automounters. Mount point, hostname, exported directory, and options can all be
specified in a set of files (or other supported network sources) rather than configuring them manually for
each host.
For more information on the supported syntax of this file, see man nsswitch.conf. Not all NSS
databases are valid map sources and the parser will reject ones that are invalid. Valid sources are
files, yp, nis, nisplus, ldap, and hesiod.
Example 8.2. Multiple Master Map Entries per autofs Mount Point
Following is an example in the connectathon test maps for the direct mounts:
/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct
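Each master map entry uses the following general layout (a simplified sketch of the auto.master(5) format):
mount-point map-name options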
mount-point
The autofs mount point, /home, for example.
map-name
The name of a map source which contains a list of mount points, and the file system location from
which those mount points should be mounted.
options
If supplied, these apply to all entries in the given map provided they do not themselves have options
specified. This behavior is different from autofs version 4 where options were cumulative. This has
been changed to implement mixed environment compatibility.
The following is a sample line from /etc/auto.master file (displayed with cat
/etc/auto.master):
/home /etc/auto.misc
The general format of maps is similar to the master map, however the "options" appear between the
mount point and the location instead of at the end of the entry as in the master map:
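That is, in sketch form:
mount-point [options] location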
mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or
the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-
point) may be followed by a space separated list of offset directories (subdirectory names each
beginning with a /) making them what is known as a multi-mount entry.
options
Whenever supplied, these are the mount options for the map entries that do not specify their own
options.
location
This refers to the file system location such as a local file system path (preceded with the Sun map
format escape character ":" for map names beginning with /), an NFS file system or other valid file
system location.
The following is a sample of contents from a map file (for example, /etc/auto.misc):
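The sample below is a sketch consistent with the description that follows; the export paths are illustrative:
payroll -fstype=nfs personnel:/exports/payroll
sales -fstype=nfs personnel:/exports/sales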
The first column in a map file indicates the autofs mount point (sales and payroll from the server
called personnel). The second column indicates the options for the autofs mount while the third
column indicates the source of the mount. Following the given configuration, the autofs mount points will
be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not
needed for correct operation.
The automounter creates the directories if they do not exist. If the directories existed before the automounter
was started, the automounter will not remove them when it exits.
Using the given configuration, if a process requires access to an autofs unmounted directory such as
/home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a
timeout is specified, the directory is automatically unmounted if the directory is not accessed for the
timeout period.
To view the status of the automount daemon, use the following command:
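For example, using the autofs unit shipped with the autofs package:
# systemctl status autofs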
Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following
directive:
+auto.master
/home auto.home
beth fileserver.example.com:/export/home/beth
joe fileserver.example.com:/export/home/joe
* fileserver.example.com:/export/home/&
Given these conditions, let's assume that the client system needs to override the NIS map auto.home
and mount home directories from a different server. In this case, the client needs to use the following
/etc/auto.master map:
/home /etc/auto.home
+auto.master
* labserver.example.com:/export/home/&
Because the automounter only processes the first occurrence of a mount point, /home contains the
contents of /etc/auto.home instead of the NIS auto.home map.
Alternatively, to augment the site-wide auto.home map with just a few entries, create an
/etc/auto.home file map, and in it put the new entries. At the end, include the NIS auto.home map.
Then the /etc/auto.home file map looks similar to:
mydir someserver:/export/mydir
+auto.home
With these NIS auto.home map conditions, the ls /home command outputs:
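Given the maps above, the listing would be expected to contain:
beth joe mydir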
This last example works as expected because autofs does not include the contents of a file map of the
same name as the one it is reading. As such, autofs moves on to the next map source in the
nsswitch configuration.
The most recently established schema for storing automount maps in LDAP is described by
rfc2307bis. To use this schema it is necessary to set it in the autofs configuration
(/etc/sysconfig/autofs) by removing the comment characters from the schema definition. For
example:
DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"
Ensure that these are the only schema entries not commented in the configuration. The automountKey
replaces the cn attribute in the rfc2307bis schema. Following is an example of an LDAP Data
Interchange Format (LDIF) configuration:
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#
# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master
# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.master,dc=example,dc=com> with scope
subtree
# filter: (objectclass=automount)
# requesting: ALL
#
automountKey: /home
automountInformation: auto.home
# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#
# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home
# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.home,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#
# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&
Beyond mounting a file system with NFS on a remote host, it is also possible to specify other options at
mount time to make the mounted share easier to use. These options can be used with manual mount
commands, /etc/fstab settings, and autofs.
intr
Allows NFS requests to be interrupted if the server goes down or cannot be reached.
lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid
arguments for mode are all, none, or pos/positive.
nfsvers=version
Specifies which version of the NFS protocol to use, where version is 3 or 4. This is useful for hosts
that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by
the kernel and mount command.
The option vers is identical to nfsvers, and is included in this release for compatibility reasons.
noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat
Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible
with older systems.
nolock
Disables file locking. This setting is sometimes required when connecting to very old NFS servers.
noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a
non-Linux file system containing incompatible binaries.
nosuid
Disables set-user-identifier or set-group-identifier bits. This prevents remote users
from gaining higher privileges by running a setuid program.
port=num
Specifies the numeric value of the NFS server port. If num is 0 (the default value), then mount
queries the remote host's rpcbind service for the port number to use. If the remote host's NFS
daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is
used instead.
There is no fixed default value for rsize and wsize. By default, NFS uses the largest possible value
that both the server and the client support. In Red Hat Enterprise Linux 7, the client and server
maximum is 1,048,576 bytes. For more details, see the What are the default and maximum values for
rsize and wsize with NFS mounts? KBase article.
sec=mode
Its default setting is sec=sys, which uses local UNIX UIDs and GIDs. These use AUTH_SYS to
authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS
operations using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to
prevent traffic sniffing. This is the most secure setting, but it also involves the most performance
overhead.
tcp
Instructs the NFS mount to use the TCP protocol.
udp
Instructs the NFS mount to use the UDP protocol.
For servers that support NFSv2 or NFSv3 connections, the rpcbind[1] service must be running.
To verify that rpcbind is active, use the following command:
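For example, using systemctl:
# systemctl status rpcbind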
To configure an NFSv4-only server, which does not require rpcbind, see Section 8.7.7,
“Configuring an NFSv4-only Server”.
On Red Hat Enterprise Linux 7.0, if your NFS server exports NFSv3 and is enabled to start at
boot, you need to manually start and enable the nfs-lock service:
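A sketch using the nfs-lock unit name mentioned above:
# systemctl start nfs-lock
# systemctl enable nfs-lock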
On Red Hat Enterprise Linux 7.1 and later, nfs-lock starts automatically if needed, and an
attempt to enable it manually fails.
Procedures
To start an NFS server, use the following command:
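Using the nfs unit described in Section 8.1.1, this is typically:
# systemctl start nfs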
The restart option is a shorthand way of stopping and then starting NFS. This is the most
efficient way to make configuration changes take effect after editing the configuration file for
NFS. To restart the server type:
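Typically:
# systemctl restart nfs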
After you edit the /etc/sysconfig/nfs file, restart the nfs-config service by running the
following command for the new values to take effect:
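That is:
# systemctl restart nfs-config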
The try-restart command only starts nfs if it is currently running. This command is the
equivalent of condrestart (conditional restart) in Red Hat init scripts and is useful because it
does not start the daemon if NFS is not running.
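In sketch form:
# systemctl try-restart nfs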
To reload the NFS server configuration file without restarting the service type:
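For example:
# systemctl reload nfs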
Manually editing the NFS configuration file, that is, /etc/exports, and
Through the command line, that is, by using the command exportfs
The /etc/exports file controls which file systems are exported to remote hosts and specifies options.
It follows the following syntax rules:
Any lists of authorized hosts placed after an exported file system must be separated by space
characters.
Options for each of the hosts must be placed in parentheses directly after the host identifier,
without any spaces separating the host and the first parenthesis.
Each entry for an exported file system has the following structure:
export host(options)
export
The directory being exported
host
The host or network to which the export is being shared
options
The options to be used for host
It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the
same line as a space-delimited list, with each hostname followed by its respective options (in
parentheses), as in:
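For example, in general form:
export host1(options1) host2(options2) host3(options3)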
For information on different methods for specifying hostnames, see Section 8.7.5, “Hostname Formats”.
In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted
to access it, as in the following example:
/exported/directory bob.example.com
Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no
options are specified in this example, NFS uses default settings.
ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file
system. To allow hosts to make changes to the file system (that is, read and write), specify the rw
option.
sync
The NFS server will not reply to requests before changes made by previous requests are written to
disk. To enable asynchronous writes instead, specify the option async.
wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can
improve performance as it reduces the number of times the disk must be accessed by separate write
commands, thereby reducing write overhead. To disable this, specify the no_wdelay. no_wdelay is
only available if the default sync option is also specified.
root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges;
instead, the NFS server assigns them the user ID nfsnobody. This effectively "squashes" the power
of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote
server. To disable root squashing, specify no_root_squash.
To squash every remote user (including root), use all_squash. To specify the user and group IDs that
the NFS server should assign to remote users from a particular host, use the anonuid and anongid
options, respectively, as in:
export host(anonuid=uid,anongid=gid)
Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid
options allow you to create a special user and group account for remote NFS users to share.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable
this feature, specify the no_acl option when exporting the file system.
Each default for every exported file system must be explicitly overridden. For example, if the rw option is
not specified, then the exported file system is shared as read-only. The following is a sample line from
/etc/exports which overrides two default options:
/another/exported/directory 192.168.0.3(rw,async)
In this example 192.168.0.3 can mount /another/exported/directory/ read and write and all
writes to disk are asynchronous. For more information on exporting options, see man exportfs.
Other options are available where no default value is specified. These include the ability to disable sub-
tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early
NFS client implementations). For more information on these less-used options, see man exports.
IMPORTANT
The format of the /etc/exports file is very precise, particularly in regards to use of the
space character. Remember to always separate exported file systems from hosts and
hosts from one another with a space character. However, there should be no other space
characters in the file except on comment lines.
For example, the following two lines do not mean the same thing:
/home bob.example.com(rw)
/home bob.example.com (rw)
The first line allows only users from bob.example.com read and write access to the
/home directory. The second line allows users from bob.example.com to mount the
directory as read-only (the default), while the rest of the world can mount it read/write.
Every file system being exported to remote users with NFS, as well as the access level for those file
systems, are listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs
command launches and reads this file, passes control to rpc.mountd (if NFSv3) for the actual mounting
process, then to rpc.nfsd where the file systems are then available to remote users.
When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export
or unexport directories without restarting the NFS service. When given the proper options, the
/usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since
rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list
of exported file systems take effect immediately.
-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in
/var/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to
/etc/exports.
-a
Causes all directories to be exported or unexported, depending on what other options are passed to
/usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file
systems specified in /etc/exports.
-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with
additional file systems to be exported. These file systems must be formatted in the same way they
are specified in /etc/exports. This option is often used to test an exported file system before
adding it permanently to the list of file systems to be exported. For more information on
/etc/exports syntax, see Section 8.7.1, “The /etc/exports Configuration File”.
-i
Ignores /etc/exports; only options given from the command line are used to define exported file
systems.
-u
Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file
sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.
-v
Verbose operation, where the file systems being exported or unexported are displayed in greater
detail when the exportfs command is executed.
If no options are passed to the exportfs command, it displays a list of currently exported file systems.
For more information about the exportfs command, see man exportfs.
In Red Hat Enterprise Linux 7, no extra steps are required to configure NFSv4 exports as any filesystems
mentioned are automatically available to NFSv3 and NFSv4 clients using the same path. This was not
the case in previous versions.
The /etc/sysconfig/nfs file does not exist by default on all systems. If /etc/sysconfig/nfs
does not exist, create it and specify the following:
RPCMOUNTDOPTS="-p port"
This adds "-p port" to the rpc.mountd command line: rpc.mountd -p port.
To specify the ports to be used by the nlockmgr service, set the port number for the nlm_tcpport and
nlm_udpport options in the /etc/modprobe.d/lockd.conf file.
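For example, a line of the following shape (the port numbers here are illustrative, not defaults):
options lockd nlm_tcpport=32803 nlm_udpport=32769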
If NFS fails to start, check /var/log/messages. Commonly, NFS fails to start if you specify a port
number that is already in use. After editing /etc/sysconfig/nfs, you need to restart the nfs-
config service for the new values to take effect in Red Hat Enterprise Linux 7.2 and prior by running:
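For example:
# systemctl restart nfs-config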
NOTE
This process is not needed for NFSv4.1 or higher, and the other ports for mountd,
statd, and lockd are not required in a pure NFSv4 environment.
There are two ways to discover which file systems an NFS server exports.
$ showmount -e myserver
Export list for myserver
/exports/foo
/exports/bar
On any server that supports NFSv4, mount the root directory and look around.
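For example, reusing the myserver host from the previous listing; the exports, foo, and bar entries mirror the showmount output above:
# mount myserver:/ /mnt/
# ls /mnt/
exports
# ls /mnt/exports/
foo
bar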
On servers that support both NFSv4 and NFSv3, both methods work and give the same results.
NOTE
On NFS servers older than Red Hat Enterprise Linux 6, depending on how they are
configured, it is possible that file systems are exported to NFSv4 clients at different paths.
Because these servers do not enable NFSv4 by default, this should not be a problem.
Note that rpc-rquotad is, if enabled, started automatically after starting the nfs-server
service.
3. To make the quota RPC service accessible behind a firewall, UDP or TCP port 875 needs to be
open. The default port number is defined in the /etc/services file.
You can override the default port number by appending -p port-number to the
RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.
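For example, to move rpc.rquotad to an alternative port (the number is illustrative):
RPCRQUOTADOPTS="-p 876"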
Single machine
A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved
by the server), or an IP address.
IP networks
Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example
192.168.0.0/24). Another acceptable format is a.b.c.d/netmask, where a.b.c.d is the network and
netmask is the netmask (for example, 192.168.100.8/255.255.255.0).
Netgroups
Use the format @group-name, where group-name is the NIS netgroup name.
Note that with earlier kernel versions, a system reboot is needed after editing
/etc/rdma/rdma.conf for the changes to take effect.
When your NFS server is configured as NFSv4-only, clients attempting to mount shares using NFSv2 or
NFSv3 fail with an error like the following:
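The error from the mount.nfs helper typically reads:
mount.nfs: Requested NFS version or transport protocol is not supported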
To configure your NFS server to support only NFS version 4.0 and later:
1. Disable NFSv2, NFSv3, and UDP by adding the following line to the /etc/sysconfig/nfs
configuration file:
RPCNFSDARGS="-N 2 -N 3 -U"
2. Optionally, disable listening for the RPCBIND, MOUNT, and NSM protocol calls, which are not
necessary in the NFSv4-only case.
Clients that attempt to mount shares from your server using NFSv2 or NFSv3 become
unresponsive.
The NFS server itself is unable to mount NFSv2 and NFSv3 file systems.
RPCMOUNTDOPTS="-N 2 -N 3"
The changes take effect as soon as you start or restart the NFS server.
You can verify that your NFS server is configured in the NFSv4-only mode by using the netstat utility.
The following is an example netstat output on an NFSv4-only server; listening for RPCBIND,
MOUNT, and NSM is also disabled. Here, nfs is the only listening NFS service:
# netstat -ltu
In comparison, the netstat output before configuring an NFSv4-only server includes the
sunrpc and mountd services:
# netstat -ltu
First, the server restricts which hosts are allowed to mount which file systems either by IP address or by
host name.
Second, the server enforces file system permissions for users on NFS clients in the same way it does
local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX), which relies on the client
to state the UID and GIDs of the user. Be aware that this means a malicious or misconfigured client can
easily get this wrong and allow a user access to files that it should not.
To limit the potential risks, administrators often allow read-only access or squash user permissions to a
common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the
way it was originally intended.
Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file
system, the system associated with a particular hostname or fully qualified domain name can be pointed
to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount
the NFS share, since no username or password information is exchanged to provide additional security
for the NFS mount.
Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the
scope of the wildcard to encompass more systems than intended.
It is also possible to restrict access to the rpcbind[1] service with TCP wrappers. Creating rules with
iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.
For more information on securing NFS and rpcbind, refer to man iptables.
NFSv4 revolutionized NFS security by mandating the implementation of RPCSEC_GSS and the
Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are
also available for all versions of NFS. In FIPS mode, only FIPS-approved algorithms can be used.
Unlike AUTH_SYS, with the RPCSEC_GSS Kerberos mechanism, the server does not depend on the
client to correctly represent which user is accessing the file. Instead, cryptography is used to
authenticate users to the server, which prevents a malicious client from impersonating a user without
having that user's Kerberos credentials. Using the RPCSEC_GSS Kerberos mechanism is the most
straightforward way to secure mounts because after configuring Kerberos, no additional setup is
needed.
Configuring Kerberos
Before configuring an NFSv4 Kerberos-aware server, you need to install and configure a Kerberos Key
Distribution Centre (KDC). Kerberos is a network authentication system that allows clients and servers to
authenticate to each other by using symmetric encryption and a trusted third party, the KDC. Red Hat
recommends using Identity Management (IdM) for setting up Kerberos.
Procedure 8.3. Configuring an NFS Server and Client for IdM to Use RPCSEC_GSS
1. Create the host/hostname.domain@REALM principal on both the server and the client
side.
Add the corresponding keys to keytabs for the client and server.
For instructions, see the Adding and Editing Service Entries and Keytabs and Setting up a
Kerberos-aware NFS Server sections in the Red Hat Enterprise Linux 7 Linux Domain Identity,
Authentication, and Policy Guide.
2. On the server side, use the sec= option to enable the wanted security flavors. To enable all
security flavors as well as non-cryptographic mounts:
/export *(sec=sys:krb5:krb5i:krb5p)
3. On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p, depending on the setup) to
the mount options:
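For example (the export path and mount point are placeholders):
# mount -o sec=krb5 server:/export /mnt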
For information on how to configure an NFS client, see the Setting up a Kerberos-aware NFS
Client section in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and
Policy Guide.
Although Red Hat recommends using IdM, Active Directory (AD) Kerberos servers are also supported.
For details, see the following Red Hat Knowledgebase article: How to set up NFS using Kerberos
authentication on RHEL 7 using SSSD and Active Directory.
For more information, see the exports(5) and nfs(5) manual pages, and Section 8.5, “Common NFS
Mount Options”.
For further information on the RPCSEC_GSS framework, including how gssproxy and rpc.gssd inter-
operate, see the GSSD flow description.
NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model,
because of the Microsoft Windows NT model's features and wide deployment.
Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting
file systems. The MOUNT protocol presented a security risk because of the way the protocol processed
file handles.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat
recommends that this feature is kept enabled.
By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone
accessing the NFS share as the root user on their local machine to nobody. Root squashing is
controlled by the default option root_squash; for more information about this option, refer to
Section 8.7.1, “The /etc/exports Configuration File”. If possible, never disable root squashing.
When exporting an NFS share as read-only, consider using the all_squash option. This option makes
every user accessing the exported file system take the user ID of the nfsnobody user.
NOTE
The following section only applies to NFSv3 implementations that require the rpcbind
service for backward compatibility.
For information on how to configure an NFSv4-only server, which does not need
rpcbind, see Section 8.7.7, “Configuring an NFSv4-only Server”.
The rpcbind[1] utility maps RPC services to the ports on which they listen. RPC processes notify
rpcbind when they start, registering the ports they are listening on and the RPC program numbers they
expect to serve. The client system then contacts rpcbind on the server with a particular RPC program
number. The rpcbind service redirects the client to the proper port number so it can communicate with
the requested service.
Because RPC-based services rely on rpcbind to make all connections with incoming client requests,
rpcbind must be available before any of these services start.
The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind
affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the
NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the
precise syntax for these rules.
Because rpcbind[1] provides coordination between RPC services and the port numbers used to
communicate with them, it is useful to view the status of current RPC services using rpcbind when
troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC
program number, a version number, and an IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for rpcbind, use the following
command:
# rpcinfo -p
If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from
clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output,
restarting NFS causes the service to correctly register with rpcbind and begin working.
For more information and a list of options on rpcinfo, see its man page.
Installed Documentation
man mount — Contains a comprehensive look at mount options for both NFS server and client
configurations.
man fstab — Provides detail for the format of the /etc/fstab file used to mount file systems
at boot-time.
man nfs — Provides details on NFS-specific file system export and mount options.
man exports — Shows common options used in the /etc/exports file when exporting NFS
file systems.
Useful Websites
http://linux-nfs.org — The current site for developers where project status updates can be
viewed.
http://nfs.sourceforge.net/ — The old home for developers which still contains a lot of useful
information.
Related Books
Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates
— Makes an excellent reference guide for the many different NFS export and mount options
available.
[1] The rpcbind service replaces portmap, which was used in previous versions of Red Hat Enterprise Linux to
map RPC program numbers to IP address port number combinations. For more information, refer to Section 8.1.1,
“Required Services”.
CHAPTER 9. SERVER MESSAGE BLOCK (SMB)
NOTE
In the context of SMB, you sometimes read about the Common Internet File System
(CIFS) protocol, which is a dialect of SMB. Both the SMB and CIFS protocols are
supported, and the kernel module and utilities involved in mounting SMB and CIFS shares
both use the name cifs.
Among other features, the kernel module and utilities can set and display Access Control Lists (ACLs) in a
security descriptor on SMB and CIFS shares. The following SMB protocol versions are supported:
SMB 1
SMB 2.0
SMB 2.1
SMB 3.0
NOTE
Depending on the protocol version, not all SMB features are implemented.
Samba uses the CAP_UNIX capability bit in the SMB protocol to provide the UNIX extensions feature.
These extensions are also supported by the cifs.ko kernel module. However, both Samba and the
kernel module support UNIX extensions only in the SMB 1 protocol.
1. Set the server min protocol option in the [global] section in the
/etc/samba/smb.conf file to NT1. This is the default on Samba servers.
2. Mount the share using the SMB 1 protocol by providing the -o vers=1.0 option to the mount
command. For example:
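The command takes roughly the following form; the server, share, and user names are placeholders:
# mount -t cifs -o vers=1.0,username=user_name //server_name/share_name /mnt/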
By default, the kernel module uses SMB 2 or the highest later protocol version supported by the
server. Passing the -o vers=1.0 option to the mount command forces the kernel module to use
the SMB 1 protocol, which is required for using UNIX extensions.
To verify if UNIX extensions are enabled, display the options of the mounted share:
# mount
...
//server/share on /mnt type cifs (...,unix,...)
If the unix entry is displayed in the list of mount options, UNIX extensions are enabled.
In the -o options parameter, you can specify options that will be used to mount the share. For details,
see Section 9.2.6, “Frequently Used Mount Options” and the OPTIONS section in the mount.cifs(8) man
page.
IMPORTANT
To enable the system to mount a share automatically, you must store the user name,
password, and domain name in a credentials file. For details, see Section 9.2.4,
“Authenticating To an SMB Share Using a Credentials File”.
In the fourth field of the /etc/fstab file, specify mount options, such as the path to the credentials file.
For details, see Section 9.2.6, “Frequently Used Mount Options” and the OPTIONS section in the
mount.cifs(8) man page.
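An entry similar to the following mounts the share at boot time; the server name, share name, and credentials file path are illustrative:
//server_name/share_name  /mnt  cifs  credentials=/root/smb.cred  0 0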
To verify that the entry works, mount the share:
# mount /mnt/
1. Create a file, such as ~/smb.cred, and specify the user name, password, and domain name
that file:
username=user_name
password=password
domain=domain_name
2. Set the permissions to only allow the owner to access the file:
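For example, using chmod:
# chmod 600 ~/smb.cred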
You can now pass the credentials=file_name mount option to the mount utility or use it in the
/etc/fstab file to mount the share without being prompted for the user name and password.
However, in certain situations, the administrator wants to mount a share automatically when the system
boots, but users should perform actions on the share's content using their own credentials. The
multiuser mount option lets you configure this scenario.
IMPORTANT
To use multiuser, you must additionally set the sec=security_type mount option to
a security type which supports providing credentials in a non-interactive way, such as
krb5 or the ntlmssp option with a credentials file. See the section called “Accessing a
Share as a User”.
The root user mounts the share using the multiuser option and an account that has minimal access
to the contents of the share. Regular users can then provide their user name and password to the current
session's kernel keyring using the cifscreds utility. If the user accesses the content of the mounted
share, the kernel uses the credentials from the kernel keyring instead of the one initially used to mount
the share.
To mount a share automatically with the multiuser option when the system boots:
Procedure 9.2. Creating an /etc/fstab File Entry with the multiuser Option
1. Create the entry for the share in the /etc/fstab file. For example:
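An entry similar to the following could be used; the names, mount point, and credentials file path are illustrative:
//server_name/share_name  /mnt  cifs  multiuser,sec=ntlmssp,credentials=/root/smb.cred  0 0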
2. Mount the share to verify that the entry works:
# mount /mnt/
If you do not want to mount the share automatically when the system boots, mount it manually by
passing -o multiuser,sec=security_type to the mount command. For details about mounting an
SMB share manually, see Section 9.2.2, “Manually Mounting an SMB Share”.
To verify that a share is mounted with the multiuser option, display the mount options:
# mount
...
//server_name/share_name on /mnt type cifs (sec=ntlmssp,multiuser,...)
If an SMB share is mounted with the multiuser option, users can provide their credentials for the
server to the kernel's keyring:
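A sketch of the command, with the user and server names as placeholders; cifscreds prompts for the password:
# cifscreds add -u user_name server_name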
Now, when the user performs operations in the directory that contains the mounted SMB share, the
server applies the file system permissions for this user, instead of the one initially used when the share
was mounted.
NOTE
Multiple users can perform operations using their own credentials on the mounted share
at the same time.
The mount options determine, among other things:
How the connection will be established with the server. For example, which SMB protocol
version is used when connecting to the server.
How the share will be mounted into the local file system. For example, if the system overrides
the remote file and directory permissions to enable multiple local users to access the content on
the server.
To set multiple options in the fourth field of the /etc/fstab file or in the -o parameter of a mount
command, separate them with commas. For example, see Procedure 9.2, “Creating an /etc/fstab
File Entry with the multiuser Option”.
Option Description
credentials=file_name Sets the path to the credentials file. See Section 9.2.4, “Authenticating To an
SMB Share Using a Credentials File”.
dir_mode=mode Sets the directory mode if the server does not support CIFS UNIX
extensions.
file_mode=mode Sets the file mode if the server does not support CIFS UNIX extensions.
password=password Sets the password used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.
seal Enables encryption support for connections using SMB 3.0 or a later
protocol version. Therefore, use seal together with the vers mount option
set to 3.0 or later. See Example 9.1, “Mounting a Share Using an
Encrypted SMB 3.0 Connection”.
sec=security_mode Sets the security mode, such as ntlmsspi, to enable NTLMv2 password
hashing and packet signing. For a list of supported values, see the
option's description in the mount.cifs(8) man page.
If the server does not support the ntlmv2 security mode, use
sec=ntlmssp, which is the default. For security reasons, do not use the
insecure ntlm security mode.
username=user_name Sets the user name used to authenticate to the SMB server. Alternatively,
specify a credentials file using the credentials option.
vers=SMB_protocol_version Sets the SMB protocol version used for the communication with the server.
For a complete list, see the OPTIONS section in the mount.cifs(8) man page.
CHAPTER 10. FS-CACHE
FS-Cache does not alter the basic operation of a file system that works over the network - it merely
provides that file system with a persistent place in which it can cache data. For instance, a client can still
mount an NFS share whether or not FS-Cache is enabled. In addition, cached NFS can handle files that
won't fit into the cache (whether individually or collectively) as files can be partially cached and do not
have to be read completely up front. FS-Cache also hides all I/O errors that occur in the cache from the
client file system driver.
To provide caching services, FS-Cache needs a cache back end. A cache back end is a storage driver
configured to provide caching services (i.e. cachefiles). In this case, FS-Cache requires a mounted
block-based file system that supports bmap and extended attributes (e.g. ext3) as its cache back end.
FS-Cache cannot arbitrarily cache any file system, whether through the network or otherwise: the shared
file system's driver must be altered to allow interaction with FS-Cache, data storage/retrieval, and
metadata setup and validation. FS-Cache needs indexing keys and coherency data from the cached file
system to support persistence: indexing keys to match file system objects to cache objects, and
coherency data to determine whether the cache objects are still valid.
NOTE
In Red Hat Enterprise Linux 7, the cachefilesd package is not installed by default and
needs to be installed manually.
For example, using FS-Cache to cache an NFS share between two computers over an otherwise
unladen GigE network will not demonstrate any performance improvements on file access: NFS
requests would be satisfied faster from server memory than from the local disk.
The use of FS-Cache, therefore, is a compromise between various factors. If FS-Cache is being used to
cache NFS traffic, for instance, it may slow the client down a little, but massively reduce the network and
server loading by satisfying read requests locally without consuming network bandwidth.
The first setting to configure in a cache back end is which directory to use as a cache. To configure this,
add the following parameter to /etc/cachefilesd.conf:
dir /path/to/cache
By default, /etc/cachefilesd.conf uses the following directory:
dir /var/cache/fscache
If you want to change the cache back end directory, the SELinux context must be the same as for
/var/cache/fscache:
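With SELinux in enforcing mode, commands similar to the following can be used; the cache path is a placeholder:
# semanage fcontext -a -e /var/cache/fscache /path/to/cache
# restorecon -Rv /path/to/cache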
NOTE
If the given commands for setting selinux context did not work, use the following
commands:
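As a sketch, the context type can instead be set directly on the new directory; the cachefiles_var_t type is assumed from the default policy and the path is a placeholder:
# semanage fcontext -a -t cachefiles_var_t "/path/to/cache(/.*)?"
# restorecon -Rv /path/to/cache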
FS-Cache will store the cache in the file system that hosts /path/to/cache. On a laptop, it is
advisable to use the root file system (/) as the host file system, but for a desktop machine it would be
more prudent to mount a disk partition specifically for the cache.
File systems that support functionalities required by FS-Cache cache back end include the Red Hat
Enterprise Linux 7 implementations of the following file systems:
ext4
Btrfs
XFS
The host file system must support user-defined extended attributes; FS-Cache uses these attributes to
store coherency maintenance information. To enable user-defined extended attributes for ext3 file
systems (i.e. device), use:
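For example, with device as a placeholder for the block device:
# tune2fs -o user_xattr /dev/device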
Alternatively, extended attributes for a file system can be enabled at mount time, as in:
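A sketch of such a mount, with the device and cache directory as placeholders:
# mount /dev/device /path/to/cache -o user_xattr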
The cache back end works by maintaining a certain amount of free space on the partition hosting the
cache. It grows and shrinks the cache in response to other elements of the system using up free space,
making it safe to use on the root file system (for example, on a laptop). FS-Cache sets defaults on this
behavior, which can be configured via cache cull limits. For more information about configuring cache
cull limits, refer to Section 10.4, “Setting Cache Cull Limits”.
To configure cachefilesd to start at boot time, execute the following command as root:
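On Red Hat Enterprise Linux 7, cachefilesd is a systemd service, so the command looks like:
# systemctl enable cachefilesd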
All access to files under /mount/point will go through the cache, unless the file is opened for direct I/O
or writing. For more information, see Section 10.3.2, “Cache Limitations with NFS”. NFS indexes cache
contents using the NFS file handle, not the file name, which means hard-linked files share the cache
correctly.
Caching is supported in version 2, 3, and 4 of NFS. However, each version uses different branches for
caching.
To avoid coherency management problems between superblocks, all NFS superblocks that wish to
cache data have unique Level 2 keys. Normally, two NFS mounts with the same source volume and options
share a superblock, and thus share the caching, even if they mount different directories within that
volume.
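For example, a pair of mounts similar to the following would normally share one superblock and therefore one cache:
mount home0:/disk0/fred /home/fred -o fsc
mount home0:/disk0/jim /home/jim -o fsc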
Here, /home/fred and /home/jim likely share the superblock as they have the same options,
especially if they come from the same volume/partition on the NFS server (home0). Now, consider
the next two subsequent mount commands:
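For instance, giving the mounts different network parameters; the rsize values here are only illustrative:
mount home0:/disk0/fred /home/fred -o fsc,rsize=230
mount home0:/disk0/jim /home/jim -o fsc,rsize=231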
In this case, /home/fred and /home/jim will not share the superblock as they have different
network access parameters, which are part of the Level 2 key. The same goes for the following mount
sequence:
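For example, mounting the same export twice with different parameters, so that two separate superblocks are created (values are illustrative):
mount home0:/disk0/fred /home/fred1 -o fsc,rsize=230
mount home0:/disk0/fred /home/fred2 -o fsc,rsize=231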
Here, the contents of the two subtrees (/home/fred1 and /home/fred2) will be cached twice.
Another way to avoid superblock sharing is to suppress it explicitly with the nosharecache
parameter. Using the same example:
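With the same exports as above, the mounts would look similar to:
mount home0:/disk0/fred /home/fred -o nosharecache,fsc
mount home0:/disk0/jim /home/jim -o nosharecache,fsc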
However, in this case only one of the superblocks is permitted to use cache since there is nothing to
distinguish the Level 2 keys of home0:/disk0/fred and home0:/disk0/jim. To address this,
add a unique identifier on at least one of the mounts, i.e. fsc=unique-identifier. For example:
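A sketch of such a pair of mounts:
mount home0:/disk0/fred /home/fred -o nosharecache,fsc
mount home0:/disk0/jim /home/jim -o nosharecache,fsc=jim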
Here, the unique identifier jim is added to the Level 2 key used in the cache for /home/jim.
Opening a file from a shared file system for writing will not work on NFS version 2 and 3. The
protocols of these versions do not provide sufficient coherency management information for the
client to detect a concurrent write to the same file from another client.
Opening a file from a shared file system for either direct I/O or writing flushes the cached copy of
the file. FS-Cache will not cache the file again until it is no longer opened for direct I/O or writing.
Furthermore, this release of FS-Cache only caches regular NFS files. FS-Cache will not cache
directories, symlinks, device files, FIFOs and sockets.
Cache culling is done on the basis of the percentage of blocks and the percentage of files available in the
underlying file system. There are six limits controlled by settings in /etc/cachefilesd.conf: brun and frun
(above these limits, culling is turned off), bcull and fcull (below these limits, culling begins), and bstop
and fstop (below these limits, no further allocation of disk space or files is permitted).
If the amount of available space or the number of available files in the cache falls below either of
these limits, then no further allocation of disk space or files is permitted until culling has raised things
above these limits again.
brun/frun - 10%
bcull/fcull - 7%
bstop/fstop - 3%
These are the percentages of available space and available files and do not appear as 100 minus the
percentage displayed by the df program.
IMPORTANT
Culling depends on both bxxx and fxxx pairs simultaneously; they cannot be treated
separately.
To view statistical information about FS-Cache, use:
# cat /proc/fs/fscache/stats
FS-Cache statistics include information on decision points and object counters. For more information,
see the following kernel document:
/usr/share/doc/kernel-
doc-version/Documentation/filesystems/caching/fscache.txt
/usr/share/doc/cachefilesd-version-number/README
/usr/share/man/man5/cachefilesd.conf.5.gz
/usr/share/man/man8/cachefilesd.8.gz
For general information about FS-Cache, including details on its design constraints, available statistics,
and capabilities, see the following kernel document: /usr/share/doc/kernel-
doc-version/Documentation/filesystems/caching/fscache.txt
PART II. STORAGE ADMINISTRATION
CHAPTER 11. STORAGE CONSIDERATIONS DURING INSTALLATION
This chapter discusses several considerations when planning a storage configuration for your system.
For installation instructions (including storage configuration during installation), see the Installation Guide
provided by Red Hat.
For information on what Red Hat officially supports with regards to size and storage limits, see the article
http://www.redhat.com/resourcelibrary/articles/articles-red-hat-enterprise-linux-6-technology-capabilities-
and-limits.
For zFCP devices, you must list the device number, logical unit number (LUN), and world wide port name
(WWPN). Once the zFCP device is initialized, it is mapped to a CCW path. The FCP_x= lines on the boot
command line (or in a CMS configuration file) allow you to specify this information for the installer.
WARNING
Removing/deleting RAID metadata from disk could potentially destroy any stored
data. Red Hat recommends that you back up your data before proceeding.
To delete RAID metadata from the disk, use the following command:
dmraid -r -E /device/
For more information about managing RAID devices, see man dmraid and Chapter 18, Redundant
Array of Independent Disks (RAID).
DASD
Direct-access storage devices (DASD) cannot be added or configured during installation. Such devices
are specified in the CMS configuration file.
With DIF/DIX-enabled block devices, data in a buffered write can be overwritten after the integrity
checksum has been computed but before the write completes. This causes the I/O to later fail with a
checksum error. This problem is common to all block device (or file system-based) buffered I/O or
mmap(2) I/O, so it is not possible to work around these errors caused by overwrites.
As such, block devices with DIF/DIX enabled should only be used with applications that use O_DIRECT.
Such applications should use the raw block device. Alternatively, it is also safe to use the XFS file
system on a DIF/DIX enabled block device, as long as only O_DIRECT I/O is issued through the file
system. XFS is the only file system that does not fall back to buffered I/O when doing certain allocation
operations.
The responsibility for ensuring that the I/O data does not change after the DIF/DIX checksum has been
computed always lies with the application, so only applications designed for use with O_DIRECT I/O and
DIF/DIX hardware should use DIF/DIX.
CHAPTER 12. FILE SYSTEM CHECK
NOTE
These file system checkers only guarantee metadata consistency across the file system;
they have no awareness of the actual data contained within the file system and are not
data recovery tools.
File system inconsistencies can occur for various reasons, including but not limited to hardware errors,
storage administration errors, and software bugs.
Before modern metadata-journaling file systems became common, a file system check was required any
time a system crashed or lost power. This was because a file system update could have been
interrupted, leading to an inconsistent state. As a result, a file system check is traditionally run on each
file system listed in /etc/fstab at boot-time. For journaling file systems, this is usually a very short
operation, because the file system's metadata journaling ensures consistency even after a crash.
However, there are times when a file system inconsistency or corruption may occur, even for journaling
file systems. When this happens, the file system checker must be used to repair the file system. The
following provides best practices and other useful information when performing this procedure.
IMPORTANT
It is possible to disable the file system check at boot time by setting the sixth field in /etc/fstab to 0.
However, Red Hat does not recommend this unless the machine does not boot, the file system is
extremely large, or the file system is on remote storage.
Dry run
Most file system checkers have a mode of operation which checks but does not repair the file system.
In this mode, the checker prints any errors that it finds and actions that it would have taken, without
actually modifying the file system.
NOTE
A metadata image, a sparse copy of the file system, contains only metadata. Because file system checkers operate only on metadata, such an image can
be used to perform a dry run of an actual file system repair, to evaluate what changes would actually
be made. If the changes are acceptable, the repair can then be performed on the file system itself.
NOTE
Severely damaged file systems may cause problems with metadata image creation.
Disk errors
File system check tools cannot repair hardware problems. A file system must be fully readable and
writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the
file system must first be moved to a good disk, for example with the dd(8) utility.
A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and
for ext4 file systems without a journal.
For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the
binary exits. This is the default action as journal replay ensures a consistent file system after a crash.
If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file
system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a
full check after replaying the journal (if present).
e2fsck may ask for user input during the run if the -p option is not specified. The -p option tells
e2fsck to automatically perform all repairs that can be done safely. If user intervention is required, e2fsck
indicates the unfixed problem in its output and reflects this status in the exit code.
-n
Open the file system read-only and assume an answer of "no" to all questions; check the file system
without making any changes.
-b superblock
Specify the block number of an alternate superblock if the primary one is damaged.
-f
Force full check even if the superblock has no recorded errors.
-j journal-dev
Specify the external journal device, if any.
-p
Automatically repair or "preen" the file system with no user input.
-y
Assume an answer of "yes" to all questions.
All options for e2fsck are specified in the e2fsck(8) manual page.
The following five basic phases are performed by e2fsck while running:
1. Inode, block, and size checks.
2. Directory structure checks.
3. Directory connectivity checks.
4. Reference count checks.
5. Group summary info checks.
The e2image(8) utility can be used to create a metadata image prior to repair for diagnostic or testing
purposes. The -r option should be used for testing purposes in order to create a sparse file of the same
size as the file system itself. e2fsck can then operate directly on the resulting file. The -Q option should
be specified if the image is to be archived or provided for diagnostics. This creates a more compact file
format suitable for transfer.
12.2.2. XFS
No repair is performed automatically at boot time. To initiate a file system check or repair, use the
xfs_repair tool.
NOTE
Although an fsck.xfs binary is present in the xfsprogs package, this is present only to
satisfy initscripts that look for an fsck.file_system binary at boot time. fsck.xfs
immediately exits with an exit code of 0.
Older xfsprogs packages contain an xfs_check tool. This tool is very slow and does not
scale well for large file systems. As such, it has been deprecated in favor of xfs_repair
-n.
A clean log on a file system is required for xfs_repair to operate. If the file system was not cleanly
unmounted, it should be mounted and unmounted prior to using xfs_repair. If the log is corrupt and
cannot be replayed, the -L option may be used to zero the log.
IMPORTANT
The -L option must only be used if the log cannot be replayed. The option discards all
metadata updates in the log and results in further inconsistencies.
It is possible to run xfs_repair in a dry run, check-only mode by using the -n option. No changes will
be made to the file system when this option is specified.
-n
No modify mode. Check-only operation.
-L
Zero metadata log. Use only if log cannot be replayed with mount.
-m maxmem
Limit memory used during run to maxmem MB. 0 can be specified to obtain a rough estimate of the
minimum memory required.
-l logdev
Specify the external log device, if present.
All options for xfs_repair are specified in the xfs_repair(8) manual page.
The following eight basic phases are performed by xfs_repair while running:
1. Inode and inode blockmap (addressing) checks.
2. Inode allocation map checks.
3. Inode size checks.
4. Directory checks.
5. Pathname checks.
6. Link count checks.
7. Freemap checks.
8. Super block checks.
xfs_repair is not interactive. All operations are performed automatically with no input from the user.
If it is desired to create a metadata image prior to repair for diagnostic or testing purposes, the
xfs_metadump(8) and xfs_mdrestore(8) utilities may be used.
12.2.3. Btrfs
NOTE
Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.
The btrfsck tool is used to check and repair btrfs file systems. This tool is still in early development
and may not detect or repair all types of file system corruption.
By default, btrfsck does not make changes to the file system; that is, it runs in check-only mode by
default. If repairs are desired, the --repair option must be specified.
The following three basic phases are performed by btrfsck while running:
1. Extent checks.
2. File system root checks.
3. Root reference count checks.
The btrfs-image(8) utility can be used to create a metadata image prior to repair for diagnostic or
testing purposes.
CHAPTER 13. PARTITIONS
The parted package is installed by default on Red Hat Enterprise Linux 7. To start parted, log in as root
and enter the following command:
# parted /dev/sda
Replace /dev/sda with the device name for the drive to configure.
If you want to remove or resize a partition, the device on which that partition resides must not be in use.
It is possible to create a new partition on a device that is in use, but this is not recommended.
The easiest way to modify disks that are currently in use is:
1. Boot the system in rescue mode if the partitions on the disk are impossible to unmount, for
example in the case of a system disk.
If the drive does not contain any partitions in use, that is there are no system processes that use or lock
the file system from being unmounted, you can unmount the partitions with the umount command and
turn off all the swap space on the hard drive with the swapoff command.
To see commonly used parted commands, see Table 13.1, “parted Commands”.
IMPORTANT
Do not use the parted utility to create file systems. Use the mkfs tool instead.
Command Description
mkpart part-type [fs-type] start-mb end-mb Make a partition without creating a new file system
name minor-num name Name the partition for Mac and PC98 disklabels only
set minor-num flag state Set the flag on a partition; state is either on or off
1. Start parted.
2. View the partition table:
(parted) print
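The output is similar to the following; the model, sizes, and partition layout shown here are only an illustration:
Model: ATA ST3160812AS (scsi)
Disk /dev/sda: 160GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system  Flags
 1      32.3kB  107MB   107MB   primary   ext3         boot
 2      107MB   105GB   105GB   primary   ext3
 3      105GB   107GB   2147MB  primary   linux-swap
 4      107GB   160GB   52.9GB  extended               lba
 5      107GB   133GB   26.2GB  logical   ext3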
Model: ATA ST3160812AS (scsi): explains the disk type, manufacturer, model number, and
interface.
In the partition table, Number is the partition number. For example, the partition with minor
number 1 corresponds to /dev/sda1. The Start and End values are in megabytes. Valid
Types are metadata, free, primary, extended, or logical. The File system is the file system
type. The Flags column lists the flags set for the partition. Available flags are boot, root, swap,
hidden, raid, lvm, or lba.
The File system in the partition table can be any of the following:
ext2
ext3
fat16
fat32
hfs
jfs
linux-swap
ntfs
reiserfs
hp-ufs
sun-ufs
xfs
If a File system of a device shows no value, this means that its file system type is unknown.
NOTE
To select a different device without having to restart parted, use the following command
and replace /dev/sda with the device you want to select:
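The select command does this:
(parted) select /dev/sda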
WARNING
Do not create partitions on a device that is in use.
1. Before creating a partition, boot into rescue mode, or unmount any partitions on the device and
turn off any swap space on the device.
2. Start parted:
# parted /dev/sda
Replace /dev/sda with the device name on which you want to create the partition.
3. View the current partition table to determine if there is enough free space:
(parted) print
If there is not enough free space, you can resize an existing partition. For more information, see
Section 13.5, “Resizing a Partition with fdisk”.
From the partition table, determine the start and end points of the new partition and what partition
type it should be. You can only have four primary partitions, with no extended partition, on a
device. If you need more than four partitions, you can have three primary partitions, one
extended partition, and multiple logical partitions within the extended. For an overview of disk
partitions, see the appendix An Introduction to Disk Partitions in the Red Hat Enterprise Linux 7
Installation Guide.
4. To create a partition, use the mkpart command (its syntax is shown in the example below):
Replace part-type with primary, logical, or extended as per your requirement.
Replace name with partition-name; name is required for GPT partition tables.
Replace fs-type with any one of btrfs, ext2, ext3, ext4, fat16, fat32, hfs, hfs+, linux-swap, ntfs,
reiserfs, or xfs; fs-type is optional.
Replace start end with the size in megabytes as per your requirement.
For example, to create a primary partition with an ext3 file system from 1024 megabytes until
2048 megabytes on a hard drive, type the following command:
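Within parted, mkpart takes the partition type, an optional name and file system type, and the start and end points. A sketch of the command for the example above:
(parted) mkpart primary 1024 2048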
NOTE
If you use the mkpartfs command instead, the file system is created after the
partition is created. However, parted does not support creating an ext3 file
system. Thus, if you wish to create an ext3 file system, use mkpart and create
the file system with the mkfs command as described later.
The changes start taking place as soon as you press Enter, so review the command before
executing it.
5. View the partition table to confirm that the created partition is in the partition table with the
correct partition type, file system type, and size using the following command:
(parted) print
Also remember the minor number of the new partition so that you can label any file systems on
it.
6. Exit parted:
(parted) quit
7. Use the following command after parted is closed to make sure the kernel recognizes the new
partition:
# cat /proc/partitions
The maximum number of partitions parted can create is 128. While the GUID Partition Table (GPT)
specification allows for more partitions by growing the area reserved for the partition table, common
practice used by parted is to limit it to enough area for 128 partitions.
1. The partition does not have a file system. To create the ext4 file system, use:
# mkfs.ext4 /dev/sda6
WARNING
Formatting the partition permanently destroys any data that currently exists
on the partition.
2. Label the file system on the partition. For example, if the file system on the new partition is
/dev/sda6 and you want to label it Work, use:
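For an ext4 file system, the e2label utility can be used:
# e2label /dev/sda6 Work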
By default, the installation program uses the mount point of the partition as the label to make
sure the label is unique. You can use any label you want.
1. As root, edit the /etc/fstab file to include the new partition using the partition's UUID.
Use the command blkid -o list for a complete list of the partition's UUID, or blkid
device for individual device details.
In /etc/fstab:
The first column should contain UUID= followed by the file system's UUID.
The second column should contain the mount point for the new partition.
The third column should be the file system type: for example, ext4 or swap.
The fourth column lists mount options for the file system. The word defaults here means
that the partition is mounted at boot time with default options.
The fifth and sixth field specify backup and check options. Example values for a non-root
partition are 0 2.
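Putting it together, an entry similar to the following could be added; the UUID shown is only a placeholder:
UUID=uuid_value  /work  ext4  defaults  0 2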
2. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
3. Try mounting the file system to verify that the configuration works:
# mount /work
Additional Information
If you need more information about the format of /etc/fstab, see the fstab(5) man page.
WARNING
Do not remove a partition on a device that is in use.
1. Before removing a partition, boot into rescue mode, or unmount any partitions on the device and
turn off any swap space on the device.
2. Start parted:
# parted device
Replace device with the device on which to remove the partition: for example, /dev/sda.
3. View the current partition table to determine the minor number of the partition to remove:
(parted) print
4. Remove the partition with the command rm. For example, to remove the partition with minor
number 3:
(parted) rm 3
The changes start taking place as soon as you press Enter, so review the command before
committing to it.
5. After removing the partition, use the print command to confirm that it is removed from the
partition table:
(parted) print
6. Exit parted:
(parted) quit
7. Examine the content of the /proc/partitions file to make sure the kernel knows the partition
is removed:
# cat /proc/partitions
8. Remove the partition from the /etc/fstab file. Find the line that declares the removed
partition, and remove it from the file.
9. Regenerate mount units so that your system registers the new /etc/fstab configuration:
# systemctl daemon-reload
You can start the fdisk utility and use the t command to set the partition type. The following example
shows how to change the partition type of the first partition to 0x83, the default Linux partition type:
# fdisk /dev/sdc
Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 83
Changed type of partition 'Linux LVM' to 'Linux'.
The parted utility provides some control of partition types by trying to map the partition type to 'flags',
which is not convenient for end users. The parted utility can handle only certain partition types, for
example LVM or RAID. To remove, for example, the lvm flag from the first partition with parted, use:
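A sketch of the command; the device name /dev/sdc is assumed from the fdisk example above:
# parted /dev/sdc set 1 lvm off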
For a list of commonly used partition types and hexadecimal numbers used to represent them, see the
Partition Types table in the Partitions: Turning One Drive Into Many appendix of the Red Hat
Enterprise Linux 7 Installation Guide.
Before resizing a partition, back up the data stored on the file system and test the procedure, as the only
way to change a partition size using fdisk is by deleting and recreating the partition.
IMPORTANT
The partition you are resizing must be the last partition on a particular disk.
The following procedure is provided only for reference. To resize a partition using fdisk:
1. Unmount the device:
# umount /dev/vda
2. Run fdisk on the device:
# fdisk /dev/vda
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
3. Use the p option to determine the line number of the partition to be deleted.
4. Use the d option to delete a partition. If there is more than one partition available, fdisk
prompts you to provide a number of the partition to delete:
5. Use the n option to create a partition and follow the prompts. Allow enough space for any future
resizing. The fdisk default behavior (press Enter) is to use all space on the device. You can
specify the end of the partition by sectors, or specify a human-readable size by using
+<size><suffix>, for example +500M, or +10G.
Red Hat recommends using the human-readable size specification if you do not want to use all
free space, as fdisk aligns the end of the partition with the physical sectors. If you specify the
size by providing an exact number (in sectors), fdisk does not align the end of the partition.
7. Write the changes with the w option when you are sure the changes are correct, as errors can
cause instability with the selected partition.
# e2fsck /dev/vda
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
ext4-1: 11/131072 files (0.0% non-contiguous), 27050/524128 blocks
# mount /dev/vda
CHAPTER 14. CREATING AND MAINTAINING SNAPSHOTS WITH SNAPPER
The file system recommended by Red Hat with Snapper depends on your Red Hat Enterprise Linux
version:
In Red Hat Enterprise Linux 7.4 or earlier versions of Red Hat Enterprise Linux 7, use ext4 with
Snapper. Use the XFS file system on lvm-thin volumes only if you are monitoring the amount of
free space in the pool to prevent out-of-space problems that can lead to a failure.
In Red Hat Enterprise Linux 7.5 or later versions, use XFS with Snapper.
Note that the Btrfs tools and file system are provided as a Technology Preview, which makes them
unsuitable for production systems.
Although it is possible to allow a user or group other than root to use certain Snapper commands,
Red Hat recommends that you do not add elevated permissions to otherwise unprivileged users or
groups. Such a configuration bypasses SELinux and could pose a security risk. Red Hat recommends
that you review these capabilities with your Security Team and consider using the sudo infrastructure
instead.
NOTE
Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.
The volume on which you create a Snapper configuration must be either:
A thinly-provisioned logical volume with a Red Hat supported file system on top of it, or
A Btrfs subvolume.
For LVM2:
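The command takes roughly the following form, where fs_type is the file system on the thin volume and the other names are placeholders:
# snapper -c config_name create-config -f "lvm(fs_type)" /mount-point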
For example, to create a configuration file called lvm_config on an LVM2 subvolume with an
ext4 file system, mounted at /lvm_mount, use:
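A sketch of that command:
# snapper -c lvm_config create-config -f "lvm(ext4)" /lvm_mount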
For Btrfs:
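The command is similar to the following, with config_name and the mount point as placeholders:
# snapper -c config_name create-config -f btrfs /mount-point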
The -f file_system tells snapper what file system to use; if this is omitted snapper will
attempt to detect the file system.
Pre Snapshot
A pre snapshot serves as a point of origin for a post snapshot. The two are closely tied and designed
to track file system modification between the two points. The pre snapshot must be created before
the post snapshot.
Post Snapshot
A post snapshot serves as the end point to the pre snapshot. The coupled pre and post snapshots
define a range for comparison. By default, every new snapper volume is configured to create a
background comparison after a related post snapshot is created successfully.
Single Snapshot
A single snapshot is a standalone snapshot created at a specific moment. These can be used to
track a timeline of modifications and have a general point to return to later.
The -c config_name option creates a snapshot according to the specifications in the named
configuration file. If the configuration file does not yet exist, see Section 14.1, “Creating Initial Snapper
Configuration”.
The create -t option specifies what type of snapshot to create. Accepted entries are pre, post, or
single.
For example, to create a pre snapshot using the lvm_config configuration file, as created in
Section 14.1, “Creating Initial Snapper Configuration”, use:
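A sketch of the command:
# snapper -c lvm_config create -t pre -p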
The -p option prints the number of the created snapshot and is optional.
A post snapshot is the end point of the snapshot and should be created after the parent pre snapshot by
following the instructions in Section 14.2.1.1, “Creating a Pre Snapshot with Snapper”.
For example, to display the list of snapshots created using the configuration file lvm_config,
use the following:
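The command is similar to:
# snapper -c lvm_config list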
The -t post option specifies the creation of the post snapshot type.
For example, to create a post snapshot using the lvm_config configuration file that is linked to
pre snapshot number 1, use:
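A sketch of the command; the --pre-number option links it to the pre snapshot:
# snapper -c lvm_config create -t post --pre-number 1 -p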
The -p option prints the number of the created snapshot and is optional.
3. The pre and post snapshots 1 and 2 are now created and paired. Verify this with the list
command:
You can also wrap a command within a pre and post snapshot, which can be useful when testing. See
Procedure 14.3, “Wrapping a Command in Pre and Post Snapshots”, which is a shortcut for the following
steps:
1. Creating a pre snapshot.
2. Running a command or a list of commands to perform actions with a possible impact on the file
system content.
3. Creating a post snapshot.
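The shortcut itself looks similar to the following, with command-to-run as a placeholder:
# snapper -c lvm_config create --command "command-to-run"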
Use the list command to verify the number of the snapshot if needed.
For more information on the status command, see Section 14.3, “Tracking Changes Between
Snapper Snapshots”.
Note that there is no guarantee that the command in the given example is the only thing the snapshots
capture. Snapper also records anything that is modified by the system, not just what a user modifies.
For example, the following command creates a single snapshot using the lvm_config configuration file.
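A sketch of the command:
# snapper -c lvm_config create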
Although single snapshots are not specifically designed to track changes, you can use the snapper
diff, xadiff, and status commands to compare any two snapshots. For more information on these
commands, see Section 14.3, “Tracking Changes Between Snapper Snapshots” .
When automatic timeline snapshots are enabled, Snapper by default keeps:
10 hourly snapshots, and the final hourly snapshot is saved as a “daily” snapshot.
10 daily snapshots, and the final daily snapshot for a month is saved as a “monthly” snapshot.
10 monthly snapshots, and the final monthly snapshot is saved as a “yearly” snapshot.
10 yearly snapshots.
Note that Snapper keeps by default no more than 50 snapshots in total. However, Snapper keeps by
default all snapshots created less than 1,800 seconds ago.
status
The status command shows a list of files and directories that have been created, modified, or
deleted between two snapshots, that is a comprehensive list of changes between two snapshots. You
can use this command to get an overview of the changes without excessive details.
For more information, see Section 14.3.1, “Comparing Changes with the status Command”.
diff
The diff command shows a diff of modified files and directories between two snapshots as received
from the status command if there is at least one modification detected.
For more information, see Section 14.3.2, “Comparing Changes with the diff Command”.
xadiff
The xadiff command compares how the extended attributes of a file or directory have changed
between two snapshots.
For more information, see Section 14.3.3, “Comparing Changes with the xadiff Command”.
The status command shows a list of files and directories that have been created, modified, or deleted
between two snapshots.
For example, the following command displays the changes made between snapshot 1 and 2, using the
configuration file lvm_config.
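The command is similar to:
# snapper -c lvm_config status 1..2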
Read letters and dots in the first part of the output as columns:
+..... /lvm_mount/file3
||||||
123456
Column 1 indicates any modification of the file (directory entry) type. Possible values are:
Column 1
Output Meaning
+ File created.
- File deleted.
c Content changed.
Column 2 indicates any changes in the file permissions. Possible values are:
Column 2
Output Meaning
. No permissions changed.
p Permissions changed.
Column 3 indicates any changes in the user ownership. Possible values are:
Column 3
Output Meaning
. No user ownership changed.
u User ownership changed.
Column 4 indicates any changes in the group ownership. Possible values are:
Column 4
Output Meaning
. No group ownership changed.
g Group ownership changed.
Column 5 indicates any changes in the extended attributes. Possible values are:
Column 5
Output Meaning
. No extended attributes changed.
x Extended attributes changed.
Column 6 indicates any changes in the access control lists (ACLs). Possible values are:
Column 6
Output Meaning
. No ACLs changed.
a ACLs modified.
The diff command shows the changes of modified files and directories between two snapshots.
Use the list command to determine the number of the snapshot if needed.
For example, to compare the changes made in files between snapshot 1 and snapshot 2 that were
made using the lvm_config configuration file, use:
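The command is similar to the following; its output is a unified diff of the changed files:
# snapper -c lvm_config diff 1..2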
This output shows that file4 had been modified to add "words" into the file.
The xadiff command compares how the extended attributes of a file or directory have changed
between two snapshots:
Use the list command to determine the number of the snapshot if needed.
For example, to show the xadiff output between snapshot number 1 and snapshot number 2 that were
made using the lvm_config configuration file, use:
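The command is similar to:
# snapper -c lvm_config xadiff 1..2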
IMPORTANT
Using the undochange command does not revert the Snapper volume back to its original
state and does not provide data consistency. Any file modification that occurs outside of
the specified range, for example after snapshot 2, will remain unchanged after reverting
back, for example to the state of snapshot 1. For example, if undochange is run to undo
the creation of a user, any files owned by that user can still remain.
Do not use the Snapper undochange command with the root file system, as doing so is
likely to lead to a failure.
The diagram shows the point in time in which snapshot_1 is created, file_a is created, then file_b
deleted. Snapshot_2 is then created, after which file_a is edited and file_c is created. This is now
the current state of the system. The current system has an edited version of file_a, no file_b, and a
newly created file_c.
When the undochange command is called, Snapper generates a list of modified files between the first
listed snapshot and the second. In the diagram, if you use the snapper -c SnapperExample
undochange 1..2 command, Snapper creates a list of modified files (that is, file_a is created;
file_b is deleted) and applies them to the current system. Therefore:
the current system will not have file_a, as it had not yet been created when snapshot_1 was
created.
file_b will exist, copied from snapshot_1 into the current system.
file_c will exist, as its creation was outside the specified time.
Be aware that if file_b and file_c conflict, the system can become corrupted.
You can also use the snapper -c SnapperExample undochange 2..1 command. In this case,
the current system replaces the edited version of file_a with one copied from snapshot_1, which
undoes edits of that file made after snapshot_2 was created.
If needed, the mount command activates the respective LVM Snapper snapshot before mounting. Use the
mount and unmount commands if you are, for example, interested in mounting snapshots and
extracting older versions of several files manually. To revert files manually, copy them from a mounted
snapshot to the current file system. The current file system, snapshot 0, is the live file system created in
Procedure 14.1, “Creating a Snapper Configuration File”. Copy the files to the subtree of the original
/mount-point.
Use the mount and unmount commands for explicit client-side requests. The
/etc/snapper/configs/config_name file contains the ALLOW_USERS= and ALLOW_GROUPS=
variables where you can add users and groups. Then, snapperd allows you to perform mount
operations for the added users and groups.
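To delete a snapshot, the command takes the snapshot number; a sketch, with snapshot_number as a placeholder:
# snapper -c lvm_config delete snapshot_number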
You can use the list command to verify that the snapshot was successfully deleted.
CHAPTER 15. SWAP SPACE
In years past, the recommended amount of swap space increased linearly with the amount of RAM in the
system. However, modern systems often include hundreds of gigabytes of RAM. As a consequence,
recommended swap space is considered a function of system memory workload, not system memory.
Table 15.1, “Recommended System Swap Space” illustrates the recommended size of a swap partition
depending on the amount of RAM in your system and whether you want sufficient memory for your
system to hibernate. The recommended swap partition size is established automatically during
installation. To allow for hibernation, however, you need to edit the swap space in the custom partitioning
stage.
Recommendations in Table 15.1, “Recommended System Swap Space” are especially important on
systems with low memory (1 GB and less). Failure to allocate sufficient swap space on these systems
can cause issues such as instability or even render the installed system unbootable.
Amount of RAM in the system / Recommended swap space / Recommended swap space if allowing for hibernation
2 GB of RAM or less: 2 times the amount of RAM; 3 times the amount of RAM for hibernation
2 GB to 8 GB of RAM: equal to the amount of RAM; 2 times the amount of RAM for hibernation
8 GB to 64 GB of RAM: at least 4 GB; 1.5 times the amount of RAM for hibernation
More than 64 GB of RAM: at least 4 GB; hibernation is not recommended
At the border between each range listed in Table 15.1, “Recommended System Swap Space”, for
example a system with 2 GB, 8 GB, or 64 GB of system RAM, discretion can be exercised with regard to
chosen swap space and hibernation support. If your system resources allow for it, increasing the swap
space may lead to better performance. A swap space of at least 100 GB is recommended for systems
with over 140 logical processors or over 3 TB of RAM.
Note that distributing swap space over multiple storage devices also improves swap space performance,
particularly on systems with fast drives, controllers, and interfaces.
IMPORTANT
File systems and LVM2 volumes assigned as swap space should not be in use when
being modified. Any attempts to modify swap fail if a system process or the kernel is using
swap space. Use the free and cat /proc/swaps commands to verify how much and
where swap is in use.
You should modify swap space while the system is booted in rescue mode; see Booting
Your Computer in Rescue Mode in the Red Hat Enterprise Linux 7 Installation Guide.
When prompted to mount the file system, select Skip.
You have three options: create a new swap partition, create a new swap file, or extend swap on an
existing LVM2 logical volume. It is recommended that you extend an existing logical volume.
After adding additional storage to the swap space's volume group, it is now possible to extend it. To do
so, perform the following procedure (assuming /dev/VolGroup00/LogVol01 is the volume you want
to extend by 2 GB):
# swapoff -v /dev/VolGroup00/LogVol01
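Resize the logical volume by the desired amount; for example, lvresize can grow it by 2 GB:
# lvresize /dev/VolGroup00/LogVol01 -L +2G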
# mkswap /dev/VolGroup00/LogVol01
# swapon -v /dev/VolGroup00/LogVol01
5. To test if the swap logical volume was successfully extended and activated, inspect active swap
space:
$ cat /proc/swaps
$ free -h
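To add a new swap logical volume (assuming /dev/VolGroup00/LogVol02 is the volume you want to add and 2 GB is an example size), first create it:
# lvcreate VolGroup00 -n LogVol02 -L 2G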
# mkswap /dev/VolGroup00/LogVol02
4. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
# swapon -v /dev/VolGroup00/LogVol02
6. To test if the swap logical volume was successfully created and activated, inspect active swap
space:
$ cat /proc/swaps
$ free -h
1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the
number of blocks. For example, the block size of a 64 MB swap file is 65536.
Replace count with the value equal to the desired number of blocks.
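For example, to create a 64 MB swap file, where the count value comes from the calculation above:
# dd if=/dev/zero of=/swapfile bs=1024 count=65536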
# mkswap /swapfile
5. To enable the swap file at boot time, edit /etc/fstab as root to include the following entry:
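An entry similar to the following works:
/swapfile  swap  swap  defaults  0 0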
The next time the system boots, it activates the new swap file.
6. Regenerate mount units so that your system registers the new /etc/fstab configuration:
# systemctl daemon-reload
# swapon /swapfile
8. To test if the new swap file was successfully created and activated, inspect active swap space:
$ cat /proc/swaps
$ free -h
You have three options: remove an entire LVM2 logical volume used for swap, remove a swap file, or
reduce swap space on an existing LVM2 logical volume.
# swapoff -v /dev/VolGroup00/LogVol01
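Reduce the logical volume by the desired amount; for example, to shrink it by 512 MB (the value is illustrative):
# lvreduce /dev/VolGroup00/LogVol01 -L -512M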
# mkswap /dev/VolGroup00/LogVol01
# swapon -v /dev/VolGroup00/LogVol01
5. To test if the swap logical volume was successfully reduced, inspect active swap space:
$ cat /proc/swaps
$ free -h
# swapoff -v /dev/VolGroup00/LogVol02
# lvremove /dev/VolGroup00/LogVol02
4. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
5. To test if the logical volume was successfully removed, inspect active swap space:
$ cat /proc/swaps
$ free -h
1. At a shell prompt, execute the following command to disable the swap file (where /swapfile is
the swap file):
# swapoff -v /swapfile
2. Remove its entry from the /etc/fstab file.
3. Regenerate mount units so that your system registers the new configuration:
# systemctl daemon-reload
4. Remove the actual file:
# rm /swapfile
CHAPTER 16. SYSTEM STORAGE MANAGER (SSM)
This chapter explains how SSM interacts with various back ends and some common use cases.
There are already several back ends registered in SSM. The following sections provide basic information
on them as well as definitions on how they handle pools, volumes, snapshots, and devices.
NOTE
Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has
been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a
future major release of Red Hat Enterprise Linux.
For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4
Release Notes.
Btrfs, a file system with many advanced features, is used as a volume management back end in SSM.
Pools, volumes, and snapshots can be created with the Btrfs back end.
The Btrfs file system itself is the pool. It can be extended by adding more devices or shrunk by removing
devices. SSM creates a Btrfs file system when a Btrfs pool is created. This means that every new Btrfs
pool has one volume of the same name as the pool which cannot be removed without removing the
entire pool. The default Btrfs pool name is btrfs_pool.
The name of the pool is used as the file system label. If there is already an existing Btrfs file system in
the system without a label, the Btrfs pool will generate a name for internal use in the format of
btrfs_device_base_name.
Volumes created after the first volume in a pool are the same as sub-volumes. SSM temporarily mounts
the Btrfs file system if it is unmounted in order to create a sub-volume.
The name of a volume is used as the subvolume path in the Btrfs file system. For example, a subvolume
displays as /dev/lvm_pool/lvol001. Every object in this path must exist in order for the volume to
be created. Volumes can also be referenced with its mount point.
Snapshots can be taken of any Btrfs volume in the system with SSM. Be aware that Btrfs does not
distinguish between subvolumes and snapshots. While this means that SSM cannot recognize the Btrfs
snapshot destination, it will try to recognize special name formats. If the name specified when creating
the snapshot does not match the specific pattern, the snapshot will not be recognized and will instead be
listed as a regular Btrfs volume.
An LVM pool is the same as an LVM volume group: it groups devices, and new logical
volumes can be created out of the LVM pool. The default LVM pool name is lvm_pool.
When a snapshot is created from the LVM volume a new snapshot volume is created which can then
be handled just like any other LVM volume. Unlike Btrfs, LVM is able to distinguish snapshots from
regular volumes so there is no need for a snapshot name to match a particular pattern.
SSM makes the need to create an LVM back end on a physical device transparent to the user.
Only volumes can be created with a crypt back end; pooling is not supported and it does not require
special devices.
The following sections define volumes and snapshots from the crypt point of view.
Crypt volumes are created by dm-crypt and represent the data on the original encrypted device in an
unencrypted form. Crypt volumes do not support RAID or any device concatenation.
Two modes, or extensions, are supported: luks and plain. LUKS is used by default. For more information
on the extensions, see man cryptsetup.
While the crypt back end does not support snapshotting, if the encrypted volume is created on top of an
LVM volume, the volume itself can be snapshotted. The snapshot can then be opened by using
cryptsetup.
There are several back ends that are enabled only if the supporting packages are installed:
The Crypt back end requires the device-mapper and cryptsetup packages.
# ssm list
----------------------------------------------------------
Device Free Used Total Pool Mount point
----------------------------------------------------------
/dev/sda 2.00 GB PARTITIONED
/dev/sda1 47.83 MB /test
/dev/vda 15.00 GB PARTITIONED
/dev/vda1 500.00 MB /boot
/dev/vda2 0.00 KB 14.51 GB 14.51 GB rhel
----------------------------------------------------------
------------------------------------------------
Pool Type Devices Free Used Total
------------------------------------------------
rhel lvm 1 0.00 KB 14.51 GB 14.51 GB
------------------------------------------------
----------------------------------------------------------------------
-----------
Volume Pool Volume size FS FS size Free Type
Mount point
----------------------------------------------------------------------
-----------
/dev/rhel/root rhel 13.53 GB xfs 13.52 GB 9.64 GB linear /
/dev/rhel/swap rhel 1000.00 MB linear
/dev/sda1 47.83 MB xfs 44.50 MB 44.41 MB part
/test
/dev/vda1 500.00 MB xfs 496.67 MB 403.56 MB part
/boot
----------------------------------------------------------------------
-----------
This display can be further narrowed down by using arguments to specify what should be displayed. The
list of available options can be found with the ssm list --help command.
NOTE
Running the devices or dev argument omits some devices. CDRoms and
DM/MD devices, for example, are intentionally hidden as they are listed as
volumes.
Some back ends do not support snapshots and cannot distinguish between a
snapshot and a regular volume. Running the snapshot argument on one of
these back ends causes SSM to attempt to recognize the volume name in order to
identify a snapshot. If the SSM regular expression does not match the snapshot
pattern, then the snapshot is not recognized.
With the exception of the main Btrfs volume (the file system itself), any
unmounted Btrfs volumes are not shown.
The command to create this scenario is ssm create --fs xfs -s 1G /dev/vdb /dev/vdc. The
following options are used:
The --fs option specifies the required file system type. Current supported file system types are:
ext3
ext4
xfs
btrfs
The -s specifies the size of the logical volume. The following suffixes are supported to define
units:
K or k for kilobytes
M or m for megabytes
G or g for gigabytes
T or t for terabytes
P or p for petabytes
E or e for exabytes
The two listed devices, /dev/vdb and /dev/vdc, are the two devices used to create the volume.
There are two other options for the ssm command that may be useful. The first is the -p pool
option. This specifies the pool the volume is to be created on. If it does not yet exist, then SSM
creates it. This was omitted in the given example, which caused SSM to use the default name
lvm_pool. However, to use a specific name to fit in with any existing naming conventions, the -p
option should be used.
The second useful option is the -n name option. This names the newly created logical volume. As
with -p, this is needed in order to use a specific name to fit in with any existing naming conventions.
SSM has now created two physical volumes, a pool, and a logical volume with the ease of only one
command.
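For instance, a hypothetical invocation that names both the pool and the volume explicitly (names
illustrative) would be:
# ssm create --fs xfs -s 1G -p my_pool -n my_volume /dev/vdb /dev/vdc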
To check all devices in the volume lvol001, run the command ssm check
/dev/lvm_pool/lvol001.
For this example, we currently have one logical volume on /dev/vdb that is 900MB called lvol001.
# ssm list
-----------------------------------------------------------------
Device     Free       Used       Total      Pool      Mount point
-----------------------------------------------------------------
/dev/vda                         15.00 GB             PARTITIONED
/dev/vda1                        500.00 MB            /boot
/dev/vda2  0.00 KB    14.51 GB   14.51 GB   rhel
/dev/vdb   120.00 MB  900.00 MB  1.00 GB    lvm_pool
/dev/vdc                         1.00 GB
-----------------------------------------------------------------
---------------------------------------------------------
Pool      Type  Devices  Free       Used       Total
---------------------------------------------------------
lvm_pool  lvm   1        120.00 MB  900.00 MB  1020.00 MB
rhel      lvm   1        0.00 KB    14.51 GB   14.51 GB
---------------------------------------------------------
The logical volume needs to be increased by another 500MB. To do so we will need to add an extra
device to the pool:
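A resize invocation consistent with the listing that follows (see man ssm for the exact syntax) is:
# ssm resize -s +500M /dev/lvm_pool/lvol001 /dev/vdc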
SSM runs a check on the device and then extends the volume by the specified amount. This can be
verified with the ssm list command.
# ssm list
------------------------------------------------------------------
Device     Free       Used        Total      Pool      Mount point
------------------------------------------------------------------
/dev/vda                          15.00 GB              PARTITIONED
/dev/vda1                         500.00 MB             /boot
/dev/vda2  0.00 KB    14.51 GB    14.51 GB   rhel
/dev/vdb   0.00 KB    1020.00 MB  1.00 GB    lvm_pool
/dev/vdc   640.00 MB  380.00 MB   1.00 GB    lvm_pool
------------------------------------------------------------------
------------------------------------------------------
Pool      Type  Devices  Free       Used      Total
------------------------------------------------------
lvm_pool  lvm   2        640.00 MB  1.37 GB   1.99 GB
rhel      lvm   1        0.00 KB    14.51 GB  14.51 GB
------------------------------------------------------
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS   FS size    Free       Type    Mount point
----------------------------------------------------------------------------------------------
/dev/rhel/root         rhel      13.53 GB     xfs  13.52 GB   9.64 GB    linear  /
/dev/rhel/swap         rhel      1000.00 MB                              linear
/dev/lvm_pool/lvol001  lvm_pool  1.37 GB      xfs  1.36 GB    1.36 GB    linear
/dev/vda1                        500.00 MB    xfs  496.67 MB  403.56 MB  part    /boot
----------------------------------------------------------------------------------------------
NOTE
Decreasing a volume's size is only possible for LVM volumes; it is not supported with other volume
types. Decreasing is done by using a - instead of a +. For example, to decrease the size of an
LVM volume by 50M, the command would be:
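(volume name illustrative)
# ssm resize -s -50M /dev/lvm_pool/lvol001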
16.2.6. Snapshot
To take a snapshot of an existing volume, use the ssm snapshot command.
NOTE
This operation fails if the back end that the volume belongs to does not support
snapshotting.
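For example, to snapshot the volume used throughout this chapter:
# ssm snapshot /dev/lvm_pool/lvol001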
To verify this, use the ssm list, and note the extra snapshot section.
# ssm list
----------------------------------------------------------------
Device     Free     Used        Total      Pool      Mount point
----------------------------------------------------------------
/dev/vda                        15.00 GB             PARTITIONED
/dev/vda1                       500.00 MB            /boot
/dev/vda2  0.00 KB  14.51 GB    14.51 GB   rhel
/dev/vdb   0.00 KB  1020.00 MB  1.00 GB    lvm_pool
/dev/vdc                        1.00 GB
----------------------------------------------------------------
--------------------------------------------------------
Pool      Type  Devices  Free     Used        Total
--------------------------------------------------------
lvm_pool  lvm   1        0.00 KB  1020.00 MB  1020.00 MB
rhel      lvm   1        0.00 KB  14.51 GB    14.51 GB
--------------------------------------------------------
----------------------------------------------------------------------------------------------
Volume                 Pool      Volume size  FS   FS size    Free       Type    Mount point
----------------------------------------------------------------------------------------------
/dev/rhel/root         rhel      13.53 GB     xfs  13.52 GB   9.64 GB    linear  /
/dev/rhel/swap         rhel      1000.00 MB                              linear
/dev/lvm_pool/lvol001  lvm_pool  900.00 MB    xfs  896.67 MB  896.54 MB  linear
/dev/vda1                        500.00 MB    xfs  496.67 MB  403.56 MB  part    /boot
----------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------
Snapshot                           Origin   Pool      Volume size  Size     Type
------------------------------------------------------------------------------------
/dev/lvm_pool/snap20150519T130900  lvol001  lvm_pool  120.00 MB    0.00 KB  linear
------------------------------------------------------------------------------------
NOTE
Removing a device that is being used by a pool fails. This can be forced using the -f argument.
Removing a volume that is mounted also fails. Unlike a device, it cannot be forced with the -f
argument.
To remove the lvm_pool and everything within it, use the following command:
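Based on the ssm remove syntax, the command is expected to be:
# ssm remove lvm_pool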
The man ssm page provides good descriptions and examples, as well as details on all of the
commands and options too specific to be documented here.
CHAPTER 17. DISK QUOTAS
Disk quotas can be configured for individual users as well as user groups. This makes it possible to
manage the space allocated for user-specific files (such as email) separately from the space allocated to
the projects a user works on (assuming the projects are given their own groups).
In addition, quotas can be set not just to control the number of disk blocks consumed but to control the
number of inodes (data structures that contain information about files in UNIX file systems). Because
inodes are used to contain file-related information, this allows control over the number of files that can be
created.
NOTE
This chapter applies to all file systems; however, some file systems have their own quota
management tools. See the corresponding description for the applicable file systems.
For XFS file systems, see Section 3.3, “XFS Quota Management”.
To configure disk quotas:
1. Log in as root.
2. Add either the usrquota or grpquota or both options to the file systems that require quotas in
the /etc/fstab file. For example, to edit the file with the vim text editor, type the following:
# vim /etc/fstab
3. Remount the file systems in question.
4. Create the quota database files and generate the disk usage table.
These steps are described in more detail in the sections that follow.
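A representative /etc/fstab entry with both quota types enabled on /home (device name illustrative)
might look like this:
/dev/VolGroup00/LogVol02  /home  ext4  defaults,usrquota,grpquota  1 2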
In this example, the /home file system has both user and group quotas enabled.
NOTE
The following examples assume that a separate /home partition was created during the
installation of Red Hat Enterprise Linux. The root (/) partition can be used for setting
quota policies in the /etc/fstab file.
Run the umount command followed by the mount command to remount the file system. See the
man page for both umount and mount for the specific syntax for mounting and unmounting
various file system types.
Run the mount -o remount file-system command (where file-system is the name of
the file system) to remount the file system. For example, to remount the /home file system, run
the mount -o remount /home command.
If the file system is currently in use, the easiest method for remounting the file system is to reboot the
system.
The quotacheck command examines quota-enabled file systems and builds a table of the current disk
usage per file system. The table is then used to update the operating system's copy of disk usage. In
addition, the file system's disk quota files are updated.
NOTE
The quotacheck command has no effect on XFS as the table of disk usage is completed
automatically at mount time. See the man page xfs_quota(8) for more information.
1. Create the quota files on the file system using the following command:
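Given the option descriptions below, the command is expected to take the following form (mount point
illustrative):
# quotacheck -cug /home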
2. Generate the table of current disk usage per file system using the following command:
# quotacheck -avug
c
Specifies that the quota files should be created for each file system with quotas enabled.
u
Checks for user quotas.
g
Checks for group quotas. If only -g is specified, only the group quota file is created.
If neither the -u nor the -g option is specified, only the user quota file is created.
The following options are used to generate the table of current disk usage:
a
Check all quota-enabled, locally-mounted file systems
v
Display verbose status information as the quota check proceeds
u
Check user disk quota information
g
Check group disk quota information
After quotacheck has finished running, the quota files corresponding to the enabled quotas (either user
or group or both) are populated with data for each quota-enabled locally-mounted file system such as
/home.
Prerequisite
1. To assign a quota to a user, as root, use the following command:
# edquota username
Replace username with the user to which you want to assign the quotas.
2. To verify that the quota for the user has been set, use the following command:
# quota username
NOTE
The text editor defined by the EDITOR environment variable is used by edquota. To
change the editor, set the EDITOR environment variable in your ~/.bash_profile file
to the full path of the editor of your choice.
The first column is the name of the file system that has a quota enabled for it. The second column shows
how many blocks the user is currently using. The next two columns are used to set soft and hard block
limits for the user on the file system. The inodes column shows how many inodes the user is currently
using. The last two columns are used to set the soft and hard inode limits for the user on the file system.
The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once
this limit is reached, no further disk space can be used.
The soft block limit defines the maximum amount of disk space that can be used. However, unlike the
hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace
period. The grace period can be expressed in seconds, minutes, hours, days, weeks, or months.
If any of the values are set to 0, that limit is not set. In the text editor, change the desired limits.
For example:
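An illustrative edquota session after editing the limits (device name and numbers are examples only):
Disk quotas for user testuser (uid 501):
  Filesystem                blocks    soft    hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440436  500000  550000   37418     0     0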
To verify that the quota for the user has been set, use the command:
# quota testuser
Disk quotas for user testuser (uid 501):
   Filesystem  blocks  quota  limit  grace  files  quota  limit  grace
     /dev/sdb   1000*   1000   1000             0      0      0
Prerequisite
1. To set a group quota, as root, use the following command:
# edquota -g groupname
2. To verify that the group quota is set, use the following command:
# quota -g groupname
For example, to set a group quota for the devel group, use the command:
# edquota -g devel
This command displays the existing quota for the group in the text editor:
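An illustrative display (device name, gid, and numbers are examples only):
Disk quotas for group devel (gid 505):
  Filesystem                blocks  soft  hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440400     0     0   37418     0     0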
To verify that the group quota has been set, use the command:
# quota -g devel
To configure the grace period, as root, use the following command:
# edquota -t
This command works on quotas for inodes or blocks, for either users or groups.
IMPORTANT
While other edquota commands operate on quotas for a particular user or group, the -t
option operates on every file system with quotas enabled.
If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has
a few choices to make depending on what type of users they are and how much disk space impacts their
work. The administrator can either help the user determine how to use less disk space or increase the
user's disk quota.
To disable user and group quotas for all file systems, use the following command:
# quotaoff -vaug
If neither the -u nor the -g option is specified, only the user quotas are disabled. If only -g is specified,
only group quotas are disabled. The -v switch causes verbose status information to display as the
command executes.
To enable user and group quotas again, use the following command:
# quotaon
To enable user and group quotas for all file systems, use the following command:
# quotaon -vaug
If neither the -u nor the -g option is specified, only the user quotas are enabled. If only -g is specified,
only group quotas are enabled.
To enable quotas for a specific file system, such as /home, use the following command:
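A typical invocation would be:
# quotaon -vug /home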
NOTE
The quotaon command is not always needed for XFS because it is performed
automatically at mount time. Refer to the man page quotaon(8) for more information.
To view the disk usage report for all (option -a) quota-enabled file systems, use the command:
# repquota -a
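An illustrative report (device, users, and numbers are examples only) resembles the following:
*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02
Block grace time: 7days; Inode grace time: 7days
                        Block limits                 File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      36       0       0              4     0     0
testuser  --  440400  500000  550000          37418     0     0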
While the report is easy to read, a few points should be explained. The -- displayed after each user is a
quick way to determine whether the block or inode limits have been exceeded. If either soft limit is
exceeded, a + appears in place of the corresponding -; the first - represents the block limit, and the
second represents the inode limit.
The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time
specification equal to the amount of time remaining on the grace period. If the grace period has expired,
none appears in its place.
When a file system fails to unmount cleanly (for example, after a system crash), it is necessary to run
quotacheck:
# quotacheck
However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe
methods for periodically running quotacheck include:
NOTE
This method works best for (busy) multiuser systems which are periodically rebooted.
Save a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory or schedule one
using the following command:
# crontab -e
The scheduled script or crontab entry should run the touch /forcequotacheck command. This creates an
empty forcequotacheck file in the root directory, which the system init script looks for at boot time.
If it is found, the init script runs quotacheck. Afterward, the init script removes the
/forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that
quotacheck is run during the next reboot.
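A minimal sketch of such a script, placed for instance in /etc/cron.weekly/ (file name illustrative):
#!/bin/sh
# Request a quotacheck run at the next reboot by creating the flag file
# that the init script looks for.
touch /forcequotacheck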
For more information on disk quotas, see the man pages of the following commands:
quotacheck
edquota
repquota
quota
quotaon
quotaoff
CHAPTER 18. REDUNDANT ARRAY OF INDEPENDENT DISKS (RAID)
RAID allows information to be spread across several disks. RAID uses techniques such as disk striping
(RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve
redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk
crashes.
RAID distributes data across each drive in the array by breaking it down into consistently-sized chunks
(commonly 256K or 512k, although other values are acceptable). Each chunk is then written to a hard
drive in the RAID array according to the RAID level employed. When the data is read, the process is
reversed, giving the illusion that the multiple drives in the array are actually one large drive.
System administrators and others who manage large amounts of data would benefit from using RAID
technology. Primary reasons to deploy RAID include:
Enhances speed
Increases storage capacity using a single virtual disk
Minimizes data loss from disk failure
Firmware RAID
Firmware RAID, also known as ATARAID, is a type of software RAID where the RAID sets can be
configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the
BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats
to mark the RAID set members. The Intel Matrix RAID is a good example of a firmware RAID system.
Hardware RAID
The hardware-based array manages the RAID subsystem independently from the host. It presents a
single disk per RAID array to the host.
A Hardware RAID device may be internal or external to the system, with internal devices commonly
consisting of a specialized controller card that handles the RAID tasks transparently to the operating
system and with external devices commonly connecting to the system via SCSI, Fibre Channel, iSCSI,
InfiniBand, or other high speed network interconnect and presenting logical volumes to the system.
RAID controller cards function like a SCSI controller to the operating system, and handle all the actual
drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI
controller) and then adds them to the RAID controller's configuration. The operating system will not be
able to tell the difference.
Software RAID
Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the
cheapest possible solution, as expensive disk controller cards or hot-swap chassis [2] are not required.
Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs,
Software RAID also generally outperforms Hardware RAID.
The Linux kernel contains a multi-disk (MD) driver that allows the RAID solution to be completely
hardware independent. The performance of a software-based array depends on the server CPU
performance and load.
The Linux software RAID stack provides the following features:
Multithreaded design
Automatic CPU detection to take advantage of certain CPU features such as streaming SIMD
support
Regular consistency checks of RAID data to ensure the health of the array
Proactive monitoring of arrays with email alerts sent to a designated email address on important
events
Write-intent bitmaps which drastically increase the speed of resync events by allowing the kernel
to know precisely which portions of a disk need to be resynced instead of having to resync the
entire array
Resync checkpointing so that if you reboot your computer during a resync, at startup the resync
will pick up where it left off and not start all over again
The ability to change parameters of the array after installation. For example, you can grow a 4-
disk RAID5 array to a 5-disk RAID5 array when you have a new disk to add. This grow operation
is done live and does not require you to reinstall on the new array.
Level 0
RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This
means the data being written to the array is broken down into strips and written across the member
disks of the array, allowing high I/O performance at low inherent cost but providing no redundancy.
Many RAID level 0 implementations will only stripe the data across the member devices up to the size
of the smallest device in the array. This means that if you have multiple devices with slightly different
sizes, each device will get treated as though it is the same size as the smallest drive. Therefore, the
common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a
Hardware RAID or the capacity of the smallest member partition in a Software RAID multiplied by the
number of disks or partitions in the array.
Level 1
RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides
redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on
each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1
operates with two or more disks, and provides very good data reliability and improves performance
for read-intensive applications but at a relatively high cost. [3]
The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in
a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the
highest possible among all RAID types, with the array being able to operate with only a single disk
present.
Level 4
Level 4 uses parity [4] concentrated on a single disk drive to protect data. Because the dedicated
parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is
seldom used without accompanying technologies such as write-back caching, or in specific
circumstances where the system administrator is intentionally designing the software RAID device
with this bottleneck in mind (such as an array that will have little to no write transactions once the
array is populated with data). RAID level 4 is so rarely used that it is not available as an option in
Anaconda. However, it could be created manually by the user if truly needed.
The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member
partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array will
always be asymmetrical, meaning reads will outperform writes. This is because writes consume extra
CPU and main memory bandwidth when generating parity, and then also consume extra bus
bandwidth when writing the actual data to disks because you are writing not only the data, but also the
parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a
result, reads generate less traffic to the drives and across the busses of the computer for the same
amount of data transfer under normal operating conditions.
Level 5
This is the most common type of RAID. By distributing parity across all of an array's member disk
drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance
bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is
usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have
a sufficiently large number of member devices in a software RAID5 array such that the combined
aggregate data transfer speed across all devices is high enough, then this bottleneck can start to
come into play.
As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes.
The storage capacity of RAID level 5 is calculated the same way as with level 4.
Level 6
This is a common level of RAID when data redundancy and preservation, and not performance, are
the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a
complex parity scheme to be able to recover from the loss of any two drives in the array. This
complex parity scheme creates a significantly higher CPU burden on software RAID devices and also
imposes an increased burden during write transactions. As such, level 6 is considerably more
asymmetrical in performance than levels 4 and 5.
The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you
must subtract 2 devices (instead of 1) from the device count for the extra parity storage space.
Level 10
This RAID level attempts to combine the performance advantages of level 0 with the redundancy of
level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices.
With level 10, it is possible to create a 3-drive array configured to store only 2 copies of each piece of
data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead
of only equal to the smallest device (like it would be with a 3-device, level 1 array).
The number of options available when creating level 10 arrays as well as the complexity of selecting
the right options for a specific use case make it impractical to create during installation. It is possible
to create one manually using the command line mdadm tool. For more information on the options and
their respective performance trade-offs, see man md.
Linear RAID
Linear RAID is a grouping of drives to create a larger virtual drive. In linear RAID, the chunks are
allocated sequentially from one member drive, going to the next drive only when the first is completely
filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split
between member drives. Linear RAID also offers no redundancy and decreases reliability; if any one
member drive fails, the entire array cannot be used. The capacity is the total of all member disks.
mdraid
The mdraid subsystem was designed as a software RAID solution for Linux; it is also the preferred
solution for software RAID under Linux. This subsystem uses its own metadata format, generally
referred to as native mdraid metadata.
mdraid also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 7
uses mdraid with external metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid sets are
configured and controlled through the mdadm utility.
dmraid
Device-mapper RAID or dmraid refers to device-mapper kernel code that offers the mechanism to piece
disks together into a RAID set. This same kernel code does not provide any RAID configuration
mechanism.
dmraid is configured entirely in user-space, making it easy to support various on-disk metadata formats.
As such, dmraid is used on a wide variety of firmware RAID implementations. dmraid also supports
Intel firmware RAID, although Red Hat Enterprise Linux 7 uses mdraid to access Intel firmware RAID
sets.
The Anaconda installer automatically detects any hardware and firmware RAID sets on a system,
making them available for installation. Anaconda also supports software RAID using mdraid, and can
recognize existing mdraid sets.
Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow
partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, create
a partition on it spanning the entire disk, and use that partition as the RAID set member.
When the root file system uses a RAID set, Anaconda adds special kernel command-line options to the
bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root
file system.
For instructions on configuring RAID during installation, see the Red Hat Enterprise Linux 7 Installation
Guide.
1. Copy the contents of the PowerPC Reference Platform (PReP) boot partition from /dev/sda1
to /dev/sdb1:
# dd if=/dev/sda1 of=/dev/sdb1
2. Update the PReP and boot flags on the first partition of both disks:
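Setting these flags with parted typically looks like the following:
# parted /dev/sda set 1 prep on
# parted /dev/sda set 1 boot on
# parted /dev/sdb set 1 prep on
# parted /dev/sdb set 1 boot on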
NOTE
Some hardware RAID controllers allow you to configure RAID sets on-the-fly or even define completely
new sets after adding extra disks. This requires the use of driver-specific utilities, as there is no standard
API for this. For more information, see your hardware RAID controller's driver documentation.
mdadm
The mdadm command-line tool is used to manage software RAID in Linux, i.e. mdraid. For information
on the different mdadm modes and options, see man mdadm. The man page also contains useful
examples for common operations like creating, monitoring, and assembling software RAID arrays.
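As a quick illustration (device names and RAID level are examples only), a two-disk RAID1 array could
be created with:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1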
dmraid
As the name suggests, dmraid is used to manage device-mapper RAID sets. The dmraid tool finds
ATARAID devices using multiple metadata format handlers, each supporting various formats. For a
complete list of supported formats, run dmraid -l.
As mentioned earlier in Section 18.3, “Linux RAID Subsystems”, the dmraid tool cannot configure RAID
sets after creation. For more information about using dmraid, see man dmraid.
2. During the initial boot up, select Rescue Mode instead of Install or Upgrade. When the system
fully boots into Rescue mode, the user will be presented with a command line terminal.
3. From this terminal, use parted to create RAID partitions on the target hard drives. Then, use
mdadm to manually create raid arrays from those partitions using any and all settings and options
available. For more information on how to do these, see Chapter 13, Partitions, man parted,
and man mdadm.
4. Once the arrays are created, you can optionally create file systems on the arrays as well.
5. Reboot the computer and this time select Install or Upgrade to install as normal. As Anaconda
searches the disks in the system, it will find the pre-existing RAID devices.
6. When asked about how to use the disks in the system, select Custom Layout and click Next. In
the device listing, the pre-existing MD RAID devices will be listed.
7. Select a RAID device, click Edit and configure its mount point and (optionally) the type of file
system it should use (if you did not create one earlier) then click Done. Anaconda will perform
the install to this pre-existing RAID device, preserving the custom options you selected when you
created it in Rescue Mode.
NOTE
The limited Rescue Mode of the installer does not include man pages. Both the man
mdadm and man md pages contain useful information for creating custom RAID arrays, and may
be needed throughout the workaround. As such, it can be helpful to either have access to
a machine with these man pages present, or to print them out prior to booting into Rescue
Mode and creating your custom arrays.
[2] A hot-swap chassis allows you to remove a hard drive without having to power-down your system.
[3] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array. This
provides data reliability, but in a much less space-efficient manner than parity-based RAID levels such as level 5.
However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume
considerably more CPU power in order to generate the parity while RAID level 1 simply writes the same data more
than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the
parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are
consistently taxed with operations other than RAID activities.
[4] Parity information is calculated based on the contents of the rest of the member disks in the array. This
information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then
be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has
been replaced.
CHAPTER 19. USING THE MOUNT COMMAND
$ mount
This command displays the list of known mount points. Each line provides important information about
the device name, the file system type, the directory in which it is mounted, and the relevant mount
options in the following form:
device on directory type type (options)
The findmnt utility, which allows users to list mounted file systems in a tree-like form, is also available
from Red Hat Enterprise Linux 6.1. To display all currently attached file systems, run the findmnt
command with no additional arguments:
$ findmnt
To display only the devices with a certain file system type, supply the -t option:
$ mount -t type
Similarly, to display only the devices with a certain file system using the findmnt command:
$ findmnt -t type
For a list of common file system types, see Table 19.1, “Common File System Types”. For an example
usage, see Example 19.1, “Listing Currently Mounted ext4 File Systems”.
Usually, both / and /boot partitions are formatted to use ext4. To display only the mount points that
use this file system, use the following command:
$ mount -t ext4
/dev/sda2 on / type ext4 (rw)
/dev/sda1 on /boot type ext4 (rw)
$ findmnt -t ext4
TARGET SOURCE FSTYPE OPTIONS
/      /dev/sda2  ext4  rw,relatime,seclabel,barrier=1,data=ordered
/boot  /dev/sda1  ext4  rw,relatime,seclabel,barrier=1,data=ordered
Note that while a file system is mounted, the original content of the directory is not accessible.
IMPORTANT
Linux does not prevent a user from mounting a file system to a directory with a file system
already attached to it. To determine whether a particular directory serves as a mount
point, run the findmnt utility with the directory as its argument and verify the exit code:
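For example (directory illustrative); a non-zero exit code indicates that nothing is mounted there:
$ findmnt /mnt
$ echo $?
1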
When you run the mount command without all required information, that is without the device name, the
target directory, or the file system type, the mount reads the contents of the /etc/fstab file to check if
the given file system is listed. The /etc/fstab file contains a list of device names and the directories in
which the selected file systems are set to be mounted as well as the file system type and mount options.
Therefore, when mounting a file system that is specified in /etc/fstab, you can choose one of the
following options:
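That is, you can supply either the mount point or the device alone:
$ mount [option]... directory
$ mount [option]... device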
Note that permissions are required to mount the file systems unless the command is run as root (see
Section 19.2.2, “Specifying the Mount Options”).
NOTE
To determine the UUID and—if the device uses it—the label of a particular device, use the
blkid command in the following form:
blkid device
# blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="34795a28-ca6d-4fd8-a347-
73671d0c19cb" TYPE="ext3"
Table 19.1, “Common File System Types” provides a list of common file system types that can be used
with the mount command. For a complete list of all available file system types, see the section called
“Manual Page Documentation”.
Type Description
iso9660 The ISO 9660 file system. It is commonly used by optical media, typically CDs.
nfs The NFS file system. It is commonly used to access files over the network.
nfs4 The NFSv4 file system. It is commonly used to access files over the network.
ntfs The NTFS file system. It is commonly used on machines that are running the Windows
operating system.
udf The UDF file system. It is commonly used by optical media, typically DVDs.
vfat The FAT file system. It is commonly used on machines that are running the Windows
operating system, and on certain digital media such as USB flash drives or floppy disks.
See Example 19.2, “Mounting a USB Flash Drive” for an example usage.
Older USB flash drives often use the FAT file system. Assuming that such a drive uses the /dev/sdc1
device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the
following at a shell prompt as root:
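A typical invocation, assuming the vfat type, would be:
# mount -t vfat /dev/sdc1 /media/flashdisk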
When supplying multiple options, do not insert a space after a comma, or mount will incorrectly
interpret the values following the spaces as additional parameters.
Table 19.2, “Common Mount Options” provides a list of common mount options. For a complete list of all
available options, consult the relevant manual page as referred to in the section called “Manual Page
Documentation”.
Option Description
auto Allows the file system to be mounted automatically using the mount -a command.
exec Allows the execution of binary files on the particular file system.
noauto Default behavior disallows the automatic mount of the file system using the mount -a
command.
noexec Disallows the execution of binary files on the particular file system.
nouser Disallows an ordinary user (that is, other than root) to mount and unmount the file
system.
user Allows an ordinary user (that is, other than root) to mount and unmount the file
system.
An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that
the ISO image of the Fedora 14 installation disc is present in the current working directory and that
the /media/cdrom/ directory exists, mount the image to this directory by running the following
command:
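For example (image file name illustrative):
# mount -o loop Fedora-14.iso /media/cdrom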
Although a bind mount (mount --bind) allows a user to access the file system from both places, it does
not apply to the file systems that are mounted within the original directory. To include these mounts as
well, a recursive bind mount is used, as shown below:
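The two forms (directory names are placeholders):
# mount --bind old_directory new_directory
# mount --rbind old_directory new_directory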
Additionally, to provide as much flexibility as possible, Red Hat Enterprise Linux 7 implements the
functionality known as shared subtrees. This feature allows the use of the following four mount types:
Shared Mount
A shared mount allows the creation of an exact replica of a given mount point. When a mount point is
marked as a shared mount, any mount within the original mount point is reflected in it, and vice
versa. To change the type of a mount point to a shared mount, type the following at a shell prompt:
Alternatively, to change the mount type for the selected mount point and all mount points under it:
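These two operations take the following form (mount_point is a placeholder):
# mount --make-shared mount_point
# mount --make-rshared mount_point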
See Example 19.4, “Creating a Shared Mount Point” for an example usage.
There are two places where other file systems are commonly mounted: the /media/ directory for
removable media, and the /mnt/ directory for temporarily mounted file systems. By using a
shared mount, you can make these two directories share the same content. To do so, as root,
mark the /media/ directory as shared:
It is now possible to verify that a mount within /media/ also appears in /mnt/. For example, if
the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the
following commands:
Similarly, it is possible to verify that any file system mounted in the /mnt/ directory is reflected in
/media/. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, type:
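A sketch of the full sequence for this example, including the bind mount that duplicates /media/ into
/mnt/ (which the example relies on; device names as described):
# mount --make-shared /media
# mount --bind /media /mnt
# mount /dev/cdrom /media/cdrom
# ls /media/cdrom /mnt/cdrom
# mount /dev/sdc1 /mnt/flashdisk
# ls /media/flashdisk /mnt/flashdisk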
Slave Mount
A slave mount allows the creation of a limited duplicate of a given mount point. When a mount point
is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount
within a slave mount is reflected in its original. To change the type of a mount point to a slave mount,
type the following at a shell prompt:
Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it by typing:
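These two operations take the following form (mount_point is a placeholder):
# mount --make-slave mount_point
# mount --make-rslave mount_point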
See Example 19.5, “Creating a Slave Mount Point” for an example usage.
This example shows how to get the content of the /media/ directory to appear in /mnt/ as well,
but without any mounts in the /mnt/ directory to be reflected in /media/. As root, first mark the
/media/ directory as shared:
Now verify that a mount within /media/ also appears in /mnt/. For example, if the CD-ROM
drive contains non-empty media and the /media/cdrom/ directory exists, run the following
commands:
Also verify that file systems mounted in the /mnt/ directory are not reflected in /media/. For
instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the
/mnt/flashdisk/ directory is present, type:
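A sketch of this example's sequence (device names as described); the final ls of /media/flashdisk shows
nothing, because a slave mount does not propagate back to its original:
# mount --make-shared /media
# mount --bind /media /mnt
# mount --make-slave /mnt
# mount /dev/cdrom /media/cdrom
# ls /media/cdrom /mnt/cdrom
# mount /dev/sdc1 /mnt/flashdisk
# ls /media/flashdisk /mnt/flashdisk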
Private Mount
A private mount is the default type of mount, and unlike a shared or slave mount, it does not receive
or forward any propagation events. To explicitly mark a mount point as a private mount, type the
following at a shell prompt:
Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it:
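These two operations take the following form (mount_point is a placeholder):
# mount --make-private mount_point
# mount --make-rprivate mount_point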
See Example 19.6, “Creating a Private Mount Point” for an example usage.
Taking into account the scenario in Example 19.4, “Creating a Shared Mount Point”, assume that
a shared mount point has been previously created by using the following commands as root:
It is now possible to verify that none of the mounts within /media/ appears in /mnt/. For
example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory
exists, run the following commands:
It is also possible to verify that file systems mounted in the /mnt/ directory are not reflected in
/media/. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is
plugged in and the /mnt/flashdisk/ directory is present, type:
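A sketch of this example's sequence (device names as described); both ls commands show nothing,
because a private mount neither receives nor forwards propagation events:
# mount --make-shared /media
# mount --bind /media /mnt
# mount --make-private /mnt
# mount /dev/cdrom /media/cdrom
# ls /mnt/cdrom
# mount /dev/sdc1 /mnt/flashdisk
# ls /media/flashdisk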
Unbindable Mount
In order to prevent a given mount point from being duplicated whatsoever, an unbindable mount is
used. To change the type of a mount point to an unbindable mount, type the following at a shell
prompt:
Alternatively, it is possible to change the mount type for the selected mount point and all mount points
under it:
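These two operations take the following form (mount_point is a placeholder):
# mount --make-unbindable mount_point
# mount --make-runbindable mount_point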
See Example 19.7, “Creating an Unbindable Mount Point” for an example usage.
This way, any subsequent attempt to make a duplicate of this mount fails with an error:
See Example 19.8, “Moving an Existing NFS Mount Point” for an example usage.
An NFS storage contains user directories and is already mounted in /mnt/userdirs/. As root,
move this mount point to /home by using the following command:
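The relocation uses the --move option of mount:
# mount --move /mnt/userdirs /home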
To verify the mount point has been moved, list the content of both directories:
# ls /mnt/userdirs
# ls /home
jill joe
Sometimes, you need to mount the root file system with read-only permissions. Example use cases
include enhancing security or ensuring data integrity after an unexpected system power-off.
To configure the root file system to mount with read-only permissions on boot, add the ro option to the
GRUB_CMDLINE_LINUX directive in the /etc/default/grub file:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ro"
Then regenerate the GRUB2 configuration file:
# grub2-mkconfig -o /boot/grub2/grub.cfg
If you need to add files and directories to be mounted with write permissions in the tmpfs file
system, create a text file in the /etc/rwtab.d/ directory and put the configuration there. For
example, to mount /etc/example/file with write permissions, add this line to the
/etc/rwtab.d/example file:
files /etc/example/file
IMPORTANT
Changes made to files and directories in tmpfs do not persist across boots.
See Section 19.2.5.3, “Files and Directories That Retain Write Permissions” for more information
on this step.
If root (/) was mounted with read-only permissions on system boot, you can remount it with write
permissions:
# mount -o remount,rw /
This can be particularly useful when / is incorrectly mounted with read-only permissions.
# mount -o remount,ro /
NOTE
This command mounts the whole / with read-only permissions. A better approach is to
retain write permissions for certain files and directories by copying them into RAM, as
described in Section 19.2.5.1, “Configuring root to Mount with Read-only Permissions on
Boot”.
For the system to function properly, some files and directories need to retain write permissions. With root
in read-only mode, they are mounted in RAM in the tmpfs temporary file system. The default set of
such files and directories is read from the /etc/rwtab file, which contains:
dirs /var/cache/man
dirs /var/gdm
[output truncated]
empty /tmp
empty /var/cache/foomatic
[output truncated]
files /etc/adjtime
files /etc/ntp.conf
[output truncated]
Entries in this file take one of the following forms:
dirs path: A directory tree is copied to tmpfs, empty. Example: dirs /var/run
empty path: An empty path is copied to tmpfs. Example: empty /tmp
files path: A file or a directory tree is copied to tmpfs intact. Example: files /etc/resolv.conf
To detach a previously mounted file system, use either of the following variants of the umount command:
$ umount directory
$ umount device
Note that unless this is performed while logged in as root, the correct permissions must be available to
unmount the file system. For more information, see Section 19.2.2, “Specifying the Mount Options”. See
Example 19.9, “Unmounting a CD” for an example usage.
IMPORTANT
When a file system is in use (for example, when a process is reading a file on this file
system, or when it is used by the kernel), running the umount command fails with an
error. To determine which processes are accessing the file system, use the fuser
command in the following form:
$ fuser -m directory
For example, to list the processes that are accessing a file system mounted to the
/media/cdrom/ directory:
$ fuser -m /media/cdrom
/media/cdrom: 1793 2013 2022 2435 10532c 10672c
To unmount a CD that was previously mounted to the /media/cdrom/ directory, use the following
command:
$ umount /media/cdrom
man 8 umount: The manual page for the umount command, providing full documentation on its
usage.
man 8 findmnt: The manual page for the findmnt command, providing full documentation on its
usage.
man 5 fstab: The manual page providing a thorough description of the /etc/fstab file
format.
Useful Websites
Shared subtrees — An LWN article covering the concept of shared subtrees.
CHAPTER 20. THE VOLUME_KEY FUNCTION
The volume_key command line tool extracts volume encryption keys and passphrases and stores them
in separate, encrypted packets so that access to the data can be restored later. This is useful when the
primary user forgets their keys and passwords, after an employee leaves abruptly, or in order to extract
data after a hardware or software failure corrupts the header of the encrypted volume. In a corporate
setting, the IT help desk can use volume_key to back up the encryption keys before handing over the
computer to the end user.
NOTE
volume_key is not included in a standard install of Red Hat Enterprise Linux 7 server.
For information on installing it, refer to
http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases.
The operands and mode of operation for volume_key are determined by specifying one of the following
options:
--save
This command expects the operand volume [packet]. If a packet is provided then volume_key will
extract the keys and passphrases from it. If packet is not provided, then volume_key will extract the
keys and passphrases from the volume, prompting the user where necessary. These keys and
passphrases will then be stored in one or more output packets.
--restore
This command expects the operands volume packet. It then opens the volume and uses the keys and
passphrases in the packet to make the volume accessible again, prompting the user where
necessary, such as allowing the user to enter a new passphrase, for example.
--setup-volume
This command expects the operands volume packet name. It then opens the volume and uses the
keys and passphrases in the packet to set up the volume for use of the decrypted data as name.
Name is the name of a dm-crypt volume. This operation makes the decrypted volume available as
/dev/mapper/name.
This operation does not permanently alter the volume by adding a new passphrase, for example. The
user can access and modify the decrypted volume, modifying volume in the process.
The --reencrypt, --secrets, and --dump commands perform similar functions with varying output methods. They each require the
operand packet, and each opens the packet, decrypting it where necessary. --reencrypt then
stores the information in one or more new output packets. --secrets outputs the keys and
passphrases contained in the packet. --dump outputs the content of the packet, though the keys and
passphrases are not output by default. This can be changed by appending --with-secrets to the
command. It is also possible to only dump the unencrypted parts of the packet, if any, by using the --
unencrypted command. This does not require any passphrase or private key access.
--output-format format
This command uses the specified format for all output packets. Currently, format can be one of the
following:
asymmetric: uses CMS to encrypt the whole packet, and requires a certificate
passphrase: uses GPG to encrypt the whole packet, and requires a passphrase
--create-random-passphrase packet
This command generates a random alphanumeric passphrase, adds it to the volume (without
affecting other passphrases), and then stores this random passphrase into the packet.
NOTE
For all examples in this file, /path/to/volume is a LUKS device, not the plaintext
device contained within. blkid -s type /path/to/volume should report
type="crypto_LUKS".
1. Run:
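Based on the --save description earlier in this chapter, the command is expected to be along these lines:
# volume_key --save /path/to/volume -o escrow-packet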
A prompt will then appear requiring an escrow packet passphrase to protect the key.
2. Save the generated escrow-packet file, ensuring that the passphrase is not forgotten.
If the volume passphrase is forgotten, use the saved escrow packet to restore access to the data.
1. Boot the system in an environment where volume_key can be run and the escrow packet is
available (a rescue mode, for example).
2. Run:
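A restore invocation consistent with the --restore description above would be, for example:
# volume_key --restore /path/to/volume escrow-packet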
A prompt will appear for the escrow packet passphrase that was used when creating the escrow
packet, and for the new passphrase for the volume.
To free up the passphrase slot in the LUKS header of the encrypted volume, remove the old, forgotten
passphrase by using the command cryptsetup luksKillSlot.
This section will cover the procedures required for preparation before saving encryption keys, how to
save encryption keys, restoring access to a volume, and setting up emergency passphrases.
1. Create an X509 certificate/private key pair.
2. Designate users who are trusted not to compromise the private key. These users will be
able to decrypt the escrow packets.
3. Choose which systems will be used to decrypt the escrow packets. On these systems, set up an
NSS database that contains the private key.
If the private key was not created in an NSS database, follow these steps:
Run:
certutil -d /the/nss/directory -N
At this point it is possible to choose an NSS database password. Each NSS database can
have a different password so the designated users do not need to share a single password if
a separate NSS database is used by each user.
Run:
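If the certificate and private key are available as a PKCS#12 file, an NSS import along these lines
(file name illustrative) is typical:
# pk12util -d /the/nss/directory -i the-key-and-certificate.p12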
4. Distribute the certificate to anyone installing systems or saving keys on existing systems.
5. For saved private keys, prepare storage that allows them to be looked up by machine and
volume. For example, this can be a simple directory with one subdirectory per machine, or a
database used for other system management tasks as well.
NOTE
For all examples in this file, /path/to/volume is a LUKS device, not the plaintext
device contained within; blkid -s type /path/to/volume should report
type="crypto_LUKS".
1. Run:
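Given the --save description and the use of a certificate for asymmetric encryption, the command is
expected to resemble the following (certificate path illustrative):
# volume_key --save /path/to/volume -c /path/to/cert escrow-packet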
2. Save the generated escrow-packet file in the prepared storage, associating it with the system
and the volume.
1. Get the escrow packet for the volume from the packet storage and send it to one of the
designated users for decryption.
After providing the NSS database password, the designated user chooses a passphrase for
encrypting escrow-packet-out. This passphrase can be different every time and only
protects the encryption keys while they are moved from the designated user to the target
system.
3. Obtain the escrow-packet-out file and the passphrase from the designated user.
4. Boot the target system in an environment that can run volume_key and have the escrow-
packet-out file available, such as in a rescue mode.
5. Run:
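A restore invocation consistent with the --restore description would be, for example:
# volume_key --restore /path/to/volume escrow-packet-out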
A prompt will appear for the packet passphrase chosen by the designated user, and for a new
passphrase for the volume.
To free up the passphrase slot in the LUKS header of the encrypted volume, remove the old, forgotten
passphrase with the command cryptsetup luksKillSlot device key-slot. For more information and
examples, see cryptsetup --help.
Running volume_key with the --create-random-passphrase option generates a random passphrase,
adds it to the specified volume, and stores it to passphrase-packet. It is also possible to combine the
--create-random-passphrase and -o options to generate both packets at the same time.
Running volume_key with the --secrets option shows the random passphrase. Give this passphrase
to the end user.
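Consistent with the option descriptions above, the two invocations are expected to look like the
following (paths illustrative):
# volume_key --save /path/to/volume --create-random-passphrase passphrase-packet
# volume_key --secrets -d /your/nss/directory passphrase-packet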
online at http://fedoraproject.org/wiki/Disk_encryption_key_escrow_use_cases
CHAPTER 21. SOLID-STATE DISK DEPLOYMENT GUIDELINES
Performance degrades as the number of used blocks approaches the disk capacity. The degree of
performance impact varies greatly by vendor. However, all devices experience some degradation.
To address the degradation issue, the host system (for example, the Linux kernel) may use discard
requests to inform the storage that a given range of blocks is no longer in use. An SSD can use this
information to free up space internally, using the free blocks for wear-leveling. Discards will only be
issued if the storage advertises support in terms of its storage protocol (be it ATA or SCSI). Discard
requests are issued to the storage using the negotiated discard command specific to the storage protocol
(TRIM command for ATA, and WRITE SAME with UNMAP set, or UNMAP command for SCSI).
Enabling discard support is most useful when the following points are true:
Most logical blocks on the underlying storage device have already been written to.
For more information about TRIM, see Data Set Management T13 Specifications.
For more information about UNMAP, see the section 4.7.3.4 of the SCSI Block Commands 3 T10
Specification.
NOTE
Not all solid-state devices on the market have discard support. To determine if your
solid-state device has discard support, check for
/sys/block/sda/queue/discard_granularity, which is the size of the device's internal
allocation unit.
Deployment Considerations
Because of the internal layout and operation of SSDs, it is best to partition devices on an internal erase
block boundary. Partitioning utilities in Red Hat Enterprise Linux 7 choose sane defaults if the SSD
exports topology information. However, if the device does not export topology information, Red Hat
recommends that the first partition be created at a 1MB boundary.
SSDs implement various types of TRIM mechanism, depending on the vendor. Early versions of disks
improved performance at the cost of possible data leakage after the read command. The following
mechanisms are used:
Non-deterministic TRIM
Deterministic TRIM (DRAT)
Deterministic Read Zero after TRIM (RZAT)
The first two types of TRIM mechanism can cause data leakage, as the read command to the LBA after a
TRIM can return different or the same data. RZAT returns zero after the read command, and Red Hat
recommends this TRIM mechanism to avoid data leakage. This affects only SSDs; choose a disk which
supports the RZAT mechanism.
Type of TRIM mechanism used depends on hardware implementation. To find the type of TRIM
mechanism on ATA, use the hdparm command. See the following example to find the type of TRIM
mechanism:
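A commonly used check greps the drive identification data for the TRIM-related capabilities (device name
and output illustrative):
# hdparm -I /dev/sda | grep TRIM
        *    Data Set Management TRIM supported (limit 8 blocks)
        *    Deterministic read ZEROs after TRIM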
The Logical Volume Manager (LVM), the device-mapper (DM) targets, and MD (software raid) targets
that LVM uses support discards. The only DM targets that do not support discards are dm-snapshot, dm-
crypt, and dm-raid45. Discard support for the dm-mirror was added in Red Hat Enterprise Linux 6.1 and
as of 7.0 MD supports discards.
Using RAID level 5 over SSD results in low performance if the SSDs do not handle discard correctly.
You can set discard in the raid456.conf file, or in the GRUB2 configuration. For instructions, see the
following procedure. In either case, first verify that discards are supported by the member devices:
# cat /sys/block/disk-name/queue/discard_zeroes_data
If the returned value is 1, discards are supported. If the command returns 0, the RAID code has
to zero the disk out, which takes more time.
To set the option in the GRUB2 configuration, add the following parameter to the GRUB_CMDLINE_LINUX
directive in the /etc/default/grub file:
raid456.devices_handle_discard_safely=Y
The location of the GRUB2 configuration file is different on systems with the BIOS firmware and
on systems with UEFI. Use one of the following commands to recreate the GRUB2 configuration
file.
On a system with BIOS firmware:
# grub2-mkconfig -o /boot/grub2/grub.cfg
On a system with UEFI firmware:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
NOTE
In Red Hat Enterprise Linux 7, discard is fully supported by the ext4 and XFS file systems
only.
In Red Hat Enterprise Linux 6.3 and earlier, only the ext4 file system fully supports discard. Starting with
Red Hat Enterprise Linux 6.4, both ext4 and XFS file systems fully support discard. To enable discard
commands on a device, use the discard option of the mount command. For example, to mount
/dev/sda2 to /mnt with discard enabled, use:
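A typical invocation is:
# mount -t ext4 -o discard /dev/sda2 /mnt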
By default, ext4 does not issue the discard command, primarily to avoid problems on devices which
might not properly implement discard. The Linux swap code issues discard commands to discard-
enabled devices, and there is no option to control this behavior.
CHAPTER 22. WRITE BARRIERS
A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and
ordered on persistent storage, even when storage devices with volatile write caches lose power.
Enabling write barriers incurs a substantial performance penalty for some applications. Specifically,
applications that use fsync() heavily or create and delete many small files will likely run much slower.
During a journal transaction commit, the sequence is as follows:
1. The file system sends the body of the transaction to the storage device.
2. The file system sends a commit block.
3. If the transaction and its corresponding commit block are written to disk, the file system assumes
that the transaction will survive any power failure.
However, file system integrity during power failure becomes more complex for storage devices with extra
caches. Storage target devices like local S-ATA or SAS drives may have write caches ranging from
32MB to 64MB in size (with modern drives). Hardware RAID controllers often contain internal write
caches. Further, high end arrays, like those from NetApp, IBM, Hitachi and EMC (among others), also
have large caches.
Storage devices with write caches report I/O as "complete" when the data is in cache; if the cache loses
power, it loses its data as well. Worse, as the cache de-stages to persistent storage, it may change the
original metadata ordering. When this occurs, the commit block may be present on disk without having
the complete, associated transaction in place. As a result, the journal may replay these uninitialized
transaction blocks into the file system during post-power-loss recovery; this will cause data inconsistency
and corruption.
With barriers enabled, an fsync() call also issues a storage cache flush. This guarantees that file data
is persistent on disk even if power loss occurs shortly after fsync() returns.
NOTE
Write caches are designed to increase I/O performance. However, enabling write barriers
means constantly flushing these caches, which can significantly reduce performance.
For devices with non-volatile, battery-backed write caches and those with write-caching disabled, you
can safely disable write barriers at mount time using the -o nobarrier option for mount. However,
some devices do not support write barriers; such devices log an error message to
/var/log/messages. For more information, see Table 22.1, “Write Barrier Error Messages per File
System”.
Most controllers use vendor-specific tools to query and manipulate target drives. For example, the LSI
Megaraid SAS controller uses a battery-backed write cache; this type of controller requires the
MegaCli64 tool to manage target drives. To show the state of all back-end drives for LSI Megaraid
SAS, use:
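A sketch based on commonly documented MegaCli syntax; verify the exact options against your controller documentation:
# MegaCli64 -LDGetProp -DskCache -LAll -aALL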
To disable the write cache of all back-end drives for LSI Megaraid SAS, use:
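Again a sketch; confirm the option names with your controller documentation before use:
# MegaCli64 -LDSetProp -DisDskCache -LAll -aALL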
NOTE
Hardware RAID cards recharge their batteries while the system is operational. If a system
is powered off for an extended period of time, the batteries will lose their charge, leaving
stored data vulnerable during a power failure.
High-End Arrays
High-end arrays have various ways of protecting data in the event of a power failure. As such, there is no
need to verify the state of the internal drives in external RAID storage.
NFS
NFS clients do not need to enable write barriers, since data integrity is handled by the NFS server side.
As such, NFS servers should be configured to ensure data persistence throughout a power loss (whether
through write barriers or other means).
CHAPTER 23. STORAGE I/O ALIGNMENT AND SIZE
The Linux I/O stack has been enhanced to process vendor-provided I/O alignment and I/O size
information, allowing storage management tools (parted, lvm, mkfs.*, and the like) to optimize data
placement and access. If a legacy device does not export I/O alignment and size data, then storage
management tools in Red Hat Enterprise Linux 7 will conservatively align I/O on a 4k (or larger power of
2) boundary. This will ensure that 4k-sector devices operate correctly even if they do not indicate any
required/preferred I/O alignment and size.
To determine the information that the operating system obtained from the device, see Section 23.2, “Userspace Access”. This data is subsequently used by the storage management tools
to determine data placement.
The I/O scheduler has changed in Red Hat Enterprise Linux 7. The default I/O scheduler is now Deadline, except for SATA drives, which use CFQ as the default. On faster storage, Deadline outperforms CFQ and usually improves performance without any special tuning. If the default is not right for some disks (for example, SAS rotational disks), change the I/O scheduler to CFQ; the best choice depends on the workload.
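To check or change the scheduler for a single device at run time, the standard sysfs interface can be used; a sketch assuming the disk is sda:
# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
# echo cfq > /sys/block/sda/queue/scheduler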
physical_block_size
Smallest internal unit on which the device can operate
logical_block_size
Used externally to address a location on the device
alignment_offset
The number of bytes that the beginning of the Linux block device (partition/MD/LVM device) is offset
from the underlying physical alignment
minimum_io_size
The device’s preferred minimum unit for random I/O
optimal_io_size
The device’s preferred unit for streaming I/O
For example, certain 4K sector devices may use a 4K physical_block_size internally but expose a
more granular 512-byte logical_block_size to Linux. This discrepancy introduces potential for
misaligned I/O. To address this, the Red Hat Enterprise Linux 7 I/O stack will attempt to start all data
areas on a naturally-aligned boundary (physical_block_size) by making sure it accounts for any
alignment_offset if the beginning of the block device is offset from the underlying physical alignment.
Storage vendors can also supply I/O hints about the preferred minimum unit for random I/O (minimum_io_size) and the preferred unit for streaming I/O (optimal_io_size) of a device.
With native 4K devices (i.e. logical_block_size is 4K) it is now critical that applications perform
direct I/O in multiples of the device's logical_block_size. This means that applications that perform 512-byte aligned I/O rather than 4k-aligned I/O will fail with native 4k devices.
To avoid this, an application should consult the I/O parameters of a device to ensure it is using the
proper I/O alignment and size. As mentioned earlier, I/O parameters are exposed through both the sysfs and block device ioctl interfaces.
For more information, see man libblkid. This man page is provided by the libblkid-devel
package.
sysfs Interface
/sys/block/disk/alignment_offset
or
/sys/block/disk/partition/alignment_offset
NOTE
The file location depends on whether the disk is a physical disk (be that a local
disk, local RAID, or a multipath LUN) or a virtual disk. The first file location is
applicable to physical disks while the second file location is applicable to virtual
disks. The reason for this is because virtio-blk will always report an alignment
value for the partition. Physical disks may or may not report an alignment value.
/sys/block/disk/queue/physical_block_size
/sys/block/disk/queue/logical_block_size
/sys/block/disk/queue/minimum_io_size
/sys/block/disk/queue/optimal_io_size
The kernel will still export these sysfs attributes for "legacy" devices that do not provide I/O parameters
information, for example:
alignment_offset: 0
physical_block_size: 512
logical_block_size: 512
minimum_io_size: 512
optimal_io_size: 0
The corresponding block device ioctls report the same parameters:
BLKPBSZGET: physical_block_size
BLKSSZGET: logical_block_size
BLKIOMIN: minimum_io_size
BLKIOOPT: optimal_io_size
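These values can also be read from a shell with the blockdev utility (a sketch, assuming the device is /dev/sda):
# blockdev --getpbsz --getss --getiomin --getioopt /dev/sda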
ATA
ATA devices must report appropriate information via the IDENTIFY DEVICE command. ATA devices
only report I/O parameters for physical_block_size, logical_block_size, and
alignment_offset. The additional I/O hints are outside the scope of the ATA Command Set.
SCSI
I/O parameters support in Red Hat Enterprise Linux 7 requires at least version 3 of the SCSI Primary
Commands (SPC-3) protocol. The kernel will only send an extended inquiry (which gains access to the
BLOCK LIMITS VPD page) and READ CAPACITY(16) command to devices which claim compliance
with SPC-3.
The READ CAPACITY(16) command provides the block sizes and alignment offset:
/sys/block/disk/queue/physical_block_size
/sys/block/disk/queue/logical_block_size
/sys/block/disk/alignment_offset
/sys/block/disk/partition/alignment_offset
The BLOCK LIMITS VPD page (0xb0) provides the I/O hints. It also uses OPTIMAL TRANSFER
LENGTH GRANULARITY and OPTIMAL TRANSFER LENGTH to derive:
/sys/block/disk/queue/minimum_io_size
/sys/block/disk/queue/optimal_io_size
The sg3_utils package provides the sg_inq utility, which can be used to access the BLOCK LIMITS
VPD page. To do so, run:
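A sketch, assuming the device is /dev/sda:
# sg_inq -p 0xb0 /dev/sda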
Only one layer in the I/O stack should adjust for a non-zero alignment_offset; once a layer
adjusts accordingly, it will export a device with an alignment_offset of zero.
A striped Device Mapper (DM) device created with LVM must export a minimum_io_size and
optimal_io_size relative to the stripe count (number of disks) and user-provided chunk size.
In Red Hat Enterprise Linux 7, Device Mapper and Software Raid (MD) device drivers can be used to
arbitrarily combine devices with different I/O parameters. The kernel's block layer will attempt to
reasonably combine the I/O parameters of the individual devices. The kernel will not prevent combining
heterogeneous devices; however, be aware of the risks associated with doing so.
For instance, a 512-byte device and a 4K device may be combined into a single logical DM device, which
would have a logical_block_size of 4K. File systems layered on such a hybrid device assume that
4K will be written atomically, but in reality it will span 8 logical block addresses when issued to the 512-
byte device. Using a 4K logical_block_size for the higher-level DM device increases potential for a
partial write to the 512-byte device if there is a system crash.
If combining the I/O parameters of multiple devices results in a conflict, the block layer may issue a
warning that the device is susceptible to partial writes and/or is misaligned.
By default, LVM will adjust for any alignment_offset, but this behavior can be disabled by setting
data_alignment_offset_detection to 0 in /etc/lvm/lvm.conf. Disabling this is not
recommended.
LVM will also detect the I/O hints for a device. The start of a device's data area will be a multiple of the
minimum_io_size or optimal_io_size exposed in sysfs. LVM will use the minimum_io_size if
optimal_io_size is undefined (i.e. 0).
By default, LVM will automatically determine these I/O hints, but this behavior can be disabled by setting
data_alignment_detection to 0 in /etc/lvm/lvm.conf. Disabling this is not recommended.
This section describes how different partition and file system management tools interact with a device's
I/O parameters.
Always use the reported alignment_offset as the offset for the start of the first primary
partition.
This is the catch-all for "legacy" devices which don't appear to provide I/O hints. As such, by
default all partitions will be aligned on a 1MB boundary.
NOTE
Red Hat Enterprise Linux 7 cannot distinguish between devices that don't provide
I/O hints and those that do so with alignment_offset=0 and
optimal_io_size=0. Such a device might be a single SAS 4K device; as such,
at worst 1MB of space is lost at the start of the disk.
Except for mkfs.gfs2, all other mkfs.filesystem utilities also use the I/O hints to lay out on-disk data structures and data areas relative to the minimum_io_size and optimal_io_size of the underlying storage device. This allows file systems to be optimally formatted for various RAID (striped) layouts.
CHAPTER 24. SETTING UP A REMOTE DISKLESS SYSTEM
tftp-server
xinetd
dhcp
syslinux
dracut-network
NOTE
After installing the dracut-network package, add the following line to /etc/dracut.conf:
add_dracutmodules+="nfs"
Remote diskless system booting requires both a tftp service (provided by tftp-server) and a DHCP service (provided by dhcp). The tftp service is used to retrieve the kernel image and initrd over the network via the PXE loader.
NOTE
SELinux is only supported over NFSv4.2. To use SELinux, NFS must be explicitly
enabled in /etc/sysconfig/nfs by adding the line:
RPCNFSDARGS="-V 4.2"
The following sections outline the necessary procedures for deploying remote diskless systems in a
network environment.
IMPORTANT
Some RPM packages have started using file capabilities (such as setcap and getcap).
However, NFS does not currently support these so attempting to install or update any
packages that use file capabilities will fail.
Procedure
To configure tftp, perform the following steps:
# cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/
# mkdir -p /var/lib/tftpboot/pxelinux.cfg/
As tftp supports TCP wrappers, you can configure host access to tftp in the
/etc/hosts.allow configuration file. For more information on configuring TCP wrappers and
the /etc/hosts.allow configuration file, see the Red Hat Enterprise Linux 7 Security Guide.
The hosts_access(5) man page also provides information about /etc/hosts.allow.
Next Steps
After configuring tftp for diskless clients, configure DHCP, NFS, and the exported file system
accordingly. For instructions on configuring the DHCP, NFS, and the exported file system, see
Section 24.2, “Configuring DHCP for Diskless Clients” and Section 24.3, “Configuring an Exported File
System for Diskless Clients”.
Before configuring DHCP, ensure that the tftp service is configured as described in Section 24.1, “Configuring a tftp Service for Diskless Clients”.
Procedure
1. After configuring a tftp server, you need to set up a DHCP service on the same host machine.
For instructions on setting up a DHCP server, see the Configuring a DHCP Server.
2. Enable PXE booting on the DHCP server by adding the following configuration to
/etc/dhcp/dhcpd.conf:
allow booting;
allow bootp;
class "pxeclients" {
match if substring(option vendor-class-identifier, 0, 9) =
"PXEClient";
next-server server-ip;
filename "pxelinux.0";
}
Replace server-ip with the IP address of the host machine on which the tftp and DHCP
services reside.
NOTE
When libvirt virtual machines are used as the diskless client, libvirt provides the DHCP service and the standalone DHCP server is not used. In this situation, network booting must be enabled with the bootp file='filename' option in the libvirt network configuration (edited with virsh net-edit).
Next Steps
Now that tftp and DHCP are configured, configure NFS and the exported file system. For instructions,
see Section 24.3, “Configuring an Exported File System for Diskless Clients”.
Before proceeding, configure the tftp service (see Section 24.1, “Configuring a tftp Service for Diskless Clients”) and DHCP (see Section 24.2, “Configuring DHCP for Diskless Clients”).
Procedure
1. The root directory of the exported file system (used by diskless clients in the network) is shared
via NFS. Configure the NFS service to export the root directory by adding it to /etc/exports.
For instructions on how to do so, see Section 8.7.1, “The /etc/exports Configuration
File”.
2. To accommodate completely diskless clients, the root directory should contain a complete
Red Hat Enterprise Linux installation. You can either clone an existing installation or install a
new base system:
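To clone an existing installation, one possible sketch uses rsync over ssh into the export directory (/exported/root/directory is the export location used later in this chapter):
# rsync -a -e ssh --exclude='/proc/*' --exclude='/sys/*' hostname.com:/ /exported/root/directory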
Replace hostname.com with the hostname of the running system with which to
synchronize via rsync.
To install Red Hat Enterprise Linux to the exported location, use the yum utility with the --
installroot option:
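A sketch; the package set shown here (the Base group plus kernel and network-boot support) is an assumption and can be adjusted:
# yum install @Base kernel dracut-network nfs-utils --installroot=/exported/root/directory --releasever=/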
The file system to be exported still needs to be configured further before it can be used by diskless
clients. To do this, perform the following procedure:
1. Select the kernel that diskless clients should use (vmlinuz-kernel-version) and copy it to
the tftp boot directory:
# cp /boot/vmlinuz-kernel-version /var/lib/tftpboot/
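2. Create the initrd (initramfs-kernel-version.img) with NFS support. A sketch using dracut; the nfs module name is an assumption based on the dracut-network package installed earlier:
# dracut --add nfs initramfs-kernel-version.img kernel-version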
3. Change the initrd's file permissions to 644 using the following command:
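For example (adjust the path to wherever the image was created):
# chmod 644 initramfs-kernel-version.img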
WARNING
If the initrd's file permissions are not changed, the pxelinux.0 boot loader
will fail with a "file not found" error.
4. Copy the resulting initramfs-kernel-version.img into the tftp boot directory as well.
5. Edit the default boot configuration to use the initrd and kernel in the /var/lib/tftpboot/
directory. This configuration should instruct the diskless client's root to mount the exported file
system (/exported/root/directory) as read-write. Add the following configuration in the
/var/lib/tftpboot/pxelinux.cfg/default file:
default rhel7
label rhel7
kernel vmlinuz-kernel-version
append initrd=initramfs-kernel-version.img root=nfs:server-
ip:/exported/root/directory rw
Replace server-ip with the IP address of the host machine on which the tftp and DHCP
services reside.
The NFS share is now ready for exporting to diskless clients. These clients can boot over the network via
PXE.
CHAPTER 25. ONLINE STORAGE MANAGEMENT
This chapter focuses on adding, removing, modifying, and monitoring storage devices. It does not
discuss the Fibre Channel or iSCSI protocols in detail. For more information about these protocols, refer
to other documentation.
This chapter makes reference to various sysfs objects. Red Hat advises that the sysfs object names
and directory structure are subject to change in major Red Hat Enterprise Linux releases. This is
because the upstream Linux kernel does not provide a stable internal API. For guidelines on how to
reference sysfs objects in a transportable way, refer to the document /usr/share/doc/kernel-doc-version/Documentation/sysfs-rules.txt in the kernel source tree.
WARNING
In addition, Red Hat recommends that you back up all data before reconfiguring
online storage.
The hierarchy of targetcli does not always match the kernel interface exactly because targetcli is
simplified where possible.
IMPORTANT
To ensure that the changes made in targetcli are persistent, start and enable the
target service:
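For example, using systemctl:
# systemctl start target
# systemctl enable target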
Open port 3260 in the firewall and reload the firewall configuration:
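A sketch using firewalld:
# firewall-cmd --permanent --add-port=3260/tcp
# firewall-cmd --reload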
Use the targetcli command, and then use the ls command for the layout of the tree interface:
# targetcli
:
/> ls
o- /........................................[...]
o- backstores.............................[...]
| o- block.................[Storage Objects: 0]
| o- fileio................[Storage Objects: 0]
| o- pscsi.................[Storage Objects: 0]
| o- ramdisk...............[Storage Objects: 0]
o- iscsi...........................[Targets: 0]
o- loopback........................[Targets: 0]
NOTE
In Red Hat Enterprise Linux 7.0, using the targetcli command from Bash, for example,
targetcli iscsi/ create, does not work and does not return an error. Starting with
Red Hat Enterprise Linux 7.1, an error status code is provided to make using targetcli
with shell scripts more useful.
NOTE
In Red Hat Enterprise Linux 6, the term 'backing-store' is used to refer to the mappings
created. However, to avoid confusion between the various ways 'backstores' can be used,
in Red Hat Enterprise Linux 7 the term 'storage objects' refers to the mappings created
and 'backstores' is used to describe the different types of backing devices.
To create a fileio storage object, run the command /backstores/fileio create file_name
file_location file_size write_back=false. For example:
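A sketch reusing the file1 object and /foo.img file that appear in the LUN listing later in this section (the 200M size is illustrative):
/> /backstores/fileio create file1 /foo.img 200M write_back=false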
NOTE
To create a BLOCK backstore using any block device, first create or identify a partition on the device, for example with fdisk:
# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
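After a partition such as /dev/vdb1 exists, create the block storage object from it. A sketch using the block1 name and /dev/vdb1 device that appear in the LUN listing later in this section:
/> /backstores/block create name=block1 dev=/dev/vdb1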
To create a PSCSI backstore for a physical SCSI device, a TYPE_ROM device using /dev/sr0 in
this example, use:
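A sketch; the pscsi_backend name is illustrative:
/> /backstores/pscsi create name=pscsi_backend dev=/dev/sr0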
1. Run targetcli.
2. Change to the iscsi node of the configuration tree:
/> iscsi/
3. Create an iSCSI target with a default target name:
/iscsi> create
Created target
iqn.2003-01.org.linux-iscsi.hostname.x8664:sn.78b473f296ff
Created TPG1
4. Verify that the newly created target is visible when targets are listed with ls.
/iscsi > ls
o- iscsi.......................................[1 Target]
o- iqn.2006-04.com.example:444................[1 TPG]
o- tpg1...........................[enabled, auth]
o- acls...............................[0 ACL]
o- luns...............................[0 LUN]
o- portals.........................[0 Portal]
NOTE
As of Red Hat Enterprise Linux 7.1, whenever a target is created, a default portal is also
created.
NOTE
As of Red Hat Enterprise Linux 7.1 when an iSCSI target is created, a default portal is
created as well. This portal is set to listen on all IP addresses with the default port number
(that is, 0.0.0.0:3260). To remove this and add only specified portals, use /iscsi/iqn-
name/tpg1/portals delete ip_address=0.0.0.0 ip_port=3260 then create a
new portal with the required information.
1. Change to the TPG of the target:
/iscsi> iqn.2006-04.example:444/tpg1/
2. There are two ways to create a portal: create a default portal, or create a portal specifying what
IP address to listen to.
Creating a default portal uses the default iSCSI port 3260 and allows the target to listen on all IP
addresses on that port.
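A sketch of the default form:
/iscsi/iqn.20...mple:444/tpg1> portals/ create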
To create a portal specifying what IP address to listen to, use the following command.
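A sketch, reusing the 192.168.122.137 address shown in the listing below:
/iscsi/iqn.20...mple:444/tpg1> portals/ create 192.168.122.137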
3. Verify that the newly created portal is visible with the ls command.
/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
o- acls ......................................[0 ACL]
o- luns ......................................[0 LUN]
o- portals ................................[1 Portal]
o- 192.168.122.137:3260......................[OK]
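To assign storage objects as LUNs under the TPG, a sketch using the backstore names that appear in the listing below:
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/ramdisk/ramdisk1
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/block/block1
/iscsi/iqn.20...mple:444/tpg1> luns/ create /backstores/fileio/file1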
/iscsi/iqn.20...mple:444/tpg1> ls
o- tpg.................................. [enabled, auth]
o- acls ......................................[0 ACL]
o- luns .....................................[3 LUNs]
| o- lun0.........................[ramdisk/ramdisk1]
| o- lun1.................[block/block1 (/dev/vdb1)]
| o- lun2...................[fileio/file1 (/foo.img)]
o- portals ................................[1 Portal]
o- 192.168.122.137:3260......................[OK]
NOTE
Be aware that the default LUN name starts at 0, as opposed to 1 as was the case
when using tgtd in Red Hat Enterprise Linux 6.
IMPORTANT
By default, LUNs are created with read-write permissions. If a new LUN is added after ACLs have been created, that LUN will be automatically mapped to all available ACLs, which can pose a security risk. Use the following procedure to create a LUN as read-only.
1. To create a LUN with read-only permissions, first use the following command:
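A sketch using the auto_add_mapped_luns global setting mentioned later in this section:
/> set global auto_add_mapped_luns=false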
This prevents the automatic mapping of LUNs to existing ACLs, allowing the manual mapping of LUNs. Next, manually create the LUN with the write_protect=1 parameter set:
/> iscsi/iqn.2015-06.com.redhat:target/tpg1/acls/iqn.2015-
06.com.redhat:initiator/ create mapped_lun=1
tpg_lun_or_backstore=/backstores/block/block2 write_protect=1
Created LUN 1.
Created Mapped LUN 1.
/> ls
o- / ...................................................... [...]
o- backstores ........................................... [...]
<snip>
o- iscsi ......................................... [Targets: 1]
| o- iqn.2015-06.com.redhat:target .................. [TPGs: 1]
| o- tpg1 ............................ [no-gen-acls, no-auth]
| o- acls ....................................... [ACLs: 2]
| | o- iqn.2015-06.com.redhat:initiator .. [Mapped LUNs: 2]
| | | o- mapped_lun0 .............. [lun0 block/disk1 (rw)]
| | | o- mapped_lun1 .............. [lun1 block/disk2 (ro)]
| o- luns ....................................... [LUNs: 2]
| | o- lun0 ...................... [block/disk1 (/dev/vdb)]
| | o- lun1 ...................... [block/disk2 (/dev/vdc)]
<snip>
The mapped_lun1 line now has (ro) at the end (unlike mapped_lun0's (rw)) stating that it is read-
only.
To configure ACLs, change to the acls node of the TPG:
/iscsi/iqn.20...mple:444/tpg1> acls/
NOTE
The given example's behavior depends on the setting used. In this case, the
global setting auto_add_mapped_luns is used. This automatically maps LUNs
to any created ACL.
You can set user-created ACLs within the TPG node on the target server:
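A sketch, reusing the initiator name shown in the listing below:
/iscsi/iqn.20...444/tpg1/acls> create iqn.2006-04.com.example.foo:888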
/iscsi/iqn.20...444/tpg1/acls> ls
o- acls .................................................[1 ACL]
o- iqn.2006-04.com.example.foo:888 ....[3 Mapped LUNs, auth]
o- mapped_lun0 .............[lun0 ramdisk/ramdisk1 (rw)]
o- mapped_lun1 .................[lun1 block/block1 (rw)]
o- mapped_lun2 .................[lun2 fileio/file1 (rw)]
IMPORTANT
Before proceeding, refer to Section 25.4, “Configuring a Fibre Channel over Ethernet
Interface” and verify that basic FCoE setup is completed, and that fcoeadm -i displays
configured FCoE interfaces.
1. Setting up an FCoE target requires the installation of the targetcli package, along with its
dependencies. Refer to Section 25.1, “Target Setup” for more information on targetcli basics
and set up.
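2. Create an FCoE target instance on an FCoE interface. A sketch, using the interface WWN shown in the following step:
/> tcm_fc/ create 00:11:22:33:44:55:66:77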
If FCoE interfaces are present on the system, tab-completing after create will list available
interfaces. If not, ensure fcoeadm -i shows active interfaces.
/> tcm_fc/00:11:22:33:44:55:66:77
5. To make the changes persistent across reboots, use the saveconfig command and type yes
when prompted. If this is not done the configuration will be lost after rebooting.
/> /backstores/backstore-type/backstore-name
To remove parts of an iSCSI target, such as an ACL, use the following command:
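A sketch; iqn-name and acl-name are placeholders for the target and ACL names in use:
/> /iscsi/iqn-name/tpg1/acls/ delete acl-name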
To remove the entire target, including all ACLs, LUNs, and portals, use the following command:
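A sketch; iqn-name is a placeholder for the target IQN:
/> /iscsi delete iqn-name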
man targetcli
The targetcli man page. It includes an example walk through.
NOTE
This was uploaded on February 28, 2012. As such, the service name has changed
from targetcli to target.
In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default: the service starts after
running the iscsiadm command.
1. Install iscsi-initiator-utils:
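For example:
# yum install iscsi-initiator-utils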
2. If the ACL was given a custom name in Section 25.1.6, “Configuring ACLs”, modify the
/etc/iscsi/initiatorname.iscsi file accordingly. For example:
# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2006-04.com.example.node1
# vi /etc/iscsi/initiatorname.iscsi
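3. Discover the target. A sketch using the sendtargets discovery mode described later in this chapter (target-IP is a placeholder):
# iscsiadm -m discovery -t st -p target-IP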
4. Log in to the target with the target IQN you discovered in step 3:
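A sketch; the IQN shown reuses the example target created earlier in this chapter:
# iscsiadm -m node -T iqn.2006-04.com.example:444 -l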
This procedure can be followed for any number of initiators connected to the same LUN so long
as their specific initiator names are added to the ACL as described in Section 25.1.6,
“Configuring ACLs”.
5. Find the iSCSI disk name and create a file system on this iSCSI disk:
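One way to find the disk name is to check the kernel log after logging in (a sketch):
# grep "Attached SCSI" /var/log/messages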
# mkfs.ext4 /dev/disk_name
6. Mount the file system:
# mkdir /mount/point
# mount /dev/disk_name /mount/point
7. Edit the /etc/fstab to mount the file system automatically when the system boots:
# vim /etc/fstab
/dev/disk_name /mount/point ext4 _netdev 0 0
IMPORTANT
If your system is using multipath software, Red Hat recommends that you consult your
hardware vendor before changing any of the values described in this section.
Transport: /sys/class/fc_transport/targetH:B:T/
port_id
node_name
port_name
dev_loss_tmo: controls when the scsi device gets removed from the system. After
dev_loss_tmo triggers, the scsi device is removed.
In multipath.conf, you can set dev_loss_tmo to infinity, which sets its value to
2,147,483,647 seconds, or 68 years, and is the maximum dev_loss_tmo value.
In Red Hat Enterprise Linux 7, if you do not set the fast_io_fail_tmo option,
dev_loss_tmo is capped to 600 seconds. By default, fast_io_fail_tmo is set to 5
seconds in Red Hat Enterprise Linux 7 if the multipathd service is running; otherwise, it is
set to off.
If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue is
unblocked.
Host: /sys/class/fc_host/hostH/
port_id
lpfc
qla2xxx
zfcp
bfa
IMPORTANT
The qla2xxx driver runs in initiator mode by default. To use qla2xxx with Linux-IO, enable
Fibre Channel target mode with the corresponding qlini_mode module parameter.
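A sketch of one way to set the parameter persistently; the file name and the disabled value are assumptions that should be checked against the module documentation (modinfo qla2xxx):
# echo "options qla2xxx qlini_mode=disabled" >> /etc/modprobe.d/qla2xxx.conf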
First, make sure that the firmware package for your qla device, such as ql2200-firmware
or similar, is installed.
Then, use the dracut -f command to rebuild the initial ramdisk (initrd), and reboot
the system for the changes to take effect.
Table 25.1, “Fibre Channel API Capabilities” describes the different Fibre Channel API capabilities of
each native Red Hat Enterprise Linux 7 driver. X denotes support for the capability.
Capability                   lpfc   qla2xxx   zfcp   bfa
Transport port_id             X       X        X      X
Transport node_name           X       X        X      X
Transport port_name           X       X        X      X
Remote Port dev_loss_tmo      X       X        X      X
Host port_id                  X       X        X      X
Host issue_lip                X       X               X
fcoe-utils
lldpad
Once these packages are installed, perform the following procedure to enable FCoE over a virtual LAN
(VLAN):
1. To configure a new VLAN, make a copy of an existing network script, for example
/etc/fcoe/cfg-eth0, and change the name to the Ethernet device that supports FCoE. This
provides you with a default file to configure. Given that the FCoE device is ethX, run:
# cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-ethX
Modify the contents of cfg-ethX as needed. Notably, set DCB_REQUIRED to no for networking
interfaces that implement a hardware Data Center Bridging Exchange (DCBX) protocol client.
2. If you want the device to automatically load during boot time, set ONBOOT=yes in the
corresponding /etc/sysconfig/network-scripts/ifcfg-ethX file. For example, if the
FCoE device is eth2, edit /etc/sysconfig/network-scripts/ifcfg-eth2 accordingly.
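3. Start the data center bridging daemon; a sketch assuming the systemd unit provided by the lldpad package:
# systemctl start lldpad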
4. For networking interfaces that implement a hardware DCBX client, skip this step.
For interfaces that require a software DCBX client, enable data center bridging on the Ethernet
interface by running:
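A sketch using dcbtool (from lldpad); verify the exact syntax in the dcbtool(8) man page:
# dcbtool sc ethX dcb on
# dcbtool sc ethX app:fcoe e:1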
Note that these commands only work if the dcbd settings for the Ethernet interface were not
changed.
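Then bring the interface up and start the fcoe service (a sketch):
# ip link set dev ethX up
# systemctl start fcoe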
The FCoE device appears soon if all other settings on the fabric are correct. To view configured
FCoE devices, run:
# fcoeadm -i
After correctly configuring the Ethernet interface to use FCoE, Red Hat recommends that you set FCoE
and the lldpad service to run at startup. To do so, use the systemctl utility:
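For example:
# systemctl enable fcoe
# systemctl enable lldpad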
NOTE
Running the # systemctl stop fcoe command stops the daemon, but does not reset
the configuration of FCoE interfaces. To do so, run the # systemctl -s SIGHUP
kill fcoe command.
As of Red Hat Enterprise Linux 7, Network Manager has the ability to query and set the DCB settings of
a DCB capable Ethernet interface.
NOTE
You can mount newly discovered disks via udev rules, autofs, and other similar methods. Sometimes,
however, a specific service might require the FCoE disk to be mounted at boot-time. In such cases, the
FCoE disk should be mounted as soon as the fcoe service runs and before the initiation of any service
that requires the FCoE disk.
To configure an FCoE disk to automatically mount at boot, add proper FCoE mounting code to the
startup script for the fcoe service. The fcoe startup script is
/lib/systemd/system/fcoe.service.
The FCoE mounting code is different per system configuration, whether you are using a simple formatted
FCoE disk, LVM, or multipathed device node.
The following is a sample FCoE mounting code for mounting file systems specified via wild cards in
/etc/fstab:
mount_fcoe_disks_from_fstab()
{
	local timeout=20
	local done=1
	local fcoe_disks=($(egrep 'by-path\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1))
	test -z "$fcoe_disks" && return 0
	# Wait up to $timeout seconds for every listed FCoE block device to appear.
	while [ $timeout -gt 0 ]; do
		done=1; for disk in ${fcoe_disks[*]}; do test -b "$disk" || done=0; done
		test $done -eq 1 && break
		sleep 1; let timeout--
	done
	test $done -eq 1 && mount -a 2> /dev/null
}
The mount_fcoe_disks_from_fstab function should be invoked after the fcoe service script starts
the fcoemon daemon. This will mount FCoE disks specified by the following paths in /etc/fstab:
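Illustrative /etc/fstab entries (the by-path names, mount points, and file system type are placeholders):
/dev/disk/by-path/fc-0xXX /mnt/fcoe-disk1 ext4 defaults,_netdev 0 0
/dev/disk/by-path/fc-0xYY /mnt/fcoe-disk2 ext4 defaults,_netdev 0 0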
Entries with fc- and _netdev sub-strings enable the mount_fcoe_disks_from_fstab function to
identify FCoE disk mount entries. For more information on /etc/fstab entries, refer to man 5 fstab.
NOTE
The fcoe service does not implement a timeout for FCoE disk discovery. As such, the
FCoE mounting code should implement its own timeout period.
25.6. ISCSI
This section describes the iSCSI API and the iscsiadm utility. Before using the iscsiadm utility, install
the iscsi-initiator-utils package first by running yum install iscsi-initiator-utils.
In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default. If root is not on an iSCSI
device or there are no nodes marked with node.startup = automatic then the iSCSI service will
not start until an iscsiadm command is run that requires iscsid or the iscsi kernel modules to be started.
For example, running the discovery command iscsiadm -m discovery -t st -p ip:port will
cause iscsiadm to start the iSCSI service.
To force the iscsid daemon to run and iSCSI kernel modules to load, run systemctl start
iscsid.service.
# iscsiadm -m session -P 3
This command displays the session/device state, session ID (sid), some negotiated parameters, and the
SCSI devices accessible through the session.
For shorter output (for example, to display only the sid-to-node mapping), run:
# iscsiadm -m session -P 0
or
# iscsiadm -m session
These commands print the list of running sessions with the format:
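Roughly (a sketch of the output layout; the exact fields can vary by version):
driver [sid] target_ip:port,target_portal_group_tag proper_target_name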
For example:
# iscsiadm -m session
The major and minor number range and associated sd names are allocated for each device when it is
detected. This means that the association between the major and minor number range and associated
sd names can change if the order of device detection changes. Although this is unusual with some
hardware configurations (for example, with an internal SCSI controller and disks that have their SCSI
target ID assigned by their physical location within a chassis), it can nevertheless occur. Examples of
situations where this can happen are as follows:
A disk may fail to power up or respond to the SCSI controller. This will result in it not being
detected by the normal device probe. The disk will not be accessible to the system and
subsequent devices will have their major and minor number range, including the associated sd
names shifted down. For example, if a disk normally referred to as sdb is not detected, a disk
that is normally referred to as sdc would instead appear as sdb.
A SCSI controller (host bus adapter, or HBA) may fail to initialize, causing all disks connected to
that HBA to not be detected. Any disks connected to subsequently probed HBAs would be
assigned different major and minor number ranges, and different associated sd names.
The order of driver initialization could change if different types of HBAs are present in the
system. This would cause the disks connected to those HBAs to be detected in a different order.
This can also occur if HBAs are moved to different PCI slots on the system.
Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be
inaccessible at the time the storage devices are probed, due to a storage array or intervening
switch being powered off, for example. This could occur when a system reboots after a power
failure, if the storage array takes longer to come online than the system takes to boot. Although
some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to
WWPN mapping, this will not cause the major and minor number ranges, and the associated sd
names to be reserved, it will only provide consistent SCSI target ID numbers.
These reasons make it undesirable to use the major and minor number range or the associated sd
names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong
device will be mounted and data corruption could result.
Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is
used (such as when errors are reported by a device). This is because the Linux kernel uses sd names
(and also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device.
This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital
Product Data (page 0x83) or Unit Serial Number (page 0x80). The mappings from these WWIDs to the
current /dev/sd names can be seen in the symlinks maintained in the /dev/disk/by-id/ directory.
Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device
name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name
to reference the data on the disk, even if the path to the device changes, and even when accessing the
device from different systems.
If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM
Multipath then presents a single "pseudo-device" in the /dev/mapper/wwid directory, such as
/dev/mapper/3600508b400105df70000e00000ac0000.
DM Multipath automatically maintains the proper mapping of each WWID-based device name to its
corresponding /dev/sd name on the system. These names are persistent across path changes, and
they are consistent when accessing the device from different systems.
When the user_friendly_names feature (of DM Multipath) is used, the WWID is mapped to a name
of the form /dev/mapper/mpathn. By default, this mapping is maintained in the file
/etc/multipath/bindings. These mpathn names are persistent as long as that file is maintained.
In addition to these persistent names provided by the system, you can also use udev rules to implement
persistent names of your own, mapped to the WWID of the storage.
The kernel
Generates events that are sent to user space when devices are added, removed, or changed.
This mechanism is used for all types of devices in Linux, not just for storage devices. In the case of
storage devices, Red Hat Enterprise Linux contains udev rules that create symbolic links in the
/dev/disk/ directory allowing storage devices to be referred to by their contents, a unique identifier,
their serial number, or the hardware path used to access the device.
/dev/disk/by-label/
Entries in this directory provide a symbolic name that refers to the storage device by a label in the
contents (that is, the data) stored on the device. The blkid utility is used to read data from the device
and determine a name (that is, a label) for the device. For example:
/dev/disk/by-label/Boot
NOTE
The information is obtained from the contents (that is, the data) on the device so if the
contents are copied to another device, the label will remain the same.
The label can also be used to refer to the device in /etc/fstab using the following syntax:
LABEL=Boot
/dev/disk/by-uuid/
Entries in this directory provide a symbolic name that refers to the storage device by a unique
identifier in the contents (that is, the data) stored on the device. The blkid utility is used to read data
from the device and obtain a unique identifier (that is, the UUID) for the device. For example:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6
/dev/disk/by-id/
Entries in this directory provide a symbolic name that refers to the storage device by a unique
identifier (different from all other storage devices). The identifier is a property of the device but is not
stored in the contents (that is, the data) on the devices. For example:
/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05
/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05
The id is obtained from the world-wide ID of the device, or the device serial number. The
/dev/disk/by-id/ entries may also include a partition number. For example:
/dev/disk/by-id/scsi-3600508e000000000ce506dc50ab0ad05-part1
/dev/disk/by-id/wwn-0x600508e000000000ce506dc50ab0ad05-part1
/dev/disk/by-path/
Entries in this directory provide a symbolic name that refers to the storage device by the hardware
path used to access the device, beginning with a reference to the storage controller in the PCI
hierarchy, and including the SCSI host, channel, target, and LUN numbers and, optionally, the
partition number. Although these names are preferable to using major and minor numbers or sd
names, caution must be used to ensure that the target numbers do not change in a Fibre Channel
SAN environment (for example, through the use of persistent binding) and that the use of the names
is updated if a host adapter is moved to a different PCI slot. In addition, there is the possibility that the
SCSI host numbers could change if a HBA fails to probe, if drivers are loaded in a different order, or
if a new HBA is installed on the system. An example of by-path listing is:
/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0
The /dev/disk/by-path/ entries may also include a partition number, such as:
/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0-part1
It is possible that the device may not be accessible at the time the query is performed because
the udev mechanism may rely on the ability to query the storage device when the udev rules
are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI or FCoE
storage devices when the device is not located in the server chassis.
The kernel may also send udev events at any time, causing the rules to be processed and
possibly causing the /dev/disk/by-*/ links to be removed if the device is not accessible.
There can be a delay between when the udev event is generated and when it is processed (such as when a large number of devices are detected and the user-space udevd service takes some amount of time to process the rules for each one). This could cause a delay between when
the kernel detects the device and when the /dev/disk/by-*/ names are available.
External programs such as blkid invoked by the rules may open the device for a brief period of
time, making the device inaccessible for other uses.
Although udev naming attributes are persistent, in that they do not change on their own across system
reboots, some are also configurable. You can set custom values for the following persistent naming attributes: UUID, LABEL, PARTUUID, and PARTLABEL.
Because the UUID and LABEL attributes are related to the file system, the tool you need to use depends
on the file system on that partition.
To change the UUID or LABEL attributes of an XFS file system, unmount the file system and then
use the xfs_admin utility to change the attribute:
# umount /dev/device
# xfs_admin [-U new_uuid] [-L new_label] /dev/device
# udevadm settle
To change the UUID or LABEL attributes of an ext4, ext3, or ext2 file system, use the tune2fs
utility:
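A sketch; as with the XFS example above, unmount the file system first and wait for udev to settle afterwards:
# umount /dev/device
# tune2fs -U new_uuid -L new_label /dev/device
# udevadm settle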
Replace new_uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-
f61943f5ea50. Replace new_label with a label; for example, backup_data.
NOTE
Changing udev attributes happens in the background and might take a long time. The
udevadm settle command waits until the change is fully registered, which ensures that
your next command will be able to utilize the new attribute correctly.
You should also use the command after creating new devices; for example, after using the
parted tool to create a partition with a custom PARTUUID or PARTLABEL attribute, or after
creating a new file system.
Complete removal of a storage device requires removing the device itself (referenced by its World Wide Identifier (WWID)) and each of the identifiers that represent a path to the device. If you are only removing a path to a multipath device, and other paths will remain, then the procedure is simpler, as described in Section 25.10, “Adding a Storage Device or Path”.
Removal of a storage device is not recommended when the system is under memory pressure, since the
I/O flush will add to the load. To determine the level of memory pressure, run the command vmstat 1
100; device removal is not recommended if:
Free memory is less than 5% of the total memory in more than 10 samples per 100 (the free command can also be used to display the total memory).
Swapping is active (non-zero si and so columns in the vmstat output).
1. Close all users of the device and backup device data as needed.
2. Use umount to unmount any file systems that mounted the device.
3. Remove the device from any md and LVM volume using it. If the device is a member of an LVM
Volume group, then it may be necessary to move data off the device using the pvmove
command, then use the vgreduce command to remove the physical volume, and (optionally)
pvremove to remove the LVM metadata from the disk.
4. If the device uses multipathing, run multipath -l and note all the paths to the device.
Afterwards, remove the multipathed device using multipath -f device.
5. Run blockdev --flushbufs device to flush any outstanding I/O to all paths to the device.
This is particularly important for raw devices, where there is no umount or vgreduce operation
to cause an I/O flush.
6. Remove any reference to the device's path-based name, like /dev/sd, /dev/disk/by-path
or the major:minor number, in applications, scripts, or utilities on the system. This is important
in ensuring that different devices added in the future will not be mistaken for the current device.
7. Finally, remove each path to the device from the SCSI subsystem. To do so, use the command
echo 1 > /sys/block/device-name/device/delete where device-name may be sde,
for example.
NOTE
You can determine the device-name, HBA number, HBA channel, SCSI target ID and LUN for a device
from various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*.
After performing Procedure 25.9, “Ensuring a Clean Device Removal”, a device can be physically
removed safely from a running system. It is not necessary to stop I/O to other devices while doing so.
Other procedures, such as the physical removal of the device, followed by a rescan of the SCSI bus (as
described in Section 25.11, “Scanning Storage Interconnects”) to cause the operating system state to be
updated to reflect the change, are not recommended. This will cause delays due to I/O timeouts, and
devices may be removed unexpectedly. If it is necessary to perform a rescan of an interconnect, it must
be done while I/O is paused, as described in Section 25.11, “Scanning Storage Interconnects”.
1. Remove any reference to the device's path-based name, like /dev/sd or /dev/disk/by-
path or the major:minor number, in applications, scripts, or utilities on the system. This is
important in ensuring that different devices added in the future will not be mistaken for the
current device.
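2. Take the path offline so that no further I/O is issued on it. A sketch, assuming the path device is sde as in the removal procedure above:
# echo offline > /sys/block/sde/device/state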
This will cause any subsequent I/O sent to the device on this path to be failed immediately.
Device-mapper-multipath will continue to use the remaining paths to the device.
3. Remove the path from the SCSI subsystem. To do so, use the command echo 1 >
/sys/block/device-name/device/delete where device-name may be sde, for
example (as described in Procedure 25.9, “Ensuring a Clean Device Removal”).
After performing Procedure 25.10, “Removing a Path to a Storage Device”, the path can be safely
removed from the running system. It is not necessary to stop I/O while this is done, as device-mapper-
multipath will re-route I/O to remaining paths according to the configured path grouping and failover
policies.
Other procedures, such as the physical removal of the cable, followed by a rescan of the SCSI bus to
cause the operating system state to be updated to reflect the change, are not recommended. This will
cause delays due to I/O timeouts, and devices may be removed unexpectedly. If it is necessary to
perform a rescan of an interconnect, it must be done while I/O is paused, as described in Section 25.11,
“Scanning Storage Interconnects”.
1. The first step in adding a storage device or path is to physically enable access to the new
storage device, or a new path to an existing device. This is done using vendor-specific
commands at the Fibre Channel or iSCSI storage server. When doing so, note the LUN value for
the new storage that will be presented to your host. If the storage server is Fibre Channel, also
take note of the World Wide Node Name (WWNN) of the storage server, and determine whether
there is a single WWNN for all ports on the storage server. If this is not the case, note the World
Wide Port Name (WWPN) for each port that will be used to access the new LUN.
2. Next, make the operating system aware of the new storage device, or path to an existing device.
The recommended command to use is:
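A sketch using the SCSI host scan interface in sysfs (the placeholders are explained below):
# echo "c t l" > /sys/class/scsi_host/hosth/scan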
In the previous command, h is the HBA number, c is the channel on the HBA, t is the SCSI
target ID, and l is the LUN.
NOTE
a. In some Fibre Channel hardware, a newly created LUN on the RAID array may not be visible
to the operating system until a Loop Initialization Protocol (LIP) operation is performed. Refer
to Section 25.11, “Scanning Storage Interconnects” for instructions on how to do this.
b. If a new LUN has been added on the RAID array but is still not being configured by the
operating system, confirm the list of LUNs being exported by the array using the sg_luns
command, part of the sg3_utils package. This will issue the SCSI REPORT LUNS command
to the RAID array and return a list of LUNs that are present.
For Fibre Channel storage servers that implement a single WWNN for all ports, you can
determine the correct h,c,and t values (i.e. HBA number, HBA channel, and SCSI target ID) by
searching for the WWNN in sysfs.
/sys/class/fc_transport/target5:0:2/node_name:0x5006016090203181
/sys/class/fc_transport/target5:0:3/node_name:0x5006016090203181
/sys/class/fc_transport/target6:0:2/node_name:0x5006016090203181
/sys/class/fc_transport/target6:0:3/node_name:0x5006016090203181
This indicates there are four Fibre Channel routes to this target (two single-channel HBAs,
each leading to two storage ports). Assuming a LUN value is 56, then the following command
will configure the first path:
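A sketch derived from the first sysfs entry above (host5, channel 0, target 2, LUN 56):
# echo "0 2 56" > /sys/class/scsi_host/host5/scan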
For Fibre Channel storage servers that do not implement a single WWNN for all ports, you can
determine the correct HBA number, HBA channel, and SCSI target ID by searching for each of
the WWPNs in sysfs.
Another way to determine the HBA number, HBA channel, and SCSI target ID is to refer to
another device that is already configured on the same path as the new device. This can be done
with various commands, such as lsscsi, scsi_id, multipath -l, and ls -l
/dev/disk/by-*. This information, plus the LUN number of the new device, can be used as
shown above to probe and configure that path to the new device.
3. After adding all the SCSI paths to the device, execute the multipath command, and check to
see that the device has been properly configured. At this point, the device can be added to md,
LVM, mkfs, or mount, for example.
If the steps above are followed, then a device can safely be added to a running system. It is not
necessary to stop I/O to other devices while this is done. Other procedures involving a rescan (or a
reset) of the SCSI bus, which cause the operating system to update its state to reflect the current device
connectivity, are not recommended while storage I/O is in progress.
All I/O on the affected interconnects must be paused and flushed before executing the
procedure, and the results of the scan checked before I/O is resumed.
As with removing a device, interconnect scanning is not recommended when the system is
under memory pressure. To determine the level of memory pressure, run the vmstat 1 100
command. Interconnect scanning is not recommended if free memory is less than 5% of the total
memory in more than 10 samples per 100. Also, interconnect scanning is not recommended if
swapping is active (non-zero si and so columns in the vmstat output). The free command
can also display the total memory.
Note that issue_lip is an asynchronous operation. The command can complete before the entire
scan has completed. You must monitor /var/log/messages to determine when issue_lip
finishes.
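A LIP can be triggered through sysfs; a sketch assuming the Fibre Channel host is host5:
# echo 1 > /sys/class/fc_host/host5/issue_lip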
The lpfc, qla2xxx, and bnx2fc drivers support issue_lip. For more information about the API
capabilities supported by each driver in Red Hat Enterprise Linux, see Table 25.1, “Fibre Channel
API Capabilities”.
/usr/bin/rescan-scsi-bus.sh
The /usr/bin/rescan-scsi-bus.sh script was introduced in Red Hat Enterprise Linux 5.4. By
default, this script scans all the SCSI buses on the system, and updates the SCSI layer to reflect new
devices on the bus. The script provides additional options to allow device removal, and the issuing of
LIPs. For more information about this script, including known issues, see Section 25.17,
“Adding/Removing a Logical Unit Through rescan-scsi-bus.sh”.
During target discovery, the iscsiadm tool uses the settings in /etc/iscsi/iscsid.conf to create
two types of records:
Before using different settings for discovery, delete the current discovery records (i.e.
/var/lib/iscsi/discovery_type) first. To do this, use the following command: [5]
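A sketch; depending on the iscsiadm version, the discoverydb mode may be required instead of discovery:
# iscsiadm -m discovery -t discovery_type -p target_IP:port -o delete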
For details on different types of discovery, refer to the DISCOVERY TYPES section of the iscsiadm(8)
man page.
Alternatively, iscsiadm can also be used to directly change discovery record settings, as in:
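A sketch (setting and value are placeholders for a discovery record setting and its new value):
# iscsiadm -m discovery -t discovery_type -p target_IP:port -o update -n setting -v value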
Refer to the iscsiadm(8) man page for more information on available setting options and valid
value options for each.
After configuring discovery settings, any subsequent attempts to discover new targets will use the new
settings. Refer to Section 25.14, “Scanning iSCSI Interconnects” for details on how to scan for new
iSCSI targets.
For more information on configuring iSCSI target discovery, refer to the man pages of iscsiadm and
iscsid. The /etc/iscsi/iscsid.conf file also contains examples on proper configuration syntax.
The network subsystem can be configured to determine the path/NIC that iSCSI interfaces should use
for binding. For example, if portals and NICs are set up on different subnets, then it is not necessary to
manually configure iSCSI interfaces for binding.
Before attempting to configure an iSCSI interface for binding, run the following command first:
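A sketch, binding the ping to the NIC that will be used (ethX and target_IP are placeholders):
# ping -I ethX target_IP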
If ping fails, then you will not be able to bind a session to a NIC. If this is the case, check the network
settings first.
Software iSCSI
This stack allocates an iSCSI host instance (that is, scsi_host) per session, with a single
connection per session. As a result, /sys/class/scsi_host and /proc/scsi will report a
scsi_host for each connection/session you are logged into.
Offload iSCSI
This stack allocates a scsi_host for each PCI device. As such, each port on a host bus adapter will
show up as a different PCI device, with a different scsi_host per HBA port.
To manage both types of initiator implementations, iscsiadm uses the iface structure. With this
structure, an iface configuration must be entered in /var/lib/iscsi/ifaces for each HBA port,
software iSCSI, or network device (ethX) used to bind sessions.
To view available iface configurations, run iscsiadm -m iface. This will display iface information
in the following format:
iface_name
transport_name,hardware_address,ip_address,net_ifacename,initiator_name
For example:
iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-
06.com.redhat:madmax
iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-
06.com.redhat:madmax
For software iSCSI, each iface configuration must have a unique name (with less than 65 characters).
The iface_name for network devices that support offloading appears in the format
transport_name.hardware_name.
For example, the sample output of iscsiadm -m iface on a system using a Chelsio network card
might appear as:
default tcp,<empty>,<empty>,<empty>,<empty>
iser iser,<empty>,<empty>,<empty>,<empty>
cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,
<empty>
It is also possible to display the settings of a specific iface configuration in a more friendly way. To do
so, use the option -I iface_name. This will display the settings in the following format:
iface.setting = value
Example 25.8. Using iface Settings with a Chelsio Converged Network Adapter
Using the previous example, the iface settings of the same Chelsio converged network adapter (i.e.
iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07) would appear as:
To create an iface configuration for software iSCSI, run the following command:
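A sketch:
# iscsiadm -m iface -I iface_name --op=new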
This will create a new empty iface configuration with a specified iface_name. If an existing iface
configuration already has the same iface_name, then it will be overwritten with a new, empty one.
WARNING
Do not use default or iser as iface names. Both strings are special values
used by iscsiadm for backward compatibility. Any manually-created iface
configurations named default or iser will disable backwards compatibility.
Before using the iface of a network card for iSCSI offload, first set the iface.ipaddress value of the
offload interface to the initiator IP address that the interface should use:
For devices that use the be2iscsi driver, the IP address is configured in the BIOS setup
screen.
For all other devices, to configure the IP address of the iface, use:
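One possible form:
# iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v initiator_ip_address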
For example, to set the iface IP address to 20.15.0.66 when using a card with the iface name
of cxgb3i.00:07:43:05:97:07, use:
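A sketch using the values above:
# iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66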
This behavior was implemented for compatibility reasons. To override this, use the -I iface_name option to specify which portal to bind to an iface, as in:
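A sketch:
# iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1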
By default, the iscsiadm utility will not automatically bind any portals to iface configurations that use
offloading. This is because such iface configurations will not have iface.transport set to tcp. As
such, the iface configurations need to be manually bound to discovered portals.
It is also possible to prevent a portal from binding to any existing iface. To do so, use default as the
iface_name, as in:
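One possible form:
# iscsiadm -m discovery -t st -p target_IP:port -I default -P 1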
To delete bindings for a specific portal (e.g. for Equalogic targets), use:
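A sketch; the iface0 name and option combination should be checked against iscsiadm(8) for your version:
# iscsiadm -m node -T proper_target_name -I iface0 --op=delete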
However, if the targets do not send an iSCSI async event, you need to manually scan them using the
iscsiadm utility. Before doing so, however, you need to first retrieve the proper --targetname and
the --portal values. If your device model supports only a single logical unit and portal per target, use
iscsiadm to issue a sendtargets command to the host, as in:
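The sendtargets discovery command generally takes the following form; its output then appears in the format shown below:
# iscsiadm -m discovery -t sendtargets -p target_IP:port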
target_IP:port,target_portal_group_tag proper_target_name
10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.33615311
In this example, the target has two portals, with target_IP:port values of 10.15.84.19:3260 and 10.15.85.19:3260.
To see which iface configuration will be used for each session, add the -P 1 option. This option will
print also session information in tree format, as in:
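For example, adding -P 1 to the sendtargets discovery above:
# iscsiadm -m discovery -t sendtargets -p target_IP:port -P 1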
Target: proper_target_name
Portal: target_IP:port,target_portal_group_tag
Iface Name: iface_name
Target: iqn.1992-08.com.netapp:sn.33615311
Portal: 10.15.84.19:3260,2
Iface Name: iface2
Portal: 10.15.85.19:3260,3
Iface Name: iface2
This means that the target iqn.1992-08.com.netapp:sn.33615311 will use iface2 as its
iface configuration.
With some device models a single target may have multiple logical units and portals. In this case, issue a
sendtargets command to the host first to find new portals on the target. Then, rescan the existing
sessions using:
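The rescan of all existing sessions is typically done with:
# iscsiadm -m session --rescan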
You can also rescan a specific session by specifying the session's SID value, as in:
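For example, with SID as a placeholder for the session ID:
# iscsiadm -m session -r SID --rescan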
If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to
find new portals for each target. Rescan existing sessions to discover new logical units on existing
sessions using the --rescan option.
IMPORTANT
To safely add new targets/portals or delete old ones, use the -o new or -o delete
options, respectively. For example, to add new targets/portals without overwriting
/var/lib/iscsi/nodes, use the following command:
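A sketch of this discovery command with the new operation:
# iscsiadm -m discovery -t st -p target_IP -o new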
To delete /var/lib/iscsi/nodes entries that the target did not display during
discovery, use:
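Similarly, the delete operation would look like:
# iscsiadm -m discovery -t st -p target_IP -o delete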
ip:port,target_portal_group_tag proper_target_name
For example, given a device with a single target, logical unit, and portal, with equallogic-iscsi1
as your target_name, the output should appear similar to the following:
10.16.41.155:3260,0 iqn.2001-05.com.equallogic:6-8a0900-ac3fe0101-
63aff113e344a4a2-dl585-03-1
At this point, you now have the proper --targetname and --portal values needed to manually scan
for iSCSI devices. To do so, run the following command:
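The manual login command generally looks like the following:
# iscsiadm --mode node --targetname proper_target_name --portal ip:port,target_portal_group_tag --login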
When this command is executed, the iSCSI init scripts will automatically log into targets where the
node.startup setting is configured as automatic. This is the default value of node.startup for all
targets.
To prevent automatic login to a target, set node.startup to manual. To do this, run the following
command:
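A sketch of the update command:
# iscsiadm -m node --targetname proper_target_name -p target_IP:port -o update -n node.startup -v manual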
Deleting the entire record will also prevent automatic login. To do this, run:
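For example:
# iscsiadm -m node --targetname proper_target_name -p target_IP:port -o delete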
To automatically mount a file system from an iSCSI device on the network, add a partition entry for the
mount in /etc/fstab with the _netdev option. For example, to automatically mount the iSCSI device
sdb to /mount/iscsi during startup, add the following line to /etc/fstab:
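Assuming an ext4 file system on the device (the file system type shown here is only an example; use whatever is actually on the device), the entry might look like:
/dev/sdb /mount/iscsi ext4 _netdev 0 0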
NOTE
To resize the online logical unit, start by modifying the logical unit size through the array management
interface of your storage device. This procedure differs with each array; as such, consult your storage
array vendor documentation for more information on this.
NOTE
In order to resize an online file system, the file system must not reside on a partitioned
device.
IMPORTANT
To re-scan Fibre Channel logical units on a system that uses multipathing, execute the
aforementioned command for each sd device (i.e. sd1, sd2, and so on) that represents a
path for the multipathed logical unit. To determine which devices are paths for a multipath
logical unit, use multipath -ll; then, find the entry that matches the logical unit being
resized. It is advisable that you refer to the WWID of each entry to make it easier to find
which one matches the logical unit being resized.
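For an iSCSI device, the resized logical unit is typically re-scanned with a command along these lines:
# iscsiadm -m node --targetname target_name -R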
Replace target_name with the name of the target where the device is located.
NOTE
You can also re-scan iSCSI logical units using the following command:
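A likely form of this alternative is:
# iscsiadm -m node -R -I interface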
Replace interface with the corresponding interface name of the resized logical unit (for
example, iface0). This command performs two operations:
It scans for new devices in the same way that the command echo "- - -" >
/sys/class/scsi_host/host/scan does (refer to Section 25.14, “Scanning
iSCSI Interconnects”).
It re-scans for new/modified logical units the same way that the command echo
1 > /sys/block/sdX/device/rescan does. Note that this command is the
same one used for re-scanning Fibre Channel logical units.
The multipath_device variable is the corresponding multipath entry of your device in /dev/mapper.
Depending on how multipathing is set up on your system, multipath_device can be either of two
formats:
mpathX, where X is the corresponding entry of your device (for example, mpath0)
the WWID of the device, used when user-friendly names are not enabled
To determine which multipath entry corresponds to your resized logical unit, run multipath -ll. This
displays a list of all existing multipath entries in the system, along with the major and minor numbers of
their corresponding devices.
IMPORTANT
For more information about multipathing, refer to the Red Hat Enterprise Linux 7 DM Multipath guide.
Run the following command, replacing XYZ with the desired device designator, to determine the
operating system's current view of the R/W state of a device:
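For example, using blockdev:
# blockdev --getro /dev/sdXYZ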
The following command is also available for Red Hat Enterprise Linux 7:
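This reads the ro attribute in sysfs; a value of 1 means the device is read-only, and 0 means read-write:
# cat /sys/block/sdXYZ/ro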
When using multipath, refer to the ro or rw field in the second line of output from the multipath -ll
command. For example:
To move the device from R/W to RO, ensure no further writes will be issued. Do this by stopping
the application, or through the use of an appropriate, application-specific action.
Ensure that all outstanding write I/Os are complete with the following command:
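For example:
# blockdev --flushbufs /dev/device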
Replace device with the desired designator; for a device mapper multipath, this is the entry for your device in /dev/mapper. For example, /dev/mapper/mpath3.
2. Use the management interface of the storage device to change the state of the logical unit from
R/W to RO, or from RO to R/W. The procedure for this differs with each array. Consult
applicable storage array vendor documentation for more information.
3. Perform a re-scan of the device to update the operating system's view of the R/W state of the
device. If using a device mapper multipath, perform this re-scan for each path to the device
before issuing the command telling multipath to reload its device maps.
This process is explained in further detail in Section 25.16.4.1, “Rescanning Logical Units” .
After modifying the online logical unit Read/Write state, as described in Section 25.16.4, “Changing the
Read/Write State of an Online Logical Unit”, re-scan the logical unit to ensure the system detects the
updated state with the following command:
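The per-device rescan is done through sysfs, for example:
# echo 1 > /sys/block/sdX/device/rescan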
To re-scan logical units on a system that uses multipathing, execute the above command for each sd
device that represents a path for the multipathed logical unit. For example, run the command on sd1, sd2
and all other sd devices. To determine which devices are paths for a multipath unit, use multipath -ll, then find the entry that matches the logical unit to be changed.
For example, the multipath -ll output above shows the path for the LUN with WWID
36001438005deb4710000500000640000. In this case, enter:
If multipathing is enabled, after rescanning the logical unit, the change in its state will need to be reflected
in the logical unit's corresponding multipath drive. Do this by reloading the multipath device maps with
the following command:
# multipath -r
The multipath -ll command can then be used to confirm the change.
25.16.4.3. Documentation
Further information can be found in the Red Hat Knowledgebase. To access this, navigate to
https://www.redhat.com/wapps/sso/login.html?redirect=https://access.redhat.com/knowledge/ and log in.
Then access the article at https://access.redhat.com/kb/docs/DOC-32850.
For rescan-scsi-bus.sh to work properly, LUN0 must be the first mapped logical unit. The script can only detect the first mapped logical unit if it is LUN0, and it will not be able to scan any other logical unit unless it detects the first mapped logical unit, even if you use the --nooptscan option.
A race condition requires that rescan-scsi-bus.sh be run twice if logical units are mapped
for the first time. During the first scan, rescan-scsi-bus.sh only adds LUN0; all other logical
units are added in the second scan.
A bug in the rescan-scsi-bus.sh script incorrectly executes the functionality for recognizing
a change in logical unit size when the --remove option is used.
The rescan-scsi-bus.sh script does not recognize iSCSI logical unit removals.
If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link
will be blocked when a transport problem is detected. To verify if a device is blocked, run the following
command:
$ cat /sys/block/device/device/state
This command will return blocked if the device is blocked. If the device is operating normally, this
command will return running.
$ cat
/sys/class/fc_remote_port/rport-H:B:R/port_state
2. This command will return Blocked when the remote port (along with devices accessed through
it) are blocked. If the remote port is operating normally, the command will return Online.
3. If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be
unblocked and all I/O running on that device (along with any new I/O sent to that device) will be
failed.
To change the dev_loss_tmo value, echo in the desired value to the file. For example, to set
dev_loss_tmo to 30 seconds, run:
$ echo 30 >
/sys/class/fc_remote_port/rport-H:B:R/dev_loss_tmo
For more information about dev_loss_tmo, refer to Section 25.3.1, “Fibre Channel API”.
When a link loss exceeds dev_loss_tmo, the scsi_device and sdN devices are removed. Typically,
the Fibre Channel class will leave the device as is; i.e. /dev/sdx will remain /dev/sdx. This is
because the target binding is saved by the Fibre Channel driver so when the target port returns, the
SCSI addresses are recreated faithfully. However, this cannot be guaranteed; the sdx device will be restored only if no additional changes are made to the LUN configuration within the storage array.
This ensures that I/O errors are retried and queued if all paths are failed in the dm-multipath layer.
You may need to adjust iSCSI timers further to better monitor your SAN for problems. Available iSCSI
timers you can configure are NOP-Out Interval/Timeouts and replacement_timeout, which are
discussed in the following sections.
To help monitor problems in the SAN, the iSCSI layer sends a NOP-Out request to each target. If a NOP-
Out request times out, the iSCSI layer responds by failing any running commands and instructing the
SCSI layer to requeue those commands when possible.
When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to
the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath
is not being used, those commands are retried five times before failing altogether.
Intervals between NOP-Out requests are 10 seconds by default. To adjust this, open
/etc/iscsi/iscsid.conf and edit the following line:
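The relevant setting is:
node.conn[0].timeo.noop_out_interval = [interval value]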
Once set, the iSCSI layer will send a NOP-Out request to each target every [interval value] seconds.
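The companion timeout setting in the same file is:
node.conn[0].timeo.noop_out_timeout = [timeout value]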
This sets the iSCSI layer to timeout a NOP-Out request after [timeout value] seconds.
If the SCSI Error Handler is running, running commands on a path will not be failed immediately when a
NOP-Out request times out on that path. Instead, those commands will be failed after
replacement_timeout seconds. For more information about replacement_timeout, refer to
Section 25.18.2.2, “replacement_timeout”.
# iscsiadm -m session -P 3
25.18.2.2. replacement_timeout
replacement_timeout controls how long the iSCSI layer should wait for a timed-out path/session to
reestablish itself before failing any commands on it. The default replacement_timeout value is 120
seconds.
node.session.timeo.replacement_timeout = [replacement_timeout]
By configuring a lower replacement_timeout, I/O is quickly sent to a new path and executed (in the
event of a NOP-Out timeout) while the iSCSI layer attempts to re-establish the failed path/session. If all
paths time out, then the multipath and device mapper layer will internally queue I/O based on the settings
IMPORTANT
Whether your considerations are failover speed or security, the recommended value for
replacement_timeout will depend on other factors. These factors include the network,
target, and system workload. As such, it is recommended that you thoroughly test any new replacement_timeout configuration before applying it to a mission-critical system.
To start with, NOP-Outs should be disabled. You can do this by setting both NOP-Out interval and
timeout to zero. To set this, open /etc/iscsi/iscsid.conf and edit as follows:
node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
In line with this, replacement_timeout should be set to a high number. This will instruct the system to
wait a long time for a path/session to reestablish itself. To adjust replacement_timeout, open
/etc/iscsi/iscsid.conf and edit the following line:
node.session.timeo.replacement_timeout = replacement_timeout
After configuring /etc/iscsi/iscsid.conf, you must perform a re-discovery of the affected storage.
This will allow the system to load and use any new values in /etc/iscsi/iscsid.conf. For more
information on how to discover iSCSI devices, refer to Section 25.14, “Scanning iSCSI Interconnects”.
You can also configure timeouts for a specific session and make them non-persistent (instead of using
/etc/iscsi/iscsid.conf). To do so, run the following command (replace the variables accordingly):
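A sketch of the per-session update, with target_name, target_IP, port, and timeout_value as placeholders:
# iscsiadm -m node -T target_name -p target_IP:port -o update -n node.session.timeo.replacement_timeout -v timeout_value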
IMPORTANT
The configuration described here is recommended for iSCSI sessions involving root
partition access. For iSCSI sessions involving access to other types of storage (namely, in
systems that use dm-multipath), refer to Section 25.18.2, “iSCSI Settings with dm-
multipath”.
The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will
quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or
complete. Afterwards, the SCSI layer will activate the driver's error handler.
When the error handler is triggered, it attempts the following operations in order (until one successfully
executes):
If all of these operations fail, the device will be set to the offline state. When this occurs, all I/O to that
device will be failed, until the problem is corrected and the user sets the device to running.
The process is different, however, if a device uses the Fibre Channel protocol and the rport is blocked.
In such cases, the drivers wait for several seconds for the rport to become online again before
activating the error handler. This prevents devices from becoming offline due to temporary transport
problems.
Device States
To display the state of a device, use:
$ cat /sys/block/device-name/device/state
Command Timer
To control the command timer, modify the /sys/block/device-name/device/timeout file:
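For example:
# echo value > /sys/block/device-name/device/timeout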
Replace value in the command with the timeout value, in seconds, that you want to implement.
1. Determine which mpath link entries in /etc/lvm/cache/.cache are specific to the stale
logical unit. To do this, run the following command:
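A sketch, using the WWID that appears in the entries below:
$ ls -l /dev/mpath | grep 3600d0230003414f30000203a7bc41a00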
Using the same example in the previous step, the lines you need to delete are:
/dev/dm-4
/dev/dm-5
/dev/mapper/3600d0230003414f30000203a7bc41a00
/dev/mapper/3600d0230003414f30000203a7bc41a00p1
/dev/mpath/3600d0230003414f30000203a7bc41a00
/dev/mpath/3600d0230003414f30000203a7bc41a00p1
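One way to remove those lines, assuming the same WWID, is with sed (inspect the file afterwards to confirm the result):
# sed -i /3600d0230003414f30000203a7bc41a00/d /etc/lvm/cache/.cache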
IMPORTANT
In most scenarios, you do not need to enable the eh_deadline parameter. Using the
eh_deadline parameter can be useful in certain specific scenarios, for example if a link
loss occurs between a Fibre Channel switch and a target port, and the Host Bus Adapter
(HBA) does not receive Registered State Change Notifications (RSCNs). In such a case,
I/O requests and error recovery commands all time out rather than encounter an error.
Setting eh_deadline in this environment puts an upper limit on the recovery time, which
enables the failed I/O to be retried on another available path by multipath.
However, if RSCNs are enabled, if the HBA does not register the link becoming unavailable, or both, the eh_deadline functionality provides no additional benefit, as the
I/O and error recovery commands fail immediately, which allows multipath to retry.
The SCSI host object eh_deadline parameter enables you to configure the maximum amount of time
that the SCSI error handling mechanism attempts to perform error recovery before stopping and resetting
the entire HBA.
The value of the eh_deadline is specified in seconds. The default setting is off, which disables the
time limit and allows all of the error recovery to take place. In addition to using sysfs, a default value
can be set for all SCSI HBAs by using the scsi_mod.eh_deadline kernel parameter.
Note that when eh_deadline expires, the HBA is reset, which affects all target paths on that HBA, not
only the failing one. As a consequence, I/O errors can occur if some of the redundant paths are not
available for other reasons. Enable eh_deadline only if you have a fully redundant multipath
configuration on all targets.
[5] The target_IP and port variables refer to the IP address and port combination of a target/portal, respectively. For
more information, refer to Section 25.6.1, “iSCSI API” and Section 25.14, “Scanning iSCSI Interconnects” .
[6] Refer to Section 25.14, “Scanning iSCSI Interconnects” for information on proper_target_name .
[7] For information on how to retrieve a session's SID value, refer to Section 25.6.1, “iSCSI API”.
[8] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document.
All concatenated lines — preceded by the backslash (\) — should be treated as one command, sans backslashes.
[9] Prior to Red Hat Enterprise Linux 5.4, the default NOP-Out requests time out was 15 seconds.
Fibre Channel
iSCSI
NFS
GFS2
Virtualization in Red Hat Enterprise Linux 7 uses libvirt to manage virtual instances. The libvirt
utility uses the concept of storage pools to manage storage for virtualized guests. A storage pool is
storage that can be divided up into smaller volumes or allocated directly to a guest. Volumes of a storage
pool can be allocated to virtualized guests. There are two categories of storage pools available:
IMPORTANT
26.2. DM-MULTIPATH
Device Mapper Multipathing (DM-Multipath) is a feature that allows you to configure multiple I/O paths
between server nodes and storage arrays into a single device. These I/O paths are physical SAN
connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O
paths, creating a new device that consists of the aggregated paths.
Redundancy
DM-Multipath can provide failover in an active/passive configuration. In an active/passive
configuration, only half the paths are used at any time for I/O. If any element of an I/O path (the cable,
switch, or controller) fails, DM-Multipath switches to an alternate path.
Improved Performance
DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-
robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and
dynamically re-balance the load.
IMPORTANT
This library is used as a building block for other higher level management tools and applications. End
system administrators can also use it as a tool to manually manage storage and automate storage
management tasks with the use of scripts.
Create and delete volumes, access groups, file systems, or NFS exports.
Server resources such as CPU and interconnect bandwidth are not utilized because the operations are
all done on the array.
A stable C and Python API for client application and plug-in developers.
WARNING
This library and its associated tool have the ability to destroy any and all data
located on the arrays it manages. It is highly recommended to develop and test
applications and scripts against the storage simulator plug-in to remove any logic
errors before working with production systems. Testing applications and scripts on
actual non-production hardware before deploying to production is also strongly
encouraged if possible.
The libStorageMgmt library in Red Hat Enterprise Linux 7 adds a default udev rule to handle the
REPORTED LUNS DATA HAS CHANGED unit attention.
When a storage configuration change has taken place, one of several Unit Attention ASC/ASCQ codes
reports the change. A uevent is then generated, and the device is rescanned automatically through sysfs.
The libStorageMgmt library uses a plug-in architecture to accommodate differences in storage arrays.
For more information on libStorageMgmt plug-ins and how to write them, refer to the Red Hat
Developer Guide.
Storage array
Any storage system that provides block access (FC, FCoE, iSCSI) or file access through Network
Attached Storage (NAS).
Volume
Storage Area Network (SAN) Storage Arrays can expose a volume to the Host Bus Adapter (HBA)
over different transports, such as FC, iSCSI, or FCoE. The host OS treats it as a block device (one volume can be exposed as multiple disks if multipath[2] is enabled).
This is also known as the Logical Unit Number (LUN), StorageVolume with SNIA terminology, or
virtual disk.
Pool
A group of storage spaces. File systems or volumes can be created from a pool. Pools can be
created from disks, volumes, and other pools. A pool may also hold RAID settings or thin provisioning
settings.
Snapshot
A point in time, read only, space efficient copy of data.
Clone
A point in time, read writeable, space efficient copy of data.
Copy
A full bitwise copy of the data. It occupies the full space.
Mirror
A continuously updated copy (synchronous and asynchronous).
Access group
Collections of iSCSI, FC, and FCoE initiators which are granted access to one or more storage
volumes. This ensures that storage volumes are accessible only by the specified initiators.
Access Grant
Exposing a volume to a specified access group or initiator. The libStorageMgmt library currently
does not support LUN mapping with the ability to choose a specific logical unit number. The
libStorageMgmt library allows the storage array to select the next available LUN for assignment. If
configuring boot from SAN or masking more than 256 volumes, be sure to read the OS, storage array, or HBA documentation.
System
Represents a storage array or a direct attached storage RAID.
File system
A Network Attached Storage (NAS) storage array can expose a file system to host an OS through an
IP network, using either NFS or CIFS protocol. The host OS treats it as a mount point or a folder
containing files depending on the client operating system.
Disk
The physical disk holding the data. This is normally used when creating a pool with RAID settings.
Initiator
In Fibre Channel (FC) or Fibre Channel over Ethernet (FCoE), the initiator is the World Wide Port
Name (WWPN) or World Wide Node Name (WWNN). In iSCSI, the initiator is the iSCSI Qualified
Name (IQN). In NFS or CIFS, the initiator is the host name or the IP address of the host.
Child dependency
Some arrays have an implicit relationship between the origin (parent volume or file system) and the
child (such as a snapshot or a clone). For example, it is impossible to delete the parent if it has one or more dependent children. The API provides methods to determine if any such relationship exists and a method to remove the dependency by replicating the required blocks.
To develop C applications that utilize the library, install the libstoragemgmt-devel package with the
following command:
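For example:
# yum install libstoragemgmt-devel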
To install libStorageMgmt for use with hardware arrays, select one or more of the appropriate plug-in
packages with the following command:
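For example, replacing name with the plug-in you need:
# yum install libstoragemgmt-name-plugin
The available plug-in packages include: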
libstoragemgmt-smis-plugin
Generic SMI-S array support.
libstoragemgmt-netapp-plugin
Specific support for NetApp files.
libstoragemgmt-nstor-plugin
Specific support for NexentaStor.
libstoragemgmt-targetd-plugin
Specific support for targetd.
The daemon is then installed and configured to run at start up but will not do so until the next reboot. To
use it immediately without rebooting, start the daemon manually.
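Assuming the service name provided by the package is libstoragemgmt, this would be:
# systemctl start libstoragemgmt.service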
To manage an array requires support through a plug-in. The base install package includes open source
plug-ins for a number of different vendors. Additional plug-in packages will be available separately as
array support improves. Currently supported hardware is constantly changing and improving.
The libStorageMgmt daemon (lsmd) behaves like any standard service for the system.
A Uniform Resource Identifier (URI) which is used to identify the plug-in to connect to the array
and any configurable options the array requires.
plugin+optional-transport://user-name@host:port/?query-string-parameters
$ lsmcli -u sim://...
3. Place the URI in the file ~/.lsmcli, which contains name-value pairs separated by "=". The
only currently supported configuration is 'uri'.
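For example, a ~/.lsmcli file that points at the simulator plug-in could contain the single line:
uri=sim://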
The URI to use is determined in this order; if all three are supplied, only the one given on the command line will be used.
Supply the password by specifying the -P option on the command line or by placing it in an
environmental variable LSMCLI_PASSWORD.
An example for using the command line to create a new volume and making it visible to an initiator.
Create a volume.
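A sketch of such a command, assuming the lsmcli volume-create syntax and placeholder names for the volume and pool:
$ lsmcli volume-create --name volume_name --size 20G --pool pool_id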
---------------------------------+------------+---------------------
-------------+-----------
782d00c8ac63819d6cca7069282e03a0 | example_ag | iqn.1994-
05.com.domain:01.89bd01 | sim-01
The design of the library provides for a process separation between the client and the plug-in by means
of inter-process communication (IPC). This prevents bugs in the plug-in from crashing the client
application. It also provides a means for plug-in writers to write plug-ins with a license of their own
choosing. When a client opens the library passing a URI, the client library looks at the URI to determine
which plug-in should be used.
The plug-ins are technically stand alone applications but they are designed to have a file descriptor
passed to them on the command line. The client library then opens the appropriate Unix domain socket
which causes the daemon to fork and execute the plug-in. This gives the client library a point-to-point communication channel with the plug-in. The daemon can be restarted without affecting existing clients.
While the client has the library open for that plug-in, the plug-in process is running. After one or more
commands are sent and the plug-in is closed, the plug-in process cleans up and then exits.
The default behavior of lsmcli is to wait until the operation is complete. Depending on the requested operation, this could potentially take many hours. To allow a return to normal usage, it is possible
to use the -b option on the command line. If the exit code is 0 the command is completed. If the exit
code is 7 the command is in progress and a job identifier is written to standard output. The user or script
can then take the job ID and query the status of the command as needed by using lsmcli --
jobstatus JobID. If the job is now completed, the exit value will be 0 and the results printed to
standard output. If the command is still in progress, the return value will be 7 and the percentage
complete will be printed to the standard output.
Create a volume passing the -b option so that the command returns immediately.
Check to see what the exit value was, remembering that 7 indicates the job is still in progress.
$ echo $?
7
Check the exit value again, remembering that 7 indicates the job is still in progress, so the standard output shows the percentage complete (33% in this example).
$ echo $?
Wait some more and check it again, remembering that exit 0 means success and standard out
displays the new volume.
For scripting, pass the -t SeparatorCharacters option. This will make it easier to parse the output.
For more information on lsmcli, refer to the man pages or the command lsmcli --help.
CHAPTER 28. PERSISTENT MEMORY: NVDIMMS
Persistent memory is byte-addressable, so it can be accessed by using CPU load and store
instructions. In addition to the read() or write() system calls that are required for accessing traditional block-based storage, pmem also supports a direct load/store programming model.
The performance characteristics of persistent memory are similar to DRAM with very low access
latency, typically in the tens to hundreds of nanoseconds.
Contents of persistent memory are preserved when the power is off, like with storage.
Persistent memory allows an application to keep the warm cache across reboots if the application is
designed properly. In this instance, there would be no page cache involved: the application would
cache data directly in the persistent memory.
Fast write-cache
File servers often do not acknowledge a client's write request until the data is on durable media. Using
persistent memory as a fast write cache enables a file server to acknowledge the write request
quickly thanks to the low latency of pmem.
NVDIMMs Interleaving
Non-Volatile Dual In-line Memory Modules (NVDIMMs) can be grouped into interleave sets in the same
way as regular DRAM. An interleave set is like a RAID 0 (stripe) across multiple DIMMs.
Like DRAM, NVDIMMs benefit from increased performance when they are configured into
interleave sets.
It can be used to combine multiple smaller NVDIMMs into one larger logical device.
If your NVDIMMs support labels, the region device can be further subdivided into namespaces.
If your NVDIMMs do not support labels, the region devices can only contain a single namespace.
In this case, the kernel creates a default namespace which covers the entire region.
You can use persistent memory in sector, memory, dax (Direct Access) or raw mode:
sector mode
It presents the storage as a fast block device. Using sector mode is useful for legacy applications that
have not been modified to use persistent memory, or for applications that make use of the full I/O
stack, including the Device Mapper.
memory mode
It enables persistent memory devices to support direct access programming as described in the
Storage Networking Industry Association (SNIA) Non-Volatile Memory (NVM) Programming Model
specification. In memory mode, I/O bypasses the storage stack of the kernel, and many Device
Mapper drivers therefore cannot be used.
dax mode
The dax mode, also called device DAX, provides raw access to persistent memory by using a DAX
character device node. Data on a DAX device can be made durable using CPU cache flushing and
fencing instructions. Certain databases and virtual machine hypervisors might benefit from DAX
mode. File systems cannot be created on device dax instances.
raw mode
The raw mode namespaces have several limitations and should not be used.
Procedure 28.1. Configuring Persistent Memory for a device that does not support labels
1. List the available pmem regions on your system. In the following example, the command lists an
NVDIMM-N device that does not support labels:
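The region listing is typically obtained with:
# ndctl list --regions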
The OS creates a default namespace for each region because the NVDIMM-N device here does not support labels; hence, the available size is 0 bytes.
3. Reconfigure the inactive namespaces in order to make use of this space. For example, to use
namespace0.0 for a file system that supports DAX, use the following command:
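A sketch of the reconfiguration, assuming the ndctl create-namespace syntax (memory mode is called fsdax in newer ndctl releases):
# ndctl create-namespace --force --reconfig=namespace0.0 --mode=memory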
Procedure 28.2. Configuring Persistent Memory for devices that support labels
1. List the available pmem regions on your system. In the following example, the command lists an NVDIMM-N device that supports labels:
"type":"pmem",
"iset_id":-137289417188962304
}
]
2. If an NVDIMM supports labels, default namespaces are not created, and you can allocate one or more namespaces from a region without using the --force or --reconfig flags:
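For example, assuming the region is region1:
# ndctl create-namespace --region=region1 --mode=memory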
Now, you can create another namespace from the same region:
You can also create namespaces of different types from the same region, using the following
command:
In the example, namespace1.0 is reconfigured to sector mode. Note that the block device name
changed from pmem1 to pmem1s. This device can be used in the same way as any other block device on
the system. For example, the device can be partitioned, you can create a file system on the device, the
device can be configured as part of a software RAID set, and the device can be the cache device for dm-
cache.
In the example, namespace0.0 is converted to memory mode. With the --map=mem
argument, ndctl puts operating system data structures used for Direct Memory Access (DMA) in system
DRAM.
To perform DMA, the kernel requires a data structure for each page in the memory region. The overhead
of this data structure is 64 bytes per 4-KiB page. For small devices, the amount of overhead is small
enough to fit in DRAM with no problems. For example, the 16-GiB namespace only requires 256MiB for
page structures. Because the NVDIMM is small and expensive, storing the kernel’s page tracking data
structures in DRAM is preferable, as indicated by the --map=mem parameter.
In the future, NVDIMM devices might be terabytes in size. For such devices, the amount of memory
required to store the page tracking data structures might exceed the amount of DRAM in the system. One
TiB of persistent memory requires 16 GiB just for page structures. As a result, specifying the --map=dev
parameter to store the data structures in the persistent memory itself is preferable in such cases.
After configuring the namespace in memory mode, the namespace is ready for a file system. Starting
with Red Hat Enterprise Linux 7.3, both the Ext4 and XFS file system enable using persistent memory as
a Technology Preview. File system creation requires no special arguments. To get the DAX functionality,
mount the file system with the dax mount option. For example:
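Assuming the namespace is exposed as /dev/pmem0 and the mount point is /mnt/pmem/:
# mkfs.xfs /dev/pmem0
# mount -o dax /dev/pmem0 /mnt/pmem/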
Then, applications can use persistent memory and create files in the /mnt/pmem/ directory, open the
files, and use the mmap operation to map the files for direct access.
When creating partitions on a pmem device to be used for direct access, partitions must be aligned on
page boundaries. On the Intel 64 and AMD64 architectures, at least 4 KiB alignment is required for the start and end of the partition, but 2 MiB is the preferred alignment. By default, the parted tool aligns partitions on 1 MiB
boundaries. For the first partition, specify 2MiB as the start of the partition. If the size of the partition is a
multiple of 2MiB, all other partitions are also aligned.
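For device DAX namespaces, a 2 MiB alignment can also be requested at namespace creation time; a sketch, assuming namespace0.0 and the ndctl --align option:
# ndctl create-namespace --force --reconfig=namespace0.0 --mode=dax --align=2M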
The given command ensures that the operating system would fault in 2MiB pages at a time. For the Intel
64 and AMD64 architecture, the following fault granularities are supported:
4KiB
2MiB
1GiB
Device DAX nodes (/dev/daxN.M) support only the following system calls:
open()
close()
mmap()
fallocate()
read() and write() variants are not supported because the use case is tied to persistent memory
programming.
28.5. TROUBLESHOOTING
Some NVDIMMs support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) interfaces for
retrieving health information.
NOTE
On some systems, the acpi_ipmi driver must be loaded to retrieve health information
using the following command:
# modprobe acpi_ipmi
CHAPTER 29. VDO INTEGRATION
Instead of writing the same data more than once, VDO detects each duplicate block and records
it as a reference to the original block. VDO maintains a mapping from logical block addresses,
which are used by the storage layer above VDO, to physical block addresses, which are used
by the storage layer under VDO.
After deduplication, multiple logical block addresses may be mapped to the same physical block
address; these are called shared blocks. Block sharing is invisible to users of the storage, who
read and write blocks as they would if VDO were not present. When a shared block is
overwritten, a new physical block is allocated for storing the new block data to ensure that other
logical block addresses that are mapped to the shared physical block are not modified.
Compression is a data-reduction technique that works well with file formats that do not
necessarily exhibit block-level redundancy, such as log files and databases. See Section 29.4.8,
“Using Compression” for more detail.
kvdo
A kernel module that loads into the Linux Device Mapper layer to provide a deduplicated,
compressed, and thinly provisioned block storage volume
uds
A kernel module that communicates with the Universal Deduplication Service (UDS) index on the
volume and analyzes data for duplicates.
The UDS index provides the foundation of the VDO product. For each new piece of data, it quickly
determines if that piece is identical to any previously stored piece of data. If the index finds a match, the
storage system can then internally reference the existing item to avoid storing the same information more
than once.
The UDS index runs inside the kernel as the uds kernel module.
The kvdo Linux kernel module provides block-layer deduplication services within the Linux Device
Mapper layer. In the Linux kernel, Device Mapper serves as a generic framework for managing pools of
block storage, allowing the insertion of block-processing modules into the storage stack between the kernel's block interface and the underlying device drivers.
The kvdo module is exposed as a block device that can be accessed directly for block storage or
presented through one of the many available Linux file systems, such as XFS or ext4. When kvdo
receives a request to read a (logical) block of data from a VDO volume, it maps the requested logical
block to the underlying physical block and then reads and returns the requested data.
When kvdo receives a request to write a block of data to a VDO volume, it first checks whether it is a
DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions holds, kvdo
updates its block map and acknowledges the request. Otherwise, a physical block is allocated for use by
the request.
1. It temporarily writes the data in the request to the allocated block and then acknowledges the
request.
3. If the VDO index contains an entry for a block with the same signature, kvdo reads the indicated
block and does a byte-by-byte comparison of the two blocks to verify that they are identical.
4. If they are indeed identical, then kvdo updates its block map so that the logical block points to
the corresponding physical block and releases the allocated physical block.
5. If the VDO index did not contain an entry for the signature of the block being written, or the
indicated block does not actually contain the same data, kvdo updates its block map to make the
temporary physical block permanent.
2. It will then attempt to deduplicate the block in the same manner as described above.
3. If the block turns out to be a duplicate, kvdo will update its block map and release the allocated
block. Otherwise, it will write the data in the request to the allocated block and update the block
map to make the physical block permanent.
Slabs
The physical storage of the VDO volume is divided into a number of slabs, each of which is a contiguous
region of the physical space. All of the slabs for a given volume will be of the same size, which may be
any power of 2 multiple of 128 MB up to 32 GB.
The default slab size is 2 GB in order to facilitate evaluating VDO on smaller test systems. A single VDO
volume may have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the
maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical
storage is 256 TB.
For a recommendation on what slab size to choose depending on your physical volume size, see
Table 29.1, “Recommended VDO Slab Sizes by Physical Volume Size”.
At least one entire slab will be reserved by VDO for metadata, and therefore cannot be used for storing
user data.
The size of a slab can be controlled by providing the --vdoSlabSize=megabytes option to the vdo
create command.
Recommended slab sizes (Table 29.1) range from 1 GB and 2 GB for smaller physical volumes up to 32 GB for larger ones.
Both physical size and available physical size describe the amount of disk space on the block device
that VDO can utilize:
Physical size is the same size as the underlying block device. VDO uses this storage for:
Available physical size is the portion of the physical size that VDO is able to use for user data.
It is equivalent to the physical size minus the size of the metadata, minus the remainder after
dividing the volume into slabs by the given slab size.
For examples of how much storage VDO metadata require on block devices of different sizes, see
Section 29.2.3, “Examples of VDO System Requirements by Physical Volume Size”.
Logical Size
If the --vdoLogicalSize option is not specified, the logical volume size defaults to the available
physical volume size. Note that, in Figure 29.1, “VDO Disk Organization”, the VDO deduplicated storage
target sits completely on top of the block device, meaning the physical size of the VDO volume is the
same size as the underlying block device.
VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute
maximum logical size of 4PB.
vdo
Creates, configures, and controls VDO volumes
vdostats
Provides utilization and performance statistics
RAM
Each VDO volume has two distinct memory requirements:
The VDO module requires 370 MB plus an additional 268 MB per each 1 TB of physical storage
managed.
The Universal Deduplication Service (UDS) index requires a minimum of 250 MB of DRAM,
which is also the default amount that deduplication uses. For details on the memory usage of
UDS, see Section 29.2.1, “UDS Index Memory Requirements”.
Storage
A single VDO volume can be configured to use up to 256 TB of physical storage. See Section 29.2.2,
“VDO Storage Requirements” for the calculations to determine the usable size of a VDO-managed
volume from the physical size of the storage pool the VDO is given.
LVM
Python 2.7
The yum package manager will install all necessary software dependencies automatically.
On top of VDO: LVM cache, LVM Logical Volumes, LVM snapshots, and LVM Thin Provisioning.
IMPORTANT
VDO supports two write modes: sync and async. When VDO is in sync mode, writes to
the VDO device are acknowledged when the underlying storage has written the data
permanently. When VDO is in async mode, writes are acknowledged before being
written to persistent storage.
It is critical to set the VDO write policy to match the behavior of the underlying storage. By
default, VDO write policy is set to the auto option, which selects the appropriate policy
automatically.
For more information, see Section 29.4.2, “Selecting VDO Write Modes”.
The UDS index consists of two parts: a compact representation used in memory that contains at most one entry per unique block, and an on-disk component which records the associated block names presented to the index as they occur, in order.
The on-disk component maintains a bounded history of data passed to UDS. UDS provides deduplication
advice for data that falls within this deduplication window, containing the names of the most recently
seen blocks. The deduplication window allows UDS to index data as efficiently as possible while limiting
the amount of memory required to index large data repositories. Despite the bounded nature of the
deduplication window, most datasets which have high levels of deduplication also exhibit a high degree
of temporal locality — in other words, most deduplication occurs among sets of blocks that were written
at about the same time. Furthermore, in general, data being written is more likely to duplicate data that
was recently written than data that was written a long time ago. Therefore, for a given workload over a
given time interval, deduplication rates will often be the same whether UDS indexes only the most recent
data or all the data.
Because duplicate data tends to exhibit temporal locality, it is rarely necessary to index every block in
the storage system. Were this not so, the cost of index memory would outstrip the savings of reduced
storage costs from deduplication. Index size requirements are more closely related to the rate of data
ingestion. For example, consider a storage system with 100 TB of total capacity but with an ingestion
rate of 1 TB per week. With a deduplication window of 4 TB, UDS can detect most redundancy among
the data written within the last month.
UDS's Sparse Indexing feature (the recommended mode for VDO) further exploits temporal locality by
attempting to retain only the most relevant index entries in memory. UDS can maintain a deduplication
window that is ten times larger while using the same amount of memory. While the sparse index provides
the greatest coverage, the dense index provides more advice. For most workloads, given the same
amount of memory, the difference in deduplication rates between dense and sparse indexes is
negligible.
The memory required for the index is determined by the desired size of the deduplication window:
For a dense index, UDS will provide a deduplication window of 1 TB per 1 GB of RAM. A 1 GB
index is generally sufficient for storage systems of up to 4 TB.
For a sparse index, UDS will provide a deduplication window of 10 TB per 1 GB of RAM. A 1 GB
sparse index is generally sufficient for up to 40 TB of physical storage.
For concrete examples of UDS Index memory requirements, see Section 29.2.3, “Examples of VDO
System Requirements by Physical Volume Size”
The first type scales with the physical size of the VDO volume and uses approximately 1 MB
for each 4 GB of physical storage plus an additional 1 MB per slab.
The second type scales with the logical size of the VDO volume and consumes
approximately 1.25 MB for each 1 GB of logical storage, rounded up to the nearest slab.
The UDS index is stored within the VDO volume group and is managed by the associated VDO
instance. The amount of storage required depends on the type of index and the amount of RAM
allocated to the index. For each 1 GB of RAM, a dense UDS index will use 17 GB of storage,
and a sparse UDS index will use 170 GB of storage.
For concrete examples of VDO storage requirements, see Section 29.2.3, “Examples of VDO System
Requirements by Physical Volume Size”
In the primary storage case, the UDS index is between 0.01% to 25% the size of the physical volume.
Table 29.2. VDO Storage and Memory Requirements for Primary Storage
In the backup storage case, the UDS index covers the size of the backup set but is not bigger than the
physical volume. If you expect the backup set or the physical size to grow in the future, factor this into
the index size.
Table 29.3. VDO Storage and Memory Requirements for Backup Storage
29.3.1. Introduction
Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication,
compression, and thin provisioning. When you set up a VDO volume, you specify a block device on
which to construct your VDO volume and the amount of logical storage you plan to present.
When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1
logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it
as 10 TB of logical storage.
For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical
to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage.
In either case, you can simply put a file system on top of the logical device presented by VDO and then
use it directly or as part of a distributed cloud storage architecture.
the direct-attached use case for virtualization servers, such as those built using Red Hat
Virtualization, and
the cloud storage use case for object-based distributed storage clusters, such as those built
using Ceph Storage.
NOTE
This chapter provides examples for configuring VDO for use with a standard Linux file system that can
be easily deployed for either use case; see the diagrams in Section 29.3.5, “Deployment Examples”.
vdo
kmod-kvdo
To install VDO, use the yum package manager to install the RPM packages:
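For example:
# yum install vdo kmod-kvdo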
In all the following steps, replace vdo_name with the identifier you want to use for your VDO volume; for
example, vdo1.
NOTE
Before creating volumes, VDO uses LVM utilities, such as pvcreate --test, to validate the block device.
# vdo create \
--name=vdo_name \
--device=block_device \
--vdoLogicalSize=logical_size
Replace block_device with the persistent name of the block device where you want to create
the VDO volume. For example, /dev/disk/by-id/scsi-
3600508b1001c264ad2af21e903ad031f.
IMPORTANT
Use a persistent device name. If you use a non-persistent device name, then
VDO might fail to start properly in the future if the device name changes.
Replace logical_size with the amount of logical storage that the VDO volume should present:
For active VMs or container storage, use logical size that is ten times the physical size
of your block device. For example, if your block device is 1 TB in size, use 10T here.
For object storage, use logical size that is three times the physical size of your block
device. For example, if your block device is 1 TB in size, use 3T here.
For example, to create a VDO volume for container storage on a 1 TB block device, you
might use:
# vdo create \
--name=vdo1 \
--device=/dev/disk/by-id/scsi-
3600508b1001c264ad2af21e903ad031f \
--vdoLogicalSize=10T
When a VDO volume is created, VDO adds an entry to the /etc/vdoconf.yml configuration
file. The vdo.service systemd unit then uses the entry to start the volume by default.
IMPORTANT
If a failure occurs when creating the VDO volume, remove the volume to clean up.
See Section 29.4.3.1, “Removing an Unsuccessfully Created Volume” for details.
# mkfs.xfs -K /dev/mapper/vdo_name
4. To configure the file system to mount automatically, use either the /etc/fstab file or a
systemd mount unit:
If you decide to use the /etc/fstab configuration file, add one of the following lines to the
file:
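For an XFS file system mounted at /mnt/vdo_name, the entry might look like the following (the mount options shown are typical for a VDO-backed mount and can be adjusted):
/dev/mapper/vdo_name /mnt/vdo_name xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0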
Alternatively, if you decide to use a systemd unit, create a systemd mount unit file with the
appropriate filename. For the mount point of your VDO volume, create the
/etc/systemd/system/mnt-vdo_name.mount file with the following content:
[Unit]
Description = VDO unit file to mount file system
name = vdo_name.mount
Requires = vdo.service
After = multi-user.target
Conflicts = umount.target
[Mount]
What = /dev/mapper/vdo_name
Where = /mnt/vdo_name
Type = xfs
[Install]
WantedBy = multi-user.target
5. Enable the discard feature for the file system on your VDO device. Both batch and online
operations work with VDO.
For information on how to set up the discard feature, see Section 2.4, “Discard Unused
Blocks”.
VDO space usage and efficiency can be monitored using the vdostats utility:
# vdostats --human-readable
To see how VDO can be deployed successfully on a KVM server configured with Direct Attached
Storage, see Figure 29.2, “VDO Deployment with KVM”.
For more information on VDO deployment, see Section 29.5, “Deployment Scenarios”.
The VDO systemd unit is installed and enabled by default when the vdo package is installed. This unit
automatically runs the vdo start --all command at system startup to bring up all activated VDO
volumes. See Section 29.4.6, “Automatically Starting VDO Volumes at System Boot” for more
information.
To stop a given VDO volume, or all VDO volumes, and the associated UDS index(es), use one of these
commands:
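For example, assuming a volume named my_vdo:
# vdo stop --name=my_vdo
# vdo stop --all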
If restarted after an unclean shutdown, VDO will perform a rebuild to verify the consistency of its
metadata and will repair it if necessary. Rebuilds are automatic and do not require user intervention. See
Section 29.4.5, “Recovering a VDO Volume After an Unclean Shutdown” for more information on the
rebuild process.
In synchronous mode, all writes that were acknowledged by VDO prior to the shutdown will be
rebuilt.
In asynchronous mode, all writes that were acknowledged prior to the last acknowledged flush
request will be rebuilt.
In either mode, some writes that were either unacknowledged or not followed by a flush may also be
rebuilt.
For details on VDO write modes, see Section 29.4.2, “Selecting VDO Write Modes”.
When VDO is in sync mode, the layers above it assume that a write command writes data to
persistent storage. As a result, it is not necessary for the file system or application, for example,
to issue FLUSH or Force Unit Access (FUA) requests to cause the data to become persistent at
critical points.
VDO must be set to sync mode only when the underlying storage guarantees that data is written
to persistent storage when the write command completes. That is, the storage must either have
no volatile write cache, or have a write through cache.
When VDO is in async mode, the data is not guaranteed to be written to persistent storage
when a write command is acknowledged. The file system or application must issue FLUSH or
FUA requests to ensure data persistence at critical points in each transaction.
VDO must be set to async mode if the underlying storage does not guarantee that data is
written to persistent storage when the write command completes; that is, when the storage has a
volatile write back cache.
For information on how to find out if a device uses volatile cache or not, see the section called
“Checking for a Volatile Cache”.
The auto mode automatically selects sync or async based on the characteristics of each
device. This is the default option.
For a more detailed theoretical overview of how write policies operate, see the section called “Overview
of VDO Write Policies”.
To set a write policy, use the --writePolicy option. This can be specified either when creating a VDO
volume as in Section 29.3.3, “Creating a VDO Volume” or when modifying an existing VDO volume with
the changeWritePolicy subcommand:
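For example:
# vdo changeWritePolicy --writePolicy=sync|async|auto --name=vdo_name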
IMPORTANT
Using the incorrect write policy might result in data loss on power failure.
$ cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type'
write back
$ cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type'
None
Additionally, in the kernel boot log, you can find whether the above mentioned devices have a write
cache or not:
See the Viewing and Managing Log Files chapter in the System Administrator's Guide for more
information on reading the system log.
NOTE
You should configure VDO to use the sync write policy if the cache_type value is none
or write through.
Prior to removing a VDO volume, unmount file systems and stop applications that are using the storage.
The vdo remove command removes the VDO volume and its associated UDS index, as well as logical
volumes where they reside.
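For example, to remove a volume named my_vdo:
# vdo remove --name=my_vdo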
If a failure occurs when the vdo utility is creating a VDO volume, the volume is left in an intermediate
state. This might happen when, for example, the system crashes, power fails, or the administrator
interrupts a running vdo create command.
To clean up from this situation, remove the unsuccessfully created volume with the --force option:
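For example:
# vdo remove --force --name=my_vdo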
The --force option is required because the administrator might have caused a conflict by changing the
system configuration since the volume was unsuccessfully created. Without the --force option, the
vdo remove command fails with the following message:
[...]
A previous operation failed.
Recovery from the failure either failed or was interrupted.
Add '--force' to 'remove' to perform the following cleanup.
Steps to clean up VDO my_vdo:
umount -f /dev/mapper/my_vdo
udevadm settle
dmsetup remove my_vdo
vdo: ERROR - VDO volume my_vdo previous operation (create) is incomplete
In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an
extremely efficient indexing data structure, requiring approximately one-tenth of a byte of DRAM per
block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block.
The minimum configuration of this index uses 256 MB of DRAM and approximately 25 GB of space on
disk. To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to
the vdo create command. This configuration results in a deduplication window of 2.5 TB (meaning it
will remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate
for deduplicating storage pools that are up to 10 TB in size.
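A sketch of such a creation command, assuming a volume named my_vdo on the device /dev/sdb (both names are illustrative):
# vdo create --name=my_vdo --device=/dev/sdb --sparseIndex=enabled --indexMem=0.25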
The default configuration of the index, however, is to use a dense index. This index is considerably less
efficient (by a factor of 10) in DRAM, but it has much lower (also by a factor of 10) minimum required
disk space, making it more convenient for evaluation in constrained environments.
In general, a deduplication window which is one quarter of the physical size of a VDO volume is a
recommended configuration. However, this is not an actual requirement. Even small deduplication
windows (compared to the amount of physical storage) can find significant amounts of duplicate data in
many use cases. Larger windows may also be used, but in most cases there will be little additional
benefit to doing so.
Speak with your Red Hat Technical Account Manager representative for additional guidelines on tuning
this important system parameter.
If VDO was running on synchronous storage and write policy was set to sync, then all data
written to the volume will be fully recovered.
If the write policy was async, then some writes may not be recovered if they were not made
durable by sending VDO a FLUSH command, or a write I/O tagged with the FUA flag (force unit
access). This is accomplished from user mode by invoking a data integrity operation like fsync,
fdatasync, sync, or umount.
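For instance, a minimal user-mode sketch: an application calls fsync() or fdatasync() on its file descriptor after critical writes, or the administrator flushes everything from the shell:
# sync    # flush all dirty data and metadata to stable storage before depending on it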
In the majority of cases, most of the work of rebuilding an unclean VDO volume can be done after the
VDO volume has come back online and while it is servicing read and write requests. Initially, the amount
of space available for write requests may be limited. As more of the volume's metadata is recovered,
more free space may become available. Furthermore, data written while the VDO is recovering may fail
to deduplicate against data written before the crash if that data is in a portion of the volume which has not
yet been recovered. Data may be compressed while the volume is being recovered. Previously
compressed blocks may still be read or overwritten.
During an online recovery, a number of statistics will be unavailable: for example, blocks in use and
blocks free. These statistics will become available once the rebuild is complete.
VDO can recover from most hardware and software errors. If a VDO volume cannot be recovered
successfully, it is placed in a read-only mode that persists across volume restarts. Once a volume is in
read-only mode, there is no guarantee that data has not been lost or corrupted. In such cases, Red Hat
recommends copying the data out of the read-only volume and possibly restoring the volume from
backup. (The operating mode attribute of vdostats indicates whether a VDO volume is in read-only
mode.)
If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume
metadata so the volume can be brought back online and made available. Again, the integrity of the rebuilt
data cannot be guaranteed.
To force a rebuild of a read-only VDO volume, first stop the volume if it is running:
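For example, a sketch of the stop-and-rebuild sequence, assuming a volume named my_vdo:
# vdo stop --name=my_vdo
# vdo start --name=my_vdo --forceRebuild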
To prevent certain existing volumes from being started automatically, deactivate those volumes by
running either of these commands:
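A sketch of the two forms, assuming the deactivate subcommand (the counterpart of the activate subcommand listed in Section 29.7.1) and an example volume named my_vdo:
# vdo deactivate --name=my_vdo
# vdo deactivate --all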
You can also create a VDO volume that does not start automatically by adding the --
activate=disabled option to the vdo create command.
For systems that place LVM volumes on top of VDO volumes as well as beneath them (for example,
Figure 29.5, “Deduplicated Unified Storage”), it is vital to start services in the right order:
1. The lower layer of LVM must be started first. In most systems, starting this layer is configured
automatically when the LVM2 package is installed.
2. The VDO volumes must then be started (typically by the vdo systemd service, which starts all
activated VDO volumes).
3. Finally, additional scripts must be run in order to start LVM volumes or other services on top of
the now running VDO volumes.
Stopping deduplication on a running VDO volume stops the associated UDS index and informs the
VDO volume that deduplication is no longer active. Re-enabling deduplication restarts the associated
UDS index and informs the VDO volume that deduplication is active again.
You can also disable deduplication when creating a new VDO volume by adding the --
deduplication=disabled option to the vdo create command.
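Hedged examples of these operations, assuming a volume named my_vdo on /dev/sdb and the disableDeduplication/enableDeduplication subcommands (counterparts of the compression subcommands listed in Section 29.7.1):
# vdo disableDeduplication --name=my_vdo
# vdo enableDeduplication --name=my_vdo
# vdo create --name=my_vdo --device=/dev/sdb --deduplication=disabled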
29.4.8.1. Introduction
In addition to block-level deduplication, VDO also provides inline block-level compression using the
HIOPS Compression™ technology. While deduplication is the optimal solution for virtual machine
environments and backup applications, compression works very well with structured and unstructured file
formats that do not typically exhibit block-level redundancy, such as log files and databases.
Compression operates on blocks that have not been identified as duplicates. When unique data is seen
for the first time, it is compressed. Subsequent copies of data that have already been stored are
deduplicated without requiring an additional compression step. The compression feature is based on a
parallelized packaging algorithm that enables it to handle many compression operations at once. After
first storing the block and responding to the requestor, a best-fit packing algorithm finds multiple blocks
that, when compressed, can fit into a single physical block. After it is determined that a particular
physical block is unlikely to hold additional compressed blocks, it is written to storage and the
uncompressed blocks are freed and reused. By performing the compression and packaging operations
after having already responded to the requestor, using compression imposes a minimal latency penalty.
When creating a volume, you can disable compression by adding the --compression=disabled
option to the vdo create command.
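For example, assuming a volume named my_vdo on /dev/sdb (illustrative names):
# vdo create --name=my_vdo --device=/dev/sdb --compression=disabled
Compression can also be toggled on a running volume with the enableCompression and disableCompression subcommands:
# vdo disableCompression --name=my_vdo
# vdo enableCompression --name=my_vdo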
Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual
storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that
reason, storage systems using VDO must provide storage administrators with a way of monitoring the
size of the VDO's free pool. The size of this free pool may be determined by using the vdostats utility;
see Section 29.7.2, “vdostats” for details. The default output of this utility lists information for all running
VDO volumes in a format similar to the Linux df utility. For example:
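An illustrative sketch of the default output format; the device name and all numbers are hypothetical, and the column layout may differ slightly:
Device               1K-blocks   Used        Available   Use%   Space saving%
/dev/mapper/my_vdo   1932562432  427698104   1504864328  22%    21%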
If the size of VDO's free pool drops below a certain level, the storage administrator can take action by
deleting data (which will reclaim space whenever the deleted data is not duplicated), adding physical
storage, or even deleting LUNs.
VDO cannot reclaim space unless file systems communicate that blocks are free using DISCARD, TRIM,
or UNMAP commands. For file systems that do not use DISCARD, TRIM, or UNMAP, free space may be
manually reclaimed by storing a file consisting of binary zeros and then deleting that file.
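A sketch of this manual reclamation, assuming the file system is mounted at /mnt/vdo_fs (dd exits with an error when the file system fills up, which is expected here):
# dd if=/dev/zero of=/mnt/vdo_fs/zero-fill bs=1M
# rm /mnt/vdo_fs/zero-fill
# sync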
File systems may generally be configured to issue DISCARD requests in one of two ways (see the
example commands after this list):
Online discard
For file systems that support online discard, you can enable it by setting the discard option at mount
time.
Batch discard
Batch discard is a user-initiated operation that causes the file system to notify the block layer (VDO)
of any unused blocks. This is accomplished by sending the file system an ioctl request called
FITRIM.
You can use the fstrim utility (for example from cron) to send this ioctl to the file system.
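For example (the device name and mount point are hypothetical):
# mount -o discard /dev/mapper/my_vdo /mnt/vdo_fs   # online discard enabled at mount time
# fstrim /mnt/vdo_fs                                # one-off batch discard (sends FITRIM)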
For more information on the discard feature, see Section 2.4, “Discard Unused Blocks”.
It is also possible to manage free space when the storage is being used as a block storage target without
a file system. For example, a single VDO volume can be carved up into multiple subvolumes by
installing the Logical Volume Manager (LVM) on top of it. Before deprovisioning a volume, the
blkdiscard command can be used in order to free the space previously used by that logical volume.
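A sketch, assuming a logical volume named lv1 in a volume group vdo_vg built on top of the VDO device (both names are illustrative):
# blkdiscard /dev/vdo_vg/lv1
# lvremove vdo_vg/lv1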
LVM supports the REQ_DISCARD command and will forward the requests to VDO at the appropriate
logical block addresses in order to free the space. If other volume managers are being used, they would
also need to support REQ_DISCARD, or equivalently, UNMAP for SCSI devices or TRIM for ATA devices.
VDO volumes (or portions of volumes) can also be provisioned to hosts on a Fibre Channel storage
fabric or an Ethernet network using SCSI target frameworks such as LIO or SCST. SCSI initiators can
use the UNMAP command to free space on thinly provisioned storage targets, but the SCSI target
framework will need to be configured to advertise support for this command. This is typically done by
enabling thin provisioning on these volumes. Support for UNMAP can be verified on Linux-based SCSI
initiators by running the following command:
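One such command, assuming the sg3_utils package is installed and /dev/sdc is the target device (both assumptions), reads the SCSI Block Limits VPD page:
# sg_vpd --page=0xb0 /dev/sdc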
In the output, verify that the "Maximum unmap LBA count" value is greater than zero.
Growing the logical size in this way allows storage administrators to initially create VDO volumes with a
logical size small enough to be safe from running out of space. After some period of time, the actual rate
of data reduction can be evaluated and, if it is sufficient, the logical size of the VDO volume can be grown
to take advantage of the space savings.
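A sketch of growing the logical size with the growLogical subcommand (the volume name is illustrative and new_logical_size is a placeholder):
# vdo growLogical --name=my_vdo --vdoLogicalSize=new_logical_size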
1. Increase the size of the physical storage underneath the VDO volume. The exact procedure
depends on the type of the device. For example, to resize an MBR partition, use the fdisk utility
as described in Section 13.5, “Resizing a Partition with fdisk”.
2. Use the growPhysical option to add the new physical storage space to the VDO volume:
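For example, assuming a volume named my_vdo:
# vdo growPhysical --name=my_vdo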
29.5.3. LVM
More feature-rich systems may make further use of LVM to provide multiple LUNs that are all backed by
the same deduplicated storage pool. In Figure 29.5, “Deduplicated Unified Storage”, the VDO target is
registered as a physical volume so that it can be managed by LVM. Multiple logical volumes (LV1 to
LV4) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified
block/file access to the underlying deduplicated storage pool.
Deduplicated unified storage design allows for multiple file systems to collectively use the same
deduplication domain through the LVM tools. Also, file systems can take advantage of LVM snapshot,
copy-on-write, and shrink or grow features, all on top of VDO.
29.5.4. Encryption
Data security is critical today. More and more companies have internal policies regarding data
encryption. Linux Device Mapper mechanisms such as DM-Crypt are compatible with VDO. Encrypting
VDO volumes will help ensure data security, and any file systems above VDO still gain the deduplication
feature for disk optimization. Note that applying encryption above VDO results in little if any data
deduplication; encryption renders duplicate blocks different before VDO can deduplicate them.
The settings most likely to require adjustment when tuning VDO are the number of threads assigned to
different types of work, the CPU affinity settings for those threads, and cache settings.
Logical zone threads
LBNs are divided into chunks (a block map page contains a bit over 3 MB of LBNs), and these chunks
are grouped into zones that are divided up among the threads.
Processing should be distributed fairly evenly across the threads, though some unlucky access
patterns may occasionally concentrate work in one thread or another. For example, frequent access
to LBNs within a given block map page will cause one of the logical threads to process all of those
operations.
The number of logical zone threads can be controlled using the --vdoLogicalThreads=thread
count option of the vdo command.
Physical zone threads
Like LBNs, PBNs are divided into chunks called slabs, which are further divided into zones and
assigned to worker threads that distribute the processing load.
I/O submission threads
The kvdo:bioQ threads submit I/O requests to the underlying storage. If these threads are frequently
shown in D state by the ps or top utilities, then VDO is frequently keeping the storage system busy
with I/O requests. This is generally good if the storage system can service multiple requests in parallel,
as some SSDs can, or if the request processing is pipelined. If thread CPU utilization is very low during
these periods, it may be possible to reduce the number of I/O submission threads.
CPU usage and memory contention are dependent on the device driver(s) beneath VDO. If CPU
utilization per I/O request increases as more threads are added then check for CPU, memory, or lock
contention in those device drivers.
The number of I/O submission threads can be controlled using the --vdoBioThreads=thread
count option of the vdo command.
CPU-processing threads
kvdo:cpuQ threads exist to perform any CPU-intensive work such as computing hash values or
compressing data blocks that do not block or require exclusive access to data structures associated
with other thread types.
Deduplication thread
The kvdo:dedupeQ thread takes queued I/O requests and contacts UDS. Because the socket buffer
can fill up if the server cannot process requests quickly enough, or if kernel memory is constrained by
other system activity, this work is done by a separate thread so that, if a thread blocks, other VDO
processing can continue. There is also a timeout mechanism in place to skip an I/O request after a
long delay (several seconds).
Journal thread
The kvdo:journalQ thread updates the recovery journal and schedules journal blocks for writing. A
VDO device uses only one journal, so this work cannot be split across threads.
Packer thread
The kvdo:packerQ thread, active in the write path when compression is enabled, collects data
blocks compressed by the kvdo:cpuQ threads to minimize wasted space. There is one packer data
structure, and thus one packer thread, per VDO device.
29.6.3.1. CPU/memory
The logical, physical, cpu, and I/O acknowledgement work can be spread across multiple threads, the
number of which can be specified during initial configuration or later if the VDO device is restarted.
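A sketch of specifying thread counts at creation time; the volume name, device, and counts shown are illustrative only, not recommendations:
# vdo create --name=my_vdo --device=/dev/sdb \
    --vdoLogicalThreads=2 --vdoPhysicalThreads=2 --vdoHashZoneThreads=2 \
    --vdoCpuThreads=4 --vdoBioThreads=4 --vdoAckThreads=2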
One core, or one thread, can do a finite amount of work during a given time. Having one thread compute
all data-block hash values, for example, would impose a hard limit on the number of data blocks that
could be processed per second. Dividing the work across multiple threads (and cores) relieves that
bottleneck.
As a thread or core approaches 100% usage, more work items will tend to queue up for processing.
While this may result in the CPU having fewer idle cycles, queueing delays and latency for individual I/O
requests will typically increase. According to some queueing theory models, utilization levels above 70%
or 80% can lead to excessive delays that can be several times longer than the normal processing time.
Thus it may be helpful to distribute work further for a thread or core with 50% or higher utilization, even if
those threads or cores are not always busy.
In the opposite case, where a thread or CPU is very lightly loaded (and thus very often asleep), supplying
work for it to do is more likely to incur some additional cost. (A thread attempting to wake another thread
must acquire a global lock on the scheduler's data structures, and may potentially send an inter-
processor interrupt to transfer work to another core). As more cores are configured to run VDO threads, it
becomes less likely that a given piece of data will be cached as work is moved between threads or as
threads are moved between cores — so too much work distribution can also degrade performance.
The work performed by the logical, physical, and CPU threads per I/O request will vary based on the
type of workload, so systems should be tested with the different types of workloads they are expected to
service.
Write operations in sync mode involving successful deduplication will entail extra I/O operations (reading
the previously stored data block), some CPU cycles (comparing the new data block to confirm that they
match), and journal updates (remapping the LBN to the previously-stored data block's PBN) compared to
writes of new data. When duplication is detected in async mode, data write operations are avoided at the
cost of the read and compare operations described above; only one journal update can happen per write,
whether or not duplication is detected.
If compression is enabled, reads and writes of compressible data will require more processing by the
CPU threads.
Blocks containing all zero bytes (a zero block) are treated specially, as they commonly occur. A special
entry is used to represent such data in the block map, and the zero block is not written to or read from the
storage device. Thus, tests that write or read all-zero blocks may produce misleading results. The same
is true, to a lesser degree, of tests that write over zero blocks or uninitialized blocks (those that were
never written since the VDO device was created) because reference count updates done by the physical
threads are not required for zero or uninitialized blocks.
Acknowledging I/O operations is the only task that is not significantly affected by the type of work being
done or the data being operated upon, as one callback is issued per I/O operation.
Accessing memory across NUMA node boundaries takes longer than accessing memory on the local
node. With Intel processors sharing the last-level cache between cores on a node, cache contention
between nodes is a much greater problem than cache contention within a node.
Tools such as top cannot distinguish between CPU cycles that do work and cycles that are stalled.
These tools interpret cache contention and slow memory accesses as actual work. As a result, moving a
thread between nodes may appear to reduce the thread's apparent CPU utilization while increasing the
number of operations it performs per second.
While many of VDO's kernel threads maintain data structures that are accessed by only one thread, they
do frequently exchange messages about the I/O requests themselves. Contention may be high if VDO
threads are run on multiple nodes, or if threads are reassigned from one node to another by the
scheduler. If it is possible to run other VDO-related work (such as I/O submissions to VDO, or interrupt
processing for the storage device) on the same node as the VDO threads, contention may be further
reduced. If one node does not have sufficient cycles to run all VDO-related work, memory contention
should be considered when selecting threads to move onto other nodes.
If practical, collect VDO threads on one node using the taskset utility. If other VDO-related work can
also be run on the same node, that may further reduce contention. In that case, if one node lacks the
CPU power to keep up with processing demands then memory contention must be considered when
choosing threads to move onto other nodes. For example, if a storage device's driver has a significant
number of data structures to maintain, it may help to move both the device's interrupt handling and
VDO's I/O submissions (the bio threads that call the device's driver code) to another node. Keeping I/O
acknowledgment (ack threads) and higher-level I/O submission threads (user-mode threads doing direct
I/O, or the kernel's page cache flush thread) paired is also good practice.
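A sketch of pinning VDO kernel threads to the CPUs of one NUMA node; the CPU list 0-7 is an assumption, and actual thread names should be checked first:
# ps -eo pid,comm | grep kvdo    # list VDO kernel thread PIDs and names
# taskset -cp 0-7 <PID>          # pin one thread to CPUs 0-7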
Performance measurements are further complicated by CPUs that dynamically vary their frequencies
based on workload, because the time needed to accomplish a specific piece of work may vary due to
other work the CPU has been doing, even without task switching or cache contention.
29.6.3.2. Caching
VDO caches a number of block map pages for efficiency. The cache size defaults to 128 MB, but it can
be increased with the --blockMapCacheSize=megabytes option of the vdo command. Using a
larger cache may produce significant benefits for random-access workloads.
A second cache may be used for caching data blocks read from the storage system to verify VDO's
deduplication advice. If similar data blocks are seen within a short time span, the number of I/O
operations needed may be reduced.
The read cache also holds storage blocks containing compressed user data. If multiple compressible
blocks were written within a short period of time, their compressed versions may be located within the
same storage system block. Likewise, if they are read within a short time, caching may avoid the need
for additional reads from the storage system.
The vdo command's --readCache={enabled | disabled} option controls whether a read cache is
used. If enabled, the cache has a minimum size of 8 MB, but it can be increased with the --
readCacheSize=megabytes option. Managing the read cache incurs a slight overhead, so it may not
increase performance if the storage system is fast enough. The read cache is disabled by default.
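For example, a sketch of creating a volume with a larger block map cache and the read cache enabled (the names and sizes are illustrative):
# vdo create --name=my_vdo --device=/dev/sdb \
    --blockMapCacheSize=256M --readCache=enabled --readCacheSize=64M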
For generic hard drives in a RAID configuration, one or two bio threads may be sufficient for submitting
I/O operations. If the storage device driver requires its I/O submission threads to do significantly more
work (updating driver data structures or communicating with the device) such that one or two threads are
very busy and storage devices are often idle, the bio thread count can be increased to compensate.
However, depending on the driver implementation, raising the thread count too high may lead to cache or
spin lock contention. If device access timing is not uniform across all NUMA nodes, it may be helpful to
run bio threads on the node "closest" to the storage device controllers.
If a device driver does significant work in its interrupt handler and does not use a threaded IRQ handler,
it may prevent the scheduler from providing the best performance. CPU time spent servicing hardware
interrupts may look like normal VDO (or other) kernel thread execution in some ways. For example, if
hardware IRQ handling required 30% of a core's cycles, a busy kernel thread on the same core could
only use the remaining 70%. However, if the work queued up for that thread demanded 80% of the core's
cycles, the thread would never catch up, and the scheduler might simply leave that thread to run
impeded on that core instead of switching that thread to a less busy core.
Using such a device driver under a heavy VDO workload may require a large number of cycles to service
hardware interrupts (the %hi indicator in the header of the top display). In that case it may help to
assign IRQ handling to certain cores and adjust the CPU affinity of VDO kernel threads not to run on
those cores.
The maximum allowed size of DISCARD (TRIM) operations to a VDO device can be tuned via
/sys/kvdo/max_discard_sectors, based on system usage. The default is 8 sectors (that is, one 4
KB block). Larger sizes may be specified, though VDO will still process them in a loop, one block at a
time, ensuring that metadata updates for one discarded block are written to the journal and flushed to
disk before starting on the next block.
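For example, a sketch of raising the limit; the value is illustrative (sectors are 512 bytes, so 2048 sectors is 1 MiB):
# echo 2048 > /sys/kvdo/max_discard_sectors
# cat /sys/kvdo/max_discard_sectors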
When using a VDO volume as a local file system, Red Hat testing found that a small discard size works
best, as the generic block-device code in the Linux kernel will break large discard requests into multiple
smaller ones and submit them in parallel. If there is low I/O activity on the device, VDO can process
many smaller requests concurrently and much more quickly than one large request.
If the VDO device is to be used as a SCSI target, the initiator and target software introduce additional
factors to consider. If the target SCSI software is SCST, it reads the maximum discard size and relays it
to the initiator. (Red Hat has not attempted to tune VDO configurations in conjunction with LIO SCSI
target code.)
Because the Linux SCSI initiator code allows only one discard operation at a time, discard requests that
exceed the maximum size would be broken into multiple smaller discards and sent, one at a time, to the
target system (and to VDO). So, in addition to VDO processing a number of small discard operations in
serial, the round-trip communication time between the two systems adds additional latency.
Setting a larger maximum discard size can reduce this communication overhead, though that larger
request is passed in its entirety to VDO and processed one 4 KB block at a time. While there is no per-
block communication delay, additional processing time for the larger block may cause the SCSI initiator
software to time out.
For SCSI target usage, Red Hat recommends configuring the maximum discard size to be moderately
large while still keeping the typical discard time well within the initiator's timeout setting. An extra round-
trip cost every few seconds, for example, should not significantly affect performance, and SCSI initiators
with timeouts of 30 or 60 seconds should not time out.
Thread or CPU utilization above 70%, as seen in utilities such as top or ps, generally implies that too
much work is being concentrated in one thread or on one CPU. However, in some cases it could mean
that a VDO thread was scheduled to run on the CPU but no work actually happened; this scenario could
occur with excessive hardware interrupt handler processing, memory contention between cores or
NUMA nodes, or contention for a spin lock.
When using the top utility to examine system performance, Red Hat suggests running top -H to show
all process threads separately and then entering the 1 f j keys, followed by the Enter/Return key; the
top command then displays the load on individual CPU cores and identifies the CPU on which each
process or thread last ran. This information can provide the following insights:
If a core has low %id (idle) and %wa (waiting-for-I/O) values, it is being kept busy with work of
some kind.
If the %hi value for a core is very low, that core is doing normal processing work, which is being
load-balanced by the kernel scheduler. Adding more cores to that set may reduce the load as
long as it does not introduce NUMA contention.
If the %hi for a core is more than a few percent and only one thread is assigned to that core, and
%id and %wa are zero, the core is over-committed and the scheduler is not addressing the
situation. In this case the kernel thread or the device interrupt handling should be reassigned to
keep them on separate cores.
The perf utility can examine the performance counters of many CPUs. Red Hat suggests using the
perf top subcommand as a starting point to examine the work a thread or processor is doing. If, for
example, the bioQ threads are spending many cycles trying to acquire spin locks, there may be too
much contention in the device driver below VDO, and reducing the number of bioQ threads might
alleviate the situation. High CPU use (in acquiring spin locks or elsewhere) could also indicate contention
between NUMA nodes if, for example, the bioQ threads and the device interrupt handler are running on
different nodes. If the processor supports them, counters such as stalled-cycles-backend,
cache-misses, and node-load-misses may be of interest.
The sar utility can provide periodic reports on multiple system statistics. The sar -d 1 command
reports block device utilization levels (percentage of the time they have at least one I/O operation in
progress) and queue lengths (number of I/O requests waiting) once per second. However, not all block
device drivers can report such information, so the usefulness of sar might depend on the device drivers
in use.
vdo
The vdo utility manages both the kvdo and UDS components of VDO.
vdostats
The vdostats utility displays statistics for each configured (or specified) device in a format similar to
the Linux df utility.
29.7.1. vdo
The vdo utility manages both the kvdo and UDS components of VDO.
Synopsis
Sub-Commands
Sub-Command Description
create Creates a VDO volume and its associated index and makes it available. If
--activate=disabled is specified, the VDO volume is created but
not made available. It will not overwrite an existing file system or
formatted VDO volume unless --force is given. This command must
be run with root privileges. Applicable options include:
--name=volume (required)
--device=device (required)
--activate={enabled | disabled}
--indexMem=gigabytes
--blockMapCacheSize=megabytes
--blockMapPeriod=period
--compression={enabled | disabled}
--confFile=file
--deduplication={enabled | disabled}
--emulate512={enabled | disabled}
--sparseIndex={enabled | disabled}
--vdoAckThreads=thread count
--vdoBioRotationInterval=I/O count
--vdoBioThreads=thread count
--vdoCpuThreads=thread count
--vdoHashZoneThreads=thread count
--vdoLogicalThreads=thread count
--vdoLogLevel=level
--vdoLogicalSize=megabytes
--vdoPhysicalThreads=thread count
--readCache={enabled | disabled}
--readCacheSize=megabytes
--vdoSlabSize=megabytes
--verbose
--logfile=pathname
remove Removes one or more stopped VDO volumes and associated indexes.
This command must be run with root privileges. Applicable options
include:
--confFile=file
--force
--verbose
--logfile=pathname
start Starts one or more stopped, activated VDO volumes and associated
services. This command must be run with root privileges. Applicable
options include:
--confFile=file
--forceRebuild
--verbose
--logfile=pathname
stop Stops one or more running VDO volumes and associated services. This
command must be run with root privileges. Applicable options include:
--confFile=file
--force
--verbose
--logfile=pathname
activate Activates one or more VDO volumes. Activated volumes can be started
using the start command. This command must be run with root
privileges. Applicable options include:
--confFile=file
--logfile=pathname
--verbose
deactivate Deactivates one or more VDO volumes. Deactivated volumes cannot be
started by the start command. This command must be run with root
privileges. Applicable options include:
--confFile=file
--verbose
--logfile=pathname
status Reports VDO system and volume status in YAML format. This command
does not require root privileges, though information will be incomplete if
run without them. Applicable options include:
--confFile=file
--verbose
--logfile=pathname
See Table 29.6, “VDO Status Output” for the output provided.
list Displays a list of started VDO volumes. If --all is specified, it displays
both started and non-started volumes. This command may be run with or
without root privileges. Applicable options include:
--all
--confFile=file
--logfile=pathname
--verbose
modify Modifies configuration parameters of one or all VDO volumes. Changes
take effect the next time the VDO device is started; already-running
devices are not affected. Applicable options include:
--blockMapCacheSize=megabytes
--blockMapPeriod=period
--confFile=file
--vdoAckThreads=thread count
--vdoBioThreads=thread count
--vdoCpuThreads=thread count
--vdoHashZoneThreads=thread count
--vdoLogicalThreads=thread count
--vdoPhysicalThreads=thread count
--readCache={enabled | disabled}
--readCacheSize=megabytes
--verbose
--logfile=pathname
changeWritePolicy Modifies the write policy of one or all running VDO volumes. This
command must be run with root privileges. Applicable options include:
--writePolicy={sync | async | auto} (required)
--confFile=file
--logfile=pathname
--verbose
enableDeduplication Enables deduplication on one or more VDO volumes. If the VDO volume
is running, this takes effect immediately. If the VDO volume is not running,
deduplication will be enabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:
--confFile=file
--verbose
--logfile=pathname
disableDeduplication Disables deduplication on one or more VDO volumes. If the VDO volume
is running, this takes effect immediately. If the VDO volume is not running,
deduplication will be disabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:
--confFile=file
--verbose
--logfile=pathname
enableCompression Enables compression on one or more VDO volumes. If the VDO volume
is running, this takes effect immediately. If the VDO volume is not running,
compression will be enabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:
--confFile=file
--verbose
--logfile=pathname
disableCompression Disables compression on one or more VDO volumes. If the VDO volume
is running, this takes effect immediately. If the VDO volume is not running,
compression will be disabled the next time the VDO volume is started.
This command must be run with root privileges. Applicable options
include:
--confFile=file
--verbose
--logfile=pathname
growLogical Adds logical space to a VDO volume. The volume must exist and must be
running. This command must be run with root privileges. Applicable
options include:
--name=volume (required)
--vdoLogicalSize=megabytes (required)
--confFile=file
--verbose
--logfile=pathname
growPhysical Adds physical space to a VDO volume. The volume must exist and must
be running. This command must be run with root privileges. Applicable
options include:
--name=volume (required)
--confFile=file
--verbose
--logfile=pathname
printConfigFile Prints the configuration file to stdout. This command requires root
privileges. Applicable options include:
--confFile=file
--logfile=pathname
--verbose
Options
Option Description
--indexMem=gigabytes Specifies the amount of UDS server memory in gigabytes; the default
size is 1 GB. The special decimal values 0.25, 0.5, 0.75 can be used, as
can any positive integer.
--all Indicates that the command should be applied to all configured VDO
volumes. May not be used with --name.
--blockMapCacheSize=megabytes Specifies the amount of memory allocated for caching block map pages;
the value must be a multiple of 4096. Using a value with a B (ytes),
K (ilobytes), M (egabytes), G (igabytes), T (erabytes), P (etabytes) or
E (xabytes) suffix is optional. If no suffix is supplied, the value will be
interpreted as megabytes. The default is 128M; the value must be at
least 128M and less than 16T. Note that there is a memory overhead of
15%.
--compression={enabled | disabled} Enables or disables compression within the VDO device. The default is
enabled. Compression may be disabled if necessary to maximize
performance or to speed processing of data that is unlikely to compress.
--deduplication={enabled | disabled} Enables or disables deduplication within the VDO device. The default is
enabled. Deduplication may be disabled in instances where data is not
expected to have good deduplication rates but compression is still
desired.
--forceRebuild Forces an offline rebuild before starting a read-only VDO volume so that
it may be brought back online and made available. This option may
result in data loss or corruption.
--logfile=pathname Specify the file to which this script's log messages are directed. Warning
and error messages are always logged to syslog as well.
--name=volume Operates on the specified VDO volume. May not be used with --all.
--device=device Specifies the absolute path of the device to use for VDO storage.
--activate={enabled | disabled} The argument disabled indicates that the VDO volume should only be
created. The volume will not be started or enabled. The default is
enabled.