=head1 NAME
perlipc - Perl interprocess communication (signals, fifos, pipes, safe subprocesses, sockets, and semaphores)
=head1 DESCRIPTION
The basic IPC facilities of Perl are built out of the good old Unix
signals, named pipes, pipe opens, the Berkeley socket routines, and SysV
IPC calls. Each is used in slightly different situations.
=head1 Signals
Perl uses a simple signal handling model: the %SIG hash contains names
or references of user-installed signal handlers. These handlers will
be called with an argument which is the name of the signal that
triggered it. A signal may be generated intentionally from a
particular keyboard sequence like control-C or control-Z, sent to you
from another process, or triggered automatically by the kernel when
special events transpire, like a child process exiting, your own process
running out of stack space, or hitting a process file-size limit.
For example, to trap an interrupt signal, set up a handler like this:
our $shucks;
sub catch_zap {
my $signame = shift;
$shucks++;
die "Somebody sent me a SIG$signame";
}
$SIG{INT} = __PACKAGE__ . "::catch_zap";
$SIG{INT} = \&catch_zap; # best strategy
Prior to Perl 5.8.0 it was necessary to do as little as you possibly
could in your handler; notice how all we do is set a global variable
and then raise an exception. That's because on most systems,
libraries are not re-entrant; particularly, memory allocation and I/O
routines are not. That meant that doing nearly I<anything> in your
handler could in theory trigger a memory fault and subsequent core
dump--see L</Deferred Signals (Safe Signals)> below.
The names of the signals are the ones listed out by C<kill -l> on your
system, or you can retrieve them using the CPAN module L<IPC::Signal>.
You may also choose to assign the strings C<"IGNORE"> or C<"DEFAULT"> as
the handler, in which case Perl will try to discard the signal or do the
default thing.
On most Unix platforms, the C<CHLD> (sometimes also known as C<CLD>) signal
has special behavior with respect to a value of C<"IGNORE">.
Setting C<$SIG{CHLD}> to C<"IGNORE"> on such a platform has the effect of
not creating zombie processes when the parent process fails to C<wait()>
on its child processes (i.e., child processes are automatically reaped).
Calling C<wait()> with C<$SIG{CHLD}> set to C<"IGNORE"> usually returns
C<-1> on such platforms.
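For example, here is a minimal sketch of automatic reaping; the forked
child is a stand-in for real work:

    # Assumption: we don't care about the children's exit statuses.
    $SIG{CHLD} = "IGNORE";      # children are reaped automatically

    defined(my $pid = fork()) || die "can't fork: $!";
    if ($pid == 0) {            # child
        # ... do the work ...
        exit 0;
    }
    # no wait() needed here; the child will not linger as a zombie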
Some signals can be neither trapped nor ignored, such as the KILL and STOP
(but not the TSTP) signals. Note that ignoring signals makes them disappear.
If you only want them blocked temporarily without them getting lost you'll
have to use the C<POSIX> module's L<sigprocmask|POSIX/SigProcMask>.
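For instance, here is one way to hold SIGINT across a critical section
using the POSIX primitives; the critical-section code is a placeholder:

    use POSIX qw(SIG_BLOCK SIG_UNBLOCK SIGINT);

    my $sigset = POSIX::SigSet->new(SIGINT);
    POSIX::sigprocmask(SIG_BLOCK, $sigset)
        || die "can't block SIGINT: $!";
    # ... critical section: a SIGINT arriving now is held, not lost ...
    POSIX::sigprocmask(SIG_UNBLOCK, $sigset)
        || die "can't unblock SIGINT: $!";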
Sending a signal to a negative process ID means that you send the signal
to the entire Unix process group. This code sends a hang-up signal to all
processes in the current process group, and also sets $SIG{HUP} to C<"IGNORE">
so it doesn't kill itself:
# block scope for local
{
local $SIG{HUP} = "IGNORE";
kill HUP => -getpgrp();
# snazzy writing of: kill("HUP", -getpgrp())
}
Another interesting signal to send is signal number zero. This doesn't
actually affect a child process, but instead checks whether it's alive
or has changed its UIDs.
unless (kill 0 => $kid_pid) {
warn "something wicked happened to $kid_pid";
}
Signal number zero may fail because you lack permission to send the
signal when directed at a process whose real or saved UID is not
identical to the real or effective UID of the sending process, even
though the process is alive. You may be able to determine the cause of
failure using C<$!> or C<%!>.
unless (kill(0 => $pid) || $!{EPERM}) {
warn "$pid looks dead";
}
You might also want to employ anonymous functions for simple signal
handlers:
$SIG{INT} = sub { die "\nOutta here!\n" };
SIGCHLD handlers require some special care. If a second child dies
while we're in the signal handler caused by the first death, we won't
get another signal. So we must loop here, or else we will leave the
unreaped child as a zombie. And the next time two children die, we get
another zombie. And so on.
use POSIX ":sys_wait_h";
$SIG{CHLD} = sub {
while ((my $child = waitpid(-1, WNOHANG)) > 0) {
$Kid_Status{$child} = $?;
}
};
# do something that forks...
Be careful: qx(), system(), and some modules for calling external commands
do a fork(), then wait() for the result. Thus, your signal handler
will be called. Because wait() was already called by system() or qx(),
the wait() in the signal handler will see no more zombies and will
therefore block.
The best way to prevent this issue is to use waitpid(), as in the following
example:
use POSIX ":sys_wait_h"; # for nonblocking read
my %children;
$SIG{CHLD} = sub {
# don't change $! and $? outside handler
local ($!, $?);
while ( (my $pid = waitpid(-1, WNOHANG)) > 0 ) {
delete $children{$pid};
cleanup_child($pid, $?);
}
};
while (1) {
my $pid = fork();
die "cannot fork" unless defined $pid;
if ($pid == 0) {
# ...
exit 0;
} else {
$children{$pid}=1;
# ...
system($command);
# ...
}
}
Signal handling is also used for timeouts in Unix. While safely
protected within an C<eval{}> block, you set a signal handler to trap
alarm signals and then schedule to have one delivered to you in some
number of seconds. Then try your blocking operation, clearing the alarm
when it's done but not before you've exited your C<eval{}> block. If it
goes off, you'll use die() to jump out of the block.
Here's an example:
my $ALARM_EXCEPTION = "alarm clock restart";
eval {
local $SIG{ALRM} = sub { die $ALARM_EXCEPTION };
alarm 10;
flock($fh, 2) # blocking write lock
|| die "cannot flock: $!";
alarm 0;
};
if ($@ && $@ !~ quotemeta($ALARM_EXCEPTION)) { die }
If the operation being timed out is system() or qx(), this technique
is liable to generate zombies. If this matters to you, you'll
need to do your own fork() and exec(), and kill the errant child process.
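A rough sketch of that approach, with a made-up command and timeout:

    my $pid = fork();
    defined($pid) || die "can't fork: $!";
    if ($pid == 0) {                      # child
        exec("some_slow_command")         # hypothetical command
            || die "can't exec: $!";
    }
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm 10;                         # hypothetical limit
        waitpid($pid, 0);
        alarm 0;
    };
    if ($@ && $@ eq "timeout\n") {
        kill "TERM", $pid;                # kill the errant child
        waitpid($pid, 0);                 # and reap it
    }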
For more complex signal handling, you might see the standard POSIX
module. Lamentably, this is almost entirely undocumented, but the
F<t/lib/posix.t> file from the Perl source distribution has
some examples in it.
=head2 Handling the SIGHUP Signal in Daemons
A process that usually starts when the system boots and shuts down
when the system is shut down is called a daemon (Disk And Execution
MONitor). If a daemon process has a configuration file which is
modified after the process has been started, there should be a way to
tell that process to reread its configuration file without stopping
the process. Many daemons provide this mechanism using a C<SIGHUP>
signal handler. When you want to tell the daemon to reread the file,
simply send it the C<SIGHUP> signal.
The following example implements a simple daemon, which restarts
itself every time the C<SIGHUP> signal is received. The actual code is
located in the subroutine C<code()>, which just prints some debugging
info to show that it works; it should be replaced with the real code.
#!/usr/bin/perl
use v5.36;
use POSIX ();
use FindBin ();
use File::Basename ();
use File::Spec::Functions qw(catfile);
$| = 1;
# make the daemon cross-platform, so exec always calls the script
# itself with the right path, no matter how the script was invoked.
my $script = File::Basename::basename($0);
my $SELF = catfile($FindBin::Bin, $script);
# POSIX unmasks the sigprocmask properly
$SIG{HUP} = sub {
print "got SIGHUP\n";
exec($SELF, @ARGV) || die "$0: couldn't restart: $!";
};
code();
sub code {
print "PID: $$\n";
print "ARGV: @ARGV\n";
my $count = 0;
while (1) {
sleep 2;
print ++$count, "\n";
}
}
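To try it out, start the script, note the PID it prints, and send that
PID a hangup from another terminal (the PID shown here is made up):

    % kill -HUP 2140

The script prints "got SIGHUP" and re-executes itself with the same
@ARGV.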
=head2 Deferred Signals (Safe Signals)
Before Perl 5.8.0, installing Perl code to deal with signals exposed you to
danger from two things. First, few system library functions are
re-entrant. If the signal interrupts while Perl is executing one function
(like malloc(3) or printf(3)), and your signal handler then calls the same
function again, you could get unpredictable behavior--often, a core dump.
Second, Perl isn't itself re-entrant at the lowest levels. If the signal
interrupts Perl while Perl is changing its own internal data structures,
similarly unpredictable behavior may result.
There were two things you could do, knowing this: be paranoid or be
pragmatic. The paranoid approach was to do as little as possible in your
signal handler. Set an existing integer variable that already has a
value, and return. This doesn't help you if you're in a slow system call,
which will just restart. That means you have to C<die> to longjmp(3) out
of the handler. Even this is a little cavalier for the true paranoiac,
who avoids C<die> in a handler because the system I<is> out to get you.
The pragmatic approach was to say "I know the risks, but prefer the
convenience", and to do anything you wanted in your signal handler,
and be prepared to clean up core dumps now and again.
Perl 5.8.0 and later avoid these problems by "deferring" signals. That is,
when the signal is delivered to the process by the system (to the C code
that implements Perl) a flag is set, and the handler returns immediately.
Then at strategic "safe" points in the Perl interpreter (e.g. when it is
about to execute a new opcode) the flags are checked and the Perl level
handler from %SIG is executed. The "deferred" scheme allows much more
flexibility in the coding of signal handlers as we know the Perl
interpreter is in a safe state, and that we are not in a system library
function when the handler is called. However the implementation does
differ from previous Perls in the following ways:
=over 4
=item Long-running opcodes
As the Perl interpreter looks at signal flags only when it is about
to execute a new opcode, a signal that arrives during a long-running
opcode (e.g. a regular expression operation on a very large string) will
not be seen until the current opcode completes.
If a signal of any given type fires multiple times during an opcode
(such as from a fine-grained timer), the handler for that signal will
be called only once, after the opcode completes; all other
instances will be discarded. Furthermore, if your system's signal queue
gets flooded to the point that there are signals that have been raised
but not yet caught (and thus not deferred) at the time an opcode
completes, those signals may well be caught and deferred during
subsequent opcodes, with sometimes surprising results. For example, you
may see alarms delivered even after calling C<alarm(0)> as the latter
stops the raising of alarms but does not cancel the delivery of alarms
raised but not yet caught. Do not depend on the behaviors described in
this paragraph as they are side effects of the current implementation and
may change in future versions of Perl.
=item Interrupting IO
When a signal is delivered (e.g., SIGINT from a control-C) the operating
system breaks into IO operations like I<read>(2), which is used to
implement Perl's readline() function, the C<< <> >> operator. On older
Perls the handler was called immediately (and as C<read> is not "unsafe",
this worked well). With the "deferred" scheme the handler is I<not> called
immediately, and if Perl is using the system's C<stdio> library that
library may restart the C<read> without returning to Perl to give it a
chance to call the %SIG handler. If this happens on your system the
solution is to use the C<:perlio> layer to do IO--at least on those handles
that you want to be able to break into with signals. (The C<:perlio> layer
checks the signal flags and calls %SIG handlers before resuming IO
operation.)
The default in Perl 5.8.0 and later is to automatically use
the C<:perlio> layer.
Note that it is not advisable to access a file handle within a signal
handler where that signal has interrupted an I/O operation on that same
handle. While perl will at least try hard not to crash, there are no
guarantees of data integrity; for example, some data might get dropped or
written twice.
Some networking library functions like gethostbyname() are known to have
their own implementations of timeouts which may conflict with your
timeouts. If you have problems with such functions, try using the POSIX
sigaction() function, which bypasses Perl safe signals. Be warned that
this does subject you to possible memory corruption, as described above.
Instead of setting C<$SIG{ALRM}>:
local $SIG{ALRM} = sub { die "alarm" };
try something like the following:
use POSIX qw(SIGALRM);
POSIX::sigaction(SIGALRM,
POSIX::SigAction->new(sub { die "alarm" }))
|| die "Error setting SIGALRM handler: $!\n";
Another way to disable the safe signal behavior locally is to use
the C<Perl::Unsafe::Signals> module from CPAN, which affects
all signals.
=item Restartable system calls
On systems that supported it, older versions of Perl used the
SA_RESTART flag when installing %SIG handlers. This meant that
restartable system calls would continue rather than returning when
a signal arrived. In order to deliver deferred signals promptly,
Perl 5.8.0 and later do I<not> use SA_RESTART. Consequently,
restartable system calls can fail (with $! set to C<EINTR>) in places
where they previously would have succeeded.
The default C<:perlio> layer retries C<read>, C<write>
and C<close> as described above; interrupted C<wait> and
C<waitpid> calls will always be retried.
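On handles where the retry is not done for you, the usual workaround is
to retry the call yourself when $! indicates C<EINTR>; a sketch, with a
hypothetical handle C<$fh>:

    use POSIX qw(EINTR);

    my ($buf, $bytes);
    do {
        $bytes = sysread($fh, $buf, 8192);
    } until (defined $bytes || $! != EINTR);
    defined $bytes || die "sysread failed: $!";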
=item Signals as "faults"
Certain signals like SEGV, ILL, BUS and FPE are generated by virtual memory
addressing errors and similar "faults". These are normally fatal: there is
little a Perl-level handler can do with them. So Perl delivers them
immediately rather than attempting to defer them.
It is possible to catch these with a C<%SIG> handler (see L<perlvar>),
but on top of the usual problems of "unsafe" signals the signal is likely
to get rethrown immediately on return from the signal handler, so such
a handler should C<die> or C<exit> instead.
=item Signals triggered by operating system state
On some operating systems certain signal handlers are supposed to "do
something" before returning. One example can be CHLD or CLD, which
indicates a child process has completed. On some operating systems the
signal handler is expected to C<wait> for the completed child
process. On such systems the deferred signal scheme will not work for
those signals: it does not do the C<wait>. Again the failure will
look like a loop as the operating system will reissue the signal because
there are completed child processes that have not yet been C<wait>ed for.
=back
If you want the old signal behavior back despite possible
memory corruption, set the environment variable C<PERL_SIGNALS> to
C<"unsafe">. This feature first appeared in Perl 5.8.1.
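For example, from the shell (the script name is a placeholder):

    % PERL_SIGNALS=unsafe perl my_script.pl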
=head1 Named Pipes
A named pipe (often referred to as a FIFO) is an old Unix IPC
mechanism for processes communicating on the same machine. It works
just like regular anonymous pipes, except that the
processes rendezvous using a filename and need not be related.
To create a named pipe, use the C<POSIX::mkfifo()> function.
use POSIX qw(mkfifo);
mkfifo($path, 0700) || die "mkfifo $path failed: $!";
You can also use the Unix command mknod(1), or on some
systems, mkfifo(1). These may not be in your normal path, though.
# system return val is backwards, so && not ||
#
$ENV{PATH} .= ":/etc:/usr/etc";
if ( system("mknod", $path, "p")
&& system("mkfifo", $path) )
{
die "mk{nod,fifo} $path failed";
}
A fifo is convenient when you want to connect a process to an unrelated
one. When you open a fifo, the program will block until there's something
on the other end.
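For instance, a reader on the other end can be this simple; the fifo
path is hypothetical, and the open() blocks until a writer shows up:

    my $path = "/tmp/myfifo";   # created earlier with mkfifo
    open(my $fifo, "<", $path) || die "can't open $path: $!";
    while (<$fifo>) {
        print;                  # echo whatever the writer sends
    }
    close($fifo) || die "can't close $path: $!";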
For example, let's say you'd like to have your F<.signature> file be a
named pipe that has a Perl program on the other end. Now every time any
program (like a mailer, news reader, finger program, etc.) tries to read
from that file, the reading program will read the new signature from your
program. We'll use the pipe-checking file-test operator, B<-p>, to find
out whether anyone (or anything) has accidentally removed our fifo.
chdir(); # go home
my $FIFO = ".signature";
while (1) {
unless (-p $FIFO) {
unlink $FIFO; # discard any failure, will catch later
require POSIX; # delayed loading of heavy module
POSIX::mkfifo($FIFO, 0700)
|| die "can't mkfifo $FIFO: $!";
}
# next line blocks till there's a reader
open (my $fh, ">", $FIFO) || die "can't open $FIFO: $!";
print $fh "John Smith (smith\@host.org)\n", `fortune -s`;
close($fh) || die "can't close $FIFO: $!";
sleep 2; # to avoid dup signals
}
=head1 Using open() for IPC
Perl's basic open() statement can also be used for unidirectional
interprocess communication by specifying the open mode as C<|-> or C<-|>.
Here's how to start
something up in a child process you intend to write to:
open(my $spooler, "|-", "cat -v | lpr -h 2>/dev/null")
|| die "can't fork: $!";
local $SIG{PIPE} = sub { die "spooler pipe broke" };
print $spooler "stuff\n";
close $spooler || die "bad spool: $! $?";
And here's how to start up a child process you intend to read from:
open(my $status, "-|", "netstat -an 2>&1")
|| die "can't fork: $!";
while (<$status>) {
next if /^(tcp|udp)/;
print;
}
close $status || die "bad netstat: $! $?";
Be aware that these operations are full Unix forks, which means they may
not be correctly implemented on all alien systems. See L<perlport/open>
for portability details.
In the two-argument form of open(), a pipe open can be achieved by
either appending or prepending a pipe symbol to the second argument:
open(my $spooler, "| cat -v | lpr -h 2>/dev/null")
|| die "can't fork: $!";
open(my $status, "netstat -an 2>&1 |")
|| die "can't fork: $!";
This can be used even on systems that do not support forking, but this
possibly allows code intended to read files to unexpectedly execute
programs. If one can be sure that a particular program is a Perl script
expecting filenames in @ARGV using the two-argument form of open() or the
C<< <ARGV> >> operator, the clever programmer can write something like this:

    % program f1 "cmd1|" - f2 "cmd2|" f3 < tmpfile

and no matter which sort of shell it's called from, the Perl program will
read from the file F<f1>, the process F<cmd1>, standard input (F<tmpfile>
in this case), the F<f2> file, the F<cmd2> command, and finally the F<f3>
file. Pretty nifty, eh?
You might notice that you could use backticks for much the
same effect as opening a pipe for reading:
print grep { !/^(tcp|udp)/ } `netstat -an 2>&1`;
die "bad netstatus ($?)" if $?;
While this is true on the surface, it's much more efficient to process the
file one line or record at a time because then you don't have to read the
whole thing into memory at once. It also gives you finer control of the
whole process, letting you kill off the child process early if you'd like.
Be careful to check the return values from both open() and close(). If
you're I<writing> to a pipe, you should also trap SIGPIPE. Otherwise,
think of what happens when you start up a pipe to a command that doesn't
exist: the open() will in all likelihood succeed (it only reflects the
fork()'s success), but then your output will fail--spectacularly. Perl
can't know whether the command worked, because your command is actually
running in a separate process whose exec() might have failed. Therefore,
while readers of bogus commands return just a quick EOF, writers
to bogus commands will get hit with a signal, which they'd best be prepared
to handle. Consider:
open(my $fh, "|-", "bogus") || die "can't fork: $!";
print $fh "bang\n"; # neither necessary nor sufficient
# to check print retval!
close($fh) || die "can't close: $!";
The reason for not checking the return value from print() is because of
pipe buffering; physical writes are delayed. That won't blow up until the
close, and it will blow up with a SIGPIPE. To catch it, you could use
this:
$SIG{PIPE} = "IGNORE";
open(my $fh, "|-", "bogus") || die "can't fork: $!";
print $fh "bang\n";
close($fh) || die "can't close: status=$?";
=head2 Filehandles
Both the main process and any child processes it forks share the same
STDIN, STDOUT, and STDERR filehandles. If both processes try to access
them at once, strange things can happen. You may also want to close
or reopen the filehandles for the child. You can get around this by
opening your pipe with open(), but on some systems this means that the
child process cannot outlive the parent.
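As a sketch, a child that needs a private output stream can reopen
STDOUT right after the fork; the log filename is made up:

    defined(my $pid = fork()) || die "can't fork: $!";
    if ($pid == 0) {            # child gets its own STDOUT
        open(STDOUT, ">", "/tmp/child.log")
            || die "can't reopen STDOUT: $!";
        print "this goes to the log, not the parent's terminal\n";
        exit 0;
    }
    waitpid($pid, 0);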
=head2 Background Processes
You can run a command in the background with:
system("cmd &");
The command's STDOUT and STDERR (and possibly STDIN, depending on your
shell) will be the same as the parent's. You won't need to catch
SIGCHLD because of the double-fork taking place; see below for details.
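If you need the same effect without the shell, a sketch of the double
fork itself looks like this; the intermediate child exits at once, so
init inherits the worker and the parent reaps only the short-lived
middle process:

    defined(my $pid = fork()) || die "can't fork: $!";
    if ($pid == 0) {                        # intermediate child
        defined(my $grandkid = fork()) || die "can't fork: $!";
        if ($grandkid == 0) {               # the real worker
            exec("cmd") || die "can't exec cmd: $!";
        }
        exit 0;                             # middle process exits now
    }
    waitpid($pid, 0);   # reaps the intermediate; no zombie remains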
=head2 Complete Dissociation of Child from Parent
In some cases (starting server processes, for instance) you'll want to
completely dissociate the child process from the parent. This is
often called daemonization. A well-behaved daemon will also chdir()
to the root directory so it doesn't prevent unmounting the filesystem
containing the directory from which it was launched, and redirect its
standard file descriptors from and to F so that random
output doesn't wind up on the user's terminal.
use POSIX "setsid";
sub daemonize {
chdir("/") || die "can't chdir to /: $!";
    open(STDIN,  "<", "/dev/null") || die "can't read /dev/null: $!";
    open(STDOUT, ">", "/dev/null") || die "can't write /dev/null: $!";
defined(my $pid = fork()) || die "can't fork: $!";
exit if $pid; # non-zero now means I am the parent
(setsid() != -1) || die "Can't start a new session: $!";
open(STDERR, ">&", STDOUT) || die "can't dup stdout: $!";
}
The fork() has to come before the setsid() to ensure you aren't a
process group leader; the setsid() will fail if you are. If your
system doesn't have the setsid() function, open F</dev/tty> and use the
C<TIOCNOTTY> ioctl() on it instead. See tty(4) for details.
Non-Unix users should check their C<< I<Your_OS>::Process >> module for
other possible solutions.
=head2 Safe Pipe Opens
Another interesting approach to IPC is making your single program go
multiprocess and communicate between--or even amongst--yourselves. The
two-argument form of the
open() function will accept a file argument of either C<"-|"> or C<"|-">
to do a very interesting thing: it forks a child connected to the
filehandle you've opened. The child is running the same program as the
parent. This is useful for safely opening a file when running under an
assumed UID or GID, for example. If you open a pipe I<to> minus, you can
write to the filehandle you opened and your kid will find it in I<his>
STDIN. If you open a pipe I<from> minus, you can read from the filehandle
you opened whatever your kid writes to I<his> STDOUT.
my $PRECIOUS = "/path/to/some/safe/file";
my $sleep_count;
my $pid;
my $kid_to_write;
do {
$pid = open($kid_to_write, "|-");
unless (defined $pid) {
warn "cannot fork: $!";
die "bailing out" if $sleep_count++ > 6;
sleep 10;
}
} until defined $pid;
if ($pid) { # I am the parent
print $kid_to_write @some_data;
close($kid_to_write) || warn "kid exited $?";
} else { # I am the child
# drop permissions in setuid and/or setgid programs:
        ($>, $)) = ($<, $();
        open(my $outfile, ">", $PRECIOUS)
                                || die "can't open $PRECIOUS: $!";
        while (<STDIN>) {
print $outfile; # child STDIN is parent $kid_to_write
}
close($outfile) || die "can't close $PRECIOUS: $!";
exit(0); # don't forget this!!
}
Another common use for this construct is when you need to execute
something without the shell's interference. With system(), it's
straightforward, but you can't use a pipe open or backticks safely.
That's because there's no way to stop the shell from getting its hands on
your arguments. Instead, use lower-level control to call exec() directly.
Here's a safe backtick or pipe open for read:
my $pid = open(my $kid_to_read, "-|");
defined($pid) || die "can't fork: $!";
if ($pid) { # parent
    while (<$kid_to_read>) {
# do something interesting
}
close($kid_to_read) || warn "kid exited $?";
    } else {                # child
        ($>, $)) = ($<, $();    # drop privileges in suid programs
        exec("program", "arg1", "arg2")
                            || die "can't exec program: $!";
        # NOTREACHED
    }

It is very easy to deadlock a process using this form of open(), or
indeed with any use of pipe() with multiple subprocesses. The example
above is "safe" because it is simple and calls exec(). See
L</"Avoiding Pipe Deadlocks"> for general safety principles, but there
are extra gotchas with Safe Pipe Opens.
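For completeness, the corresponding safe pipe open for writing mirrors
this with C<"|-">; here C<$program>, C<@options>, C<@args>, and C<@data>
are placeholders:

    my $pid = open(my $kid_to_write, "|-");
    defined($pid)           || die "can't fork: $!";

    $SIG{PIPE} = sub { die "whoops, $program pipe broke" };

    if ($pid) {             # parent
        print $kid_to_write @data;
        close($kid_to_write) || warn "kid exited $?";
    } else {                # child
        ($>, $)) = ($<, $();    # suid programs only
        exec($program, @options, @args)
                            || die "can't exec program: $!";
        # NOTREACHED
    }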
In particular, if you opened the pipe using C<open $fh, "|-">, then you
cannot simply use close() in the parent process to close an unwanted
writer. Consider this code:
my $pid = open(my $writer, "|-"); # fork open a kid
defined($pid) || die "first fork failed: $!";
if ($pid) {
if (my $sub_pid = fork()) {
defined($sub_pid) || die "second fork failed: $!";
close($writer) || die "couldn't close writer: $!";
# now do something else...
}
else {
# first write to $writer
# ...
# then when finished
close($writer) || die "couldn't close writer: $!";
exit(0);
}
}
else {
# first do something with STDIN, then
exit(0);
}
In the example above, the true parent does not want to write to the $writer
filehandle, so it closes it. However, because $writer was opened using
C<open $fh, "|-">, it has a special behavior: closing it calls
waitpid() (see L<perlfunc/waitpid>), which waits for the subprocess
to exit. If the child process ends up waiting for something happening
in the section marked "do something else", you have deadlock.
This can also be a problem with intermediate subprocesses in more
complicated code, which will call waitpid() on all open filehandles
during global destruction--in no predictable order.
To solve this, you must manually use pipe(), fork(), and the form of
open() which sets one file descriptor to another, as shown below:
pipe(my $reader, my $writer) || die "pipe failed: $!";
my $pid = fork();
defined($pid) || die "first fork failed: $!";
if ($pid) {
close $reader;
if (my $sub_pid = fork()) {
defined($sub_pid) || die "second fork failed: $!";
close($writer) || die "can't close writer: $!";
}
else {
# write to $writer...
# ...
# then when finished
close($writer) || die "can't close writer: $!";
exit(0);
}
# write to $writer...
}
else {
        open(STDIN, "<&", $reader) || die "can't reopen STDIN: $!";
        close($writer)             || die "can't close writer: $!";
        # do something...
        exit(0);
    }

Since Perl 5.8.0, you can also use the list form of C<open> for pipes.
This is preferred when you wish to avoid having the shell interpret
metacharacters that may be in your command string.
So for example, instead of using:
open(my $ps_pipe, "-|", "ps aux") || die "can't open ps pipe: $!";
One would use either of these:
open(my $ps_pipe, "-|", "ps", "aux")
|| die "can't open ps pipe: $!";
my @ps_args = qw[ ps aux ];
open(my $ps_pipe, "-|", @ps_args)
|| die "can't open @ps_args|: $!";
Because there are more than three arguments to open(), it forks the ps(1)
command I<without> spawning a shell, and reads its standard output via the
C<$ps_pipe> filehandle. The corresponding syntax to I<write> to command
pipes is to use C<"|-"> in place of C<"-|">.
This was admittedly a rather silly example, because you're using string
literals whose content is perfectly safe. There is therefore no cause to
resort to the harder-to-read, multi-argument form of pipe open(). However,
whenever you cannot be assured that the program arguments are free of shell
metacharacters, the fancier form of open() should be used. For example:
my @grep_args = ("egrep", "-i", $some_pattern, @many_files);
open(my $grep_pipe, "-|", @grep_args)
|| die "can't open @grep_args|: $!";
Here the multi-argument form of pipe open() is preferred because the
pattern and indeed even the filenames themselves might hold metacharacters.
=head2 Avoiding Pipe Deadlocks
Whenever you have more than one subprocess, you must be careful that each
closes whichever half of any pipes created for interprocess communication
it is not using. This is because any child process reading from the pipe
and expecting an EOF will never receive it, and therefore never exit. A
single process closing a pipe is not enough to close it; the last process
with the pipe open must close it for it to read EOF.
Certain built-in Unix features help prevent this most of the time. For
instance, filehandles have a "close on exec" flag, which is set I<en masse>
under control of the C<$^F> variable. This is so any filehandles you
didn't explicitly route to the STDIN, STDOUT or STDERR of a child
I<program> will be automatically closed.
Always explicitly and immediately call close() on the writable end of any
pipe, unless that process is actually writing to it. Even if you don't
explicitly call close(), Perl will still close() all filehandles during
global destruction. As previously discussed, if those filehandles have
been opened with Safe Pipe Open, this will result in calling waitpid(),
which may again deadlock.
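Putting the discipline together in one sketch: each process immediately
closes the pipe end it isn't using, so the reader sees EOF as soon as
the lone writer closes its end:

    pipe(my $reader, my $writer) || die "pipe failed: $!";
    defined(my $pid = fork())    || die "can't fork: $!";
    if ($pid == 0) {             # child reads only
        close($writer) || die "child can't close writer: $!";
        while (<$reader>) { print }
        exit 0;                  # EOF arrives once parent closes $writer
    }
    close($reader) || die "parent can't close reader: $!";
    print $writer "some data\n";
    close($writer) || die "parent can't close writer: $!";
    waitpid($pid, 0);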
=head2 Bidirectional Communication with Another Process
While this works reasonably well for unidirectional communication, what
about bidirectional communication? The most obvious approach doesn't work:
# THIS DOES NOT WORK!!
open(my $prog_for_reading_and_writing, "| some program |")
If you forget to C<use warnings>, you'll miss out entirely on the
helpful diagnostic message:

    Can't do bidirectional pipe at -e line 1