Originally Published: Thursday, 23 August 2001 | Author: Subhasish Ghosh |
Understanding Linux Kernel Inter-process Communication: Pipes, FIFO & IPC (Part 1)
In this article, part one of a two-part article, the prolific and talented Subhasish returns to give Linux.com readers another trip into understanding Linux kernel behavior and programming. There's a lot of information covered here for free, so hang up your hat and have fun. Part 2 of Understanding Linux Kernel Inter-process Communication will be published tomorrow.
After "Understanding Re-entrant Kernels" and "Linux Kernel Synchronization", this forms the third article included within the "Linux Kernel Series" being published at Linux.com. Every reader is requested to read the first two articles of the series, because this article explores in more depth a few kernel features already introduced in the earlier articles. In this article the following topics will be covered:
Part two of this article, to be published tomorrow, will cover:
- An Overview of Process Communication in Linux.
- An Overview of Pipes, FIFOs and System V IPC.
- System V (AT&T System V.2 release of UNIX) IPC Resources: Semaphores, Message Queues & Shared Memory segments (implemented in terms of GNU/Linux).
- A few code examples to chew on (for the brave-hearted!).
Please Note:
- For explanation of words such as "kernel control paths", "semaphores", "race conditions" and related features, please refer to earlier articles in the series.
- All readers must note: though this article explores the depths of the Linux kernel, no discussion would be complete without covering the IPC features and facilities of the AT&T System V release of UNIX. Thus, several System V UNIX features will be discussed too.
- I have used Red Hat Linux 7.1, with Linux kernel 2.4.2-2, for compiling all the code included.
In earlier articles we have already encountered some exciting features of the Linux Kernel. This article explains how User Mode processes can synchronize themselves and exchange data. We have already covered a lot of synchronization topics, especially in "Linux Kernel Synchronization", but as readers must have noticed, the main protagonist of the story there was a "Kernel Control Path" acting within the Linux Kernel, NOT a User Mode program. Thus, we are now ready to discuss synchronization of User Mode processes. These processes rely on the Linux Kernel to synchronize themselves and exchange data.
First of all, let's understand the actual meaning of IPC. IPC is an abbreviation that stands for Inter-process Communication. It denotes a set of system calls that allows a User Mode process to:
- synchronize itself with other processes by means of semaphores;
- send messages to other processes by means of message queues;
- share a memory area with other processes.
IPC was introduced in a development UNIX variant called "Columbus Unix" and later adopted by AT&T's System III. It is now commonly found in most UNIX systems, including GNU/Linux. System V IPC is more heavyweight than BSD mmap, and provides three methods of communication: message queues, semaphores, and shared segments. Like BSD mmap, System V IPC uses files to identify shared segments. Unlike BSD, System V uses these files only for naming; their contents have nothing to do with the initialization of the shared segment. IPC data structures are created dynamically when a process requests an IPC resource, i.e. a semaphore, a message queue, or a shared memory segment. All of these IPC resources will be discussed in detail later on. Before we dive deep into the subject matter, there are a few things that I would like to explain at the very beginning. They are as follows:
Application programmers have a variety of needs that call for different communication mechanisms. Some of the basic mechanisms that UNIX systems, GNU/Linux in particular, have to offer are:
- Pipes
- FIFOs (named pipes)
- System V IPC: semaphores, message queues and shared memory segments
Another commonly used data communication mechanism, network "sockets", will NOT be discussed here, since it would require a long discussion of networking. In this article, we will explore all the above-mentioned IPC mechanisms and the System V IPC facilities at our disposal.
In this section, I would like to discuss in minute detail two inter-process communication mechanisms: "Pipes" first, and then "FIFOs". Readers should try to note the difference between an "Inter-process Communication Mechanism" and an "Inter-process Communication Resource/Facility", though it is very difficult to draw a line between them. Pipes and FIFOs are "Inter-process Communication Mechanisms", while semaphores, message queues and shared memory segments are "Inter-process Communication Resources". The best way to remember the difference is: Inter-process Communication Mechanisms emphasize "how and why" data communication occurs between two User Mode processes, while Inter-process Communication Resources serve the same objective in a more polished manner, by implementing the functionality through programming interfaces (and, most of the time, rather complex ones!). This is why a discussion of pipes and FIFOs is incomplete without a discussion of semaphores, message queues and shared memory segments.
a) Pipes: Let's start with Pipes first of all. Pipes are an inter-process communication mechanism that is provided in all flavors of UNIX. A "pipe" defines one-way flow of data between processes. All data written to a pipe by a program is routed by the Kernel to another process, which can then access it and read the data. In UNIX command shells, pipes can be created by means of the '|' operator. For example, consider this shell command:
# cmd1 | cmd2
The shell arranges the standard input and output of the two commands as follows:
- The standard input of cmd1 comes from the terminal keyboard.
- The standard output of cmd1 is fed to cmd2 as its standard input.
- The standard output of cmd2 is connected to the terminal screen.
What the shell does here is reconnect the standard input and output streams so that data flows from the keyboard input through the two commands and is then output to the screen. This is how pipes function. Okay, now that we know what exactly a pipe is and have an idea how it works, the next big question is: How on earth do we create a "Pipe" programmatically on a Unix system?
On Unix systems, pipes may be considered open files that have no corresponding image in the mounted filesystems. A new pipe can be created by means of the pipe() system call, which returns a pair of file descriptors. The process can read from the pipe by using the read() system call with the first file descriptor, and write into the pipe by using the write() system call with the second file descriptor. Now, pipes can be implemented in different ways on different systems. POSIX defines only "half-duplex" pipes: the pipe() system call does return two file descriptors, but each process must close one before using the other. Thus, if a two-way data flow is required, one must use two different pipes by invoking the pipe() system call twice. This is how "half-duplex" pipes work. On other Unix systems, such as System V Release 4 (SVR4) Unix, pipes are implemented in a "full-duplex" manner, which allows both descriptors to be written into and read from at the same time. GNU/Linux implements pipes in yet another manner: on Linux systems (that is, GNU systems with Linux as the core kernel), a pipe's file descriptors are one-way, but it is NOT necessary to close one of them before using the other. In this article, we will be dealing with POSIX-style "half-duplex" pipes. (Reason: Linux uses "half-duplex" pipes, but in a special way.) Okay, enough said about "pipes"! Let's get going and see how we can create pipes on Unix systems (POSIX style) programmatically. The pipe function has the prototype:
#include <unistd.h>
int pipe (int file_descriptor[2]);
pipe is passed (a pointer to) an array of two integer file descriptors. It fills the array with two new file descriptors and returns zero. On failure, the pipe system call returns -1. The errors defined in the Linux man pages are:
EMFILE: Too many file descriptors are in use by the process.
ENFILE: The system file table is full.
EFAULT: The file descriptor array is not valid.
The point to note here is that the two file descriptors returned, though distinct, are connected in a special way. Any data written to file_descriptor[1] can be read back from file_descriptor[0]. The data is processed on a first in, first out basis, usually referred to as FIFO. This means that if you write the bytes 2, 3, 4 to file_descriptor[1], reading from file_descriptor[0] will produce exactly 2, 3, 4. Readers must note that this is entirely different from the operation of a stack, which works on a last in, first out (LIFO) basis. The real advantage of pipes comes when one wishes to pass data between two processes. The program given below creates a pipe using the pipe system call, and then uses the fork call to create a new process. If the fork call is successful, the parent writes data into the pipe, while the child reads data from the pipe. Both parent and child processes exit after a single write and read. Readers should note that if the parent exits before the child, they might see the shell prompt between the two outputs. The source code for our program prog1 is as given below:
/* Pipes across a fork:
   By: Subhasish Ghosh
   Date: 15th August 2001
   Place: Calcutta, WB, India
   E-mail: subhasish_ghosh@linuxmail.org
*/

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main()
{
    int data_processed;
    int file_pipes[2];
    const char some_data[] = "123";
    char buffer[BUFSIZ + 1];
    pid_t fork_result;

    memset(buffer, '\0', sizeof(buffer));

    if (pipe(file_pipes) == 0)
    {
        fork_result = fork();
        if (fork_result == -1)
        {
            fprintf(stderr, "Fork Failure");
            exit(EXIT_FAILURE);
        }
        if (fork_result == 0)    /* child: read from the pipe */
        {
            data_processed = read(file_pipes[0], buffer, BUFSIZ);
            printf("Read %d bytes: %s\n", data_processed, buffer);
            exit(EXIT_SUCCESS);
        }
        else                     /* parent: write into the pipe */
        {
            data_processed = write(file_pipes[1], some_data, strlen(some_data));
            printf("Wrote %d bytes\n", data_processed);
        }
    }
    exit(EXIT_SUCCESS);
}
After typing it in, save the file and compile it using:
# cc -o prog1 prog1.c
and then execute it using:
# ./prog1
The output is as given below:
Wrote 3 bytes
Read 3 bytes: 123
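Recall that POSIX half-duplex pipes require two pipe() calls when a two-way data flow is needed. Here is a minimal sketch of that two-pipe pattern (the names parent_to_child and child_to_parent, and the "ping"/"pong" messages, are purely illustrative):

/* Two-way communication using two half-duplex pipes: a sketch.
   Pipe and message names here are illustrative only. */
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>

int main()
{
    int parent_to_child[2];
    int child_to_parent[2];
    char buffer[32];
    pid_t fork_result;

    memset(buffer, '\0', sizeof(buffer));

    if (pipe(parent_to_child) == -1 || pipe(child_to_parent) == -1)
    {
        fprintf(stderr, "Pipe failure\n");
        exit(EXIT_FAILURE);
    }

    fork_result = fork();
    if (fork_result == -1)
    {
        fprintf(stderr, "Fork failure\n");
        exit(EXIT_FAILURE);
    }
    if (fork_result == 0)       /* child: read request, send reply */
    {
        close(parent_to_child[1]);    /* close unused write end */
        close(child_to_parent[0]);    /* close unused read end  */
        read(parent_to_child[0], buffer, sizeof(buffer) - 1);
        printf("Child read: %s\n", buffer);
        write(child_to_parent[1], "pong", 4);
        exit(EXIT_SUCCESS);
    }
    else                        /* parent: send request, read reply */
    {
        close(parent_to_child[0]);    /* close unused read end  */
        close(child_to_parent[1]);    /* close unused write end */
        write(parent_to_child[1], "ping", 4);
        read(child_to_parent[0], buffer, sizeof(buffer) - 1);
        printf("Parent read: %s\n", buffer);
        wait(NULL);
    }
    exit(EXIT_SUCCESS);
}

Notice that each process closes the pipe ends it does not use; this keeps the descriptor bookkeeping clean and lets a reader see end-of-file once the peer's write end is closed.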
Readers should note that when a program creates a new process using the fork system call, file descriptors that were previously open remain open. By creating a pipe in the original process and then forking to create a new process, we can pass data from one process to the other down the pipe. This is how an ordinary pipe works.
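This file descriptor inheritance is, incidentally, exactly the trick the shell uses to plumb cmd1 | cmd2 as shown earlier: combine pipe() and fork() with dup2() to splice the pipe into standard input and output. A minimal sketch, with ls and wc -l standing in for cmd1 and cmd2:

/* How a shell might plumb "cmd1 | cmd2" (here: ls | wc -l):
   pipe() + fork() + dup2() + exec. A sketch, not a real shell. */
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/wait.h>

int main()
{
    int fd[2];

    if (pipe(fd) == -1)
    {
        fprintf(stderr, "Pipe failure\n");
        exit(EXIT_FAILURE);
    }

    if (fork() == 0)            /* first child runs cmd1 (ls) */
    {
        dup2(fd[1], STDOUT_FILENO);   /* standard output -> pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        exit(EXIT_FAILURE);     /* only reached if exec fails */
    }
    if (fork() == 0)            /* second child runs cmd2 (wc -l) */
    {
        dup2(fd[0], STDIN_FILENO);    /* standard input <- pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        exit(EXIT_FAILURE);
    }

    close(fd[0]);               /* parent closes both ends so that */
    close(fd[1]);               /* the reader sees end-of-file     */
    wait(NULL);
    wait(NULL);
    exit(EXIT_SUCCESS);
}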
Let's now take a detailed look at the pipe data structures in GNU/Linux. Thinking at the system call level, once a pipe has been created, a process uses the read() and write() VFS (Virtual Filesystem) system calls to access it. Therefore, for each pipe, the Linux kernel creates an inode object plus two file objects, one for reading and the other for writing. When a process wants to read from or write to the pipe (NOT both together, in POSIX and Linux), it must use the proper file descriptor. When an inode object refers to a pipe, its u field contains a pipe_inode_info data structure, which has the following fields:
Type | Field | Description
char * | base | Address of the kernel buffer
unsigned int | start | Read position in the kernel buffer
unsigned int | lock | Locking flag used for exclusive access
struct wait_queue * | wait | Pipe/FIFO wait queue
unsigned int | readers | Flag for reading processes
unsigned int | writers | Flag for writing processes
unsigned int | rd_openers | Used while opening a FIFO for reading
unsigned int | wr_openers | Used while opening a FIFO for writing
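Rendered as C, the table corresponds roughly to the declaration below. Take this as a simplified sketch of the 2.2/early-2.4-era layout described here; the exact definition lives in <linux/pipe_fs_i.h> and differs between kernel versions.

/* Simplified sketch of pipe_inode_info, mirroring the table above.
   Not the literal definition from any one kernel release. */
struct wait_queue;              /* kernel wait-queue type (opaque here) */

struct pipe_inode_info {
    char *base;                 /* address of the kernel buffer          */
    unsigned int start;         /* read position in the kernel buffer    */
    unsigned int lock;          /* locking flag for exclusive access     */
    struct wait_queue *wait;    /* pipe/FIFO wait queue                  */
    unsigned int readers;       /* flag (pipes) / counter (FIFOs)        */
    unsigned int writers;       /* flag (pipes) / counter (FIFOs)        */
    unsigned int rd_openers;    /* used while opening a FIFO for reading */
    unsigned int wr_openers;    /* used while opening a FIFO for writing */
};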
Also, each pipe has its own pipe buffer. A 'pipe buffer' may be defined as a single page frame containing the data written into the pipe and not yet read. The address of this page frame is stored in the 'base' field of the pipe_inode_info data structure. Okay, a question that now comes up is: what about 'race conditions'? (For the definition of this term and other associated terms, readers are requested to read the earlier articles in the series.) How does a pipe avoid race conditions on the pipe's data structures? To avoid them, the Linux kernel forbids concurrent accesses to the pipe buffer. This brings into play the 'lock' field in the pipe_inode_info data structure. Is that all? No, definitely NOT. The lock field alone is not enough to handle complex situations. POSIX comes to the rescue (like a 'Hero' in a movie who saves the day!): the POSIX standard allows a writing process to be suspended when the pipe is full, so that readers can empty the buffer. These requirements are met by utilizing an additional i_atomic_write semaphore found in the inode object, which suspends a write operation while the buffer is full, until readers have made room. The process that issues a pipe() system call is initially the only process that can access the new pipe, both for reading and writing. To represent that the pipe has both a reader and a writer, the 'readers' and 'writers' fields of the pipe_inode_info data structure are initialized to 1. It is vital that all readers (please note: I mean all the people reading this article!) note that the "readers" and "writers" fields in the pipe_inode_info data structure have a different functionality when applied to pipes and to FIFOs: they act as "flags" when applied to pipes, and as "counters", NOT "flags", when associated with FIFOs. Now that we have seen what a "pipe" is, what it does and how it operates, including a sample program, let's look into pipes in more minute detail.
Creating and Destroying a Pipe: A pipe is implemented as a set of VFS objects. The point to note is: a pipe remains in the system as long as some process owns a file descriptor referring to it. When the low-level pipe() system call is used, it is serviced by the sys_pipe() function, which in turn invokes the do_pipe() function. In order to create a new pipe, the do_pipe() function performs the following five operations:
1. Allocates a file object and a file descriptor for the read channel of the pipe, sets the "flag" field of the file object to O_RDONLY, and then initializes the f_op field with the address of the read_pipe_fops table.
2. Allocates a file object and a file descriptor for the write channel of the pipe, sets the "flag" field of the file object to O_WRONLY, and then finally initializes the f_op field with the address of the write_pipe_fops table.
3. Invokes the get_pipe_inode() function, which allocates and initializes an inode object for the pipe. get_pipe_inode() also allocates a page frame for the pipe buffer and stores its address in the "base" field of the pipe_inode_info data structure (mentioned above).
4. Allocates a dentry object and uses it to link together the two file objects and the inode object.
5. Returns the two file descriptors to the User Mode process.
So, every time one issues a pipe() system call, these five steps are carried out automatically, thereby creating a new pipe. Now, let's look at how a pipe can be destroyed. Whenever a process invokes the close() system call on a file descriptor associated with a pipe, the Linux kernel executes the fput() function on the corresponding file object, which decrements the usage counter. If the counter becomes zero, the function invokes the 'release' method of the file operations. Both the pipe_read_release() and pipe_write_release() functions are used to implement the 'release' method of the pipe's file objects. They set to 0 the 'readers' and 'writers' fields, respectively, of the pipe_inode_info data structure. Then each function invokes the pipe_release() function. This function, when invoked, wakes up any processes sleeping in the pipe's wait queue so that they can recognize the change in pipe state. It then checks whether both the 'readers' and 'writers' fields of the pipe_inode_info data structure are equal to 0; only in this case does it release the page frame containing the pipe buffer. So, this is the summary of all the various things that take place within the Linux kernel every time a pipe is created and later destroyed. Interesting, right? Let's now move on to the next interesting section, FIFOs.
b) FIFOs: FIFO files are similar to device files: they have a disk inode, but they do not make use of data blocks. FIFOs are similar to unnamed pipes in that they also include a kernel buffer to temporarily store the data exchanged by two or more processes. Now the question is: how do we create a "named pipe"? We can create named pipes from the command line and from within a program, but the creation of named pipes can be a bit confusing. Don't worry, read on. A process creates a FIFO by issuing a mknod() system call, passing to it as parameters the pathname of the new FIFO and the value S_IFIFO (0x1000) logically ORed with the permission bit mask of the new file. But there's a problem with this: the mknod() system call is NOT in the X/Open command list, so it may not be available on all UNIX systems. Thus, POSIX introduces a call named mkfifo(), specifically designed to create a FIFO. This call is implemented in GNU/Linux, as in System V Release 4 (SVR4), as a C library function that invokes mknod(). The preferred command-line method is to use:
# mkfifo filename
Note: all readers should note that some older versions of UNIX only have the mknod() command. X/Open Issue 4 Version 2 has the mknod function call, but NOT the command. GNU/Linux supports both mknod and mkfifo. (See! I always say: Linux is the best!) Let's now take a look at the functions. These are:
#include <sys/types.h>
#include <sys/stat.h>
int mkfifo(const char *filename, mode_t mode);
int mknod(const char *filename, mode_t mode | S_IFIFO, (dev_t)0);
Since I like dealing with POSIX standards, and also as Linux supports mkfifo, we will be using the mkfifo() function in this article from now on, instead of mknod(). Let's now create a 'named pipe' and see what's in store for us. So, open vi, type in this source code, and save the file. The code:
/* Creating a Named Pipe:
   By: Subhasish Ghosh
   Date: August 17th 2001
   Place: Calcutta, WB, INDIA
   E-mail: subhasish_ghosh@linuxmail.org
*/

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main()
{
    int res = mkfifo("/tmp/subhasish", 0777);
    if (res == 0)
        printf("FIFO created successfully\n");
    exit(EXIT_SUCCESS);
}
When we compile and run this program, it creates a named pipe for us (look out for the output "FIFO created successfully" on the screen). We can look for the pipe with the command:
# ls -lF /tmp/subhasish
prwxr-xr-x 1 root root 0 Dec 10 14:55 /tmp/subhasish |
Notice that the first character is a 'p', indicating a pipe. The '|' symbol at the end is added by the ls command's -F option and also indicates a pipe. The program uses the mkfifo() function to create a special file. Although we have asked for a mode of 0777, this is altered by the user mask (umask) setting of 022, and thus the resulting file has mode 755. To remove the FIFO, just issue an rm command or use the unlink() system call from within a program.
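Incidentally, if you want the FIFO created with exactly the mode you ask for, you can clear the umask first, since the mode is applied as (mode & ~umask). A minimal sketch (the pathname here is purely illustrative):

/* Creating a FIFO with exactly the requested permissions by
   clearing the umask first. Pathname is illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

int main()
{
    umask(0);                             /* mode now applied unmodified */
    if (mkfifo("/tmp/subhasish_0777", 0777) == -1)
        perror("mkfifo");
    exit(EXIT_SUCCESS);
}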
Unlike a pipe created with the pipe() system call, a FIFO exists as a named file, not as an open file descriptor. Thus it must be opened before it can be read from or written to. One opens and closes a FIFO using the open() and close() functions, but with some additional functionality: the open() call is passed the pathname of the FIFO, rather than that of a regular file. Now, for opening a FIFO, we have four possible cases. Let's look into the matter in a bit more detail.
If we wish to pass data in both directions between programs, it's much better to use either a pair of FIFOs or pipes, one for each direction. The main difference between opening a FIFO and a regular file is the use of the open_flag (the second parameter to open) with the option O_NONBLOCK. There exist four possible (and definitely legal) combinations of O_RDONLY, O_WRONLY and the O_NONBLOCK flags. Let's see what each combination has to offer:
1) open (const char *path, O_RDONLY);
2) open (const char *path, O_RDONLY | O_NONBLOCK);
3) open (const char *path, O_WRONLY);
4) open (const char *path, O_WRONLY | O_NONBLOCK);
In case 1), the open call will block; in other words, it will not return until a process opens the same FIFO for writing.
In case 2), the open call will return immediately, even if the FIFO has not been opened for writing by any process.
In case 3), the open call will block until a process opens the same FIFO for reading.
In case 4), the open call will return immediately, but if no process has the FIFO open for reading, open will return an error, -1.
At this point, all readers should notice a very important thing: the asymmetry between the use of O_NONBLOCK with O_RDONLY and with O_WRONLY. A non-blocking open for writing fails if no process has the pipe open for reading, but a non-blocking open for reading never fails. So, now that we have seen what a FIFO is, what it does, how it operates and the associated functions, let's sum up this section by getting some code up and running.
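But first, a tiny sketch demonstrating the asymmetry just described. Run it while no other process has the FIFO open; it assumes the /tmp/subhasish FIFO created earlier exists. On Linux, the failed write-side open sets errno to ENXIO.

/* Demonstrates the O_NONBLOCK open asymmetry on a FIFO.
   Assumes /tmp/subhasish exists and no other process has it open. */
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>

#define FIFO_NAME "/tmp/subhasish"

int main()
{
    int wr, rd;

    /* non-blocking open for writing: fails, since there is no reader */
    wr = open(FIFO_NAME, O_WRONLY | O_NONBLOCK);
    printf("O_WRONLY | O_NONBLOCK returned %d\n", wr);
    if (wr == -1)
        perror("open for writing");       /* typically ENXIO */

    /* non-blocking open for reading: succeeds even with no writer */
    rd = open(FIFO_NAME, O_RDONLY | O_NONBLOCK);
    printf("O_RDONLY | O_NONBLOCK returned %d\n", rd);
    if (rd != -1)
        (void)close(rd);

    exit(EXIT_SUCCESS);
}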
A few things need to be mentioned here. Using the O_NONBLOCK mode affects how read() and write() calls behave on FIFOs. A read on an empty blocking FIFO will wait until some data can be read. A read on an empty non-blocking FIFO, by contrast, fails with -1 (errno set to EAGAIN) while some process has the FIFO open for writing, and returns 0 bytes once no writer remains. A write on a full blocking FIFO will wait until the data can be written. A write on a FIFO that can't accept all of the bytes being written will, in most cases, write part of the data if the request is for more than PIPE_BUF bytes; requests of PIPE_BUF bytes or less are atomic. Another very important point to note here is the size of a FIFO: there is a system-imposed limit on how much data can be inside a FIFO at a given instant. This is the #define PIPE_BUF, usually found in limits.h. On Linux systems, this is commonly 4096 bytes; on other UNIX systems this value may be different. To illustrate how unrelated processes can communicate using named pipes, we create two programs, ghosh1.c and ghosh2.c. The first program is the "producer", because it creates the pipe, if required, and then writes data to it as quickly as possible. The second program is the "consumer": it reads and discards data from the FIFO. So, get moving, and create these two files as given below.
ghosh1.c : The "producer":
/* Inter-process Communication with FIFOs:
   Note: For illustration purposes, we don't mind what the data is,
   so we don't bother to initialize buffer.
   By: Subhasish Ghosh
   Date: August 17th 2001
   Place: Calcutta, WB, India
   E-mail: subhasish_ghosh@linuxmail.org
*/

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/types.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/subhasish"
#define BUFFER_SIZE PIPE_BUF
#define TEN_MEG (1024 * 1024 * 10)

int main()
{
    int pipe_fd;
    int res;
    int open_mode = O_WRONLY;
    int bytes_sent = 0;
    char buffer[BUFFER_SIZE + 1];

    /* create the FIFO if it does not already exist */
    if (access(FIFO_NAME, F_OK) == -1)
    {
        res = mkfifo(FIFO_NAME, 0777);
        if (res != 0)
        {
            fprintf(stderr, "Could not create fifo %s\n", FIFO_NAME);
            exit(EXIT_FAILURE);
        }
    }

    printf("Process %d opening FIFO O_WRONLY\n", getpid());
    pipe_fd = open(FIFO_NAME, open_mode);
    printf("Process %d result %d\n", getpid(), pipe_fd);

    if (pipe_fd != -1)
    {
        while (bytes_sent < TEN_MEG)
        {
            res = write(pipe_fd, buffer, BUFFER_SIZE);
            if (res == -1)
            {
                fprintf(stderr, "Write error on pipe\n");
                exit(EXIT_FAILURE);
            }
            bytes_sent += res;
        }
        (void)close(pipe_fd);
    }
    else
    {
        exit(EXIT_FAILURE);
    }

    printf("Process %d finished\n", getpid());
    exit(EXIT_SUCCESS);
}
ghosh2.c : The "consumer":
/* The consumer program for program ghosh1:
   By: Subhasish Ghosh
   Date: August 17th 2001
   Place: Calcutta, WB, India
   E-mail: subhasish_ghosh@linuxmail.org
*/

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/types.h>
#include <sys/stat.h>

#define FIFO_NAME "/tmp/subhasish"
#define BUFFER_SIZE PIPE_BUF

int main()
{
    int pipe_fd;
    int res;
    int open_mode = O_RDONLY;
    char buffer[BUFFER_SIZE + 1];
    int bytes_read = 0;

    memset(buffer, '\0', sizeof(buffer));

    printf("Process %d opening FIFO O_RDONLY\n", getpid());
    pipe_fd = open(FIFO_NAME, open_mode);
    printf("Process %d result %d\n", getpid(), pipe_fd);

    if (pipe_fd != -1)
    {
        /* read until the writer closes its end (read returns 0) */
        do
        {
            res = read(pipe_fd, buffer, BUFFER_SIZE);
            bytes_read += res;
        } while (res > 0);
        (void)close(pipe_fd);
    }
    else
    {
        exit(EXIT_FAILURE);
    }

    printf("Process %d finished, %d bytes read\n", getpid(), bytes_read);
    exit(EXIT_SUCCESS);
}
After creating the files, save them and compile them as usual. To run the pair, start the producer in the background and use the "time" command to time the reader, so that both programs run at the same time. We get an output similar to this:
# ./ghosh1 &
[1] 380
Process 380 opening FIFO O_WRONLY
# time ./ghosh2
Process 382 opening FIFO O_RDONLY
Process 380 result 3
Process 382 result 3
Process 380 finished
Process 382 finished, 10485760 bytes read
real 0m0.049s
user 0m0.010s
sys 0m0.040s
[1]+ Done ghosh1
As you might notice, both programs use the FIFO in blocking mode. We start ghosh1 (the writer/producer) first, which blocks, waiting for some reader to open the FIFO. When ghosh2 (the consumer) starts, the writer is unblocked and starts writing data to the pipe. At the same time, the reader starts reading data from the pipe. This simple pair of programs illustrates the immense potential and beauty of FIFOs.
Okay, now I have one question for all the readers of this article. It is simple, but I am asking it so that everyone out there can relate the different facts about a FIFO. My question is: when a process opens a FIFO, what role does the VFS (Virtual Filesystem) play at that time? The answer is: when a process opens a FIFO, the VFS performs the same operations as it does for device files. The inode object associated with the opened FIFO is initialized by a filesystem-dependent read_inode superblock method, which always checks whether the inode on disk represents a FIFO. The code snippet may be represented as below:
if ((inode->i_mode & 00170000) == S_IFIFO)
    init_fifo(inode);
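Userspace code can perform the equivalent test on a pathname using stat() and the standard S_ISFIFO macro. A minimal sketch:

/* Userspace analogue of the kernel's mode check: ask whether a
   given path names a FIFO, via stat() and S_ISFIFO. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    struct stat st;

    if (argc != 2 || stat(argv[1], &st) == -1)
    {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    /* S_ISFIFO(m) is equivalent to ((m & S_IFMT) == S_IFIFO) */
    printf("%s is %sa FIFO\n", argv[1], S_ISFIFO(st.st_mode) ? "" : "not ");
    exit(EXIT_SUCCESS);
}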
The init_fifo() function sets the i_op field of the inode object to the address of the fifo_inode_operations table. The function also initializes to 0 all the fields of the pipe_inode_info data structure stored inside the inode object (for the pipe_inode_info data structure, please refer to the earlier section where I explained the structure while discussing pipes). I also mentioned, while explaining pipes, the peculiar behavior of the fields of the pipe_inode_info data structure: they act as "flags" when dealing with pipes, and as "counters" when dealing with FIFOs. The code snippet mentioned above, which sets up the "counter" behavior of the fields within the pipe_inode_info data structure, is the justification of what I mentioned earlier. This is the beauty of the Linux kernel, where different entities play different roles when the time comes for them to do so. And thus it takes a lot of passion and hard work on the part of kernel hackers to identify when and why they act in such a manner. Everything we have covered here in this section on FIFOs is nearly nothing compared to all that really exists. Anyway, I really hope everyone had a lot of fun reading it (because I had a lot of fun writing it!). So, after dealing with pipes and FIFOs, it's high time that we looked into the different System V IPC facilities available. So, let's move on to the next section and have some more fun! Hmmmm...
About the Author: My name is Subhasish Ghosh. I'm 20 years old, currently a computer-systems engineering student in India; a Microsoft Certified Professional (MCP), MCSD, MCP certified on NT 4.0; recently completed Red Hat Linux Certified Engineer (RHCE) training and cleared the Brainbench.com "Linux General Administration" certification exam. I have been installing, configuring and developing on the Linux platform for a long time now, and have programmed using C, C++, VC++, VB, COM, DCOM, MFC, ATL 3.0, PERL, Python, POSIX Threads and Linux kernel programming; I currently hold a total of 8 international industry certifications. For a list of all my articles at Linux.com (and other sites), click here.