David McCracken
Linux App Program Examples (updated 2016.07.13)
To learn Linux I set a course for myself: first bash, then grub and other installation topics, and then C/C++ programming in the Linux environment. I was already an experienced C/C++ programmer, but I didn't want to simply recycle my old code, so I made up some new assignments. These are all console applications. Some are general and would compile and run equally well in Windows. Others are specific to Linux.
I used GCC to build the programs and GDB to debug. Except for the Linux IPC (Inter-Process Communication) programs, my primary goal was to learn GDB, as I already had considerable experience with GCC in embedded programming. I used GDB from the command line, from within emacs, and as an Eclipse plugin. The emacs-gdb combination is by far the most interesting environment. It starts up in seconds and has the potential to be a more all-encompassing environment than Eclipse.
The following examples represent less than 10% of my Linux-specific network and system programming assignments. They are not commercial programs but my own self-tutorials. Consequently, they are small, self-contained, and don't require much context explanation. I have moved some of the comment blocks from the code into text paragraphs for clarity. In addition to the select, sockets, service, System V semaphore, Posix semaphore, and Posix threads programs shown here, I wrote similar programs for pipes, fifo, signal (sigemptyset, sigfillset, sigaction, sigprocmask, and pause), stat, lstat, and segment fault. The last is really a special debugging lesson: the program deliberately causes a segmentation fault in order to exercise ulimit -c unlimited, core dumps, and GDB.
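The segment fault program itself is not included here, but the exercise amounts to something like this minimal sketch (the file name segfault.c and the command sequence are my own illustration, not the original program):

/* segfault.c - deliberately dereference a null pointer to produce a core dump. */
int main( void )
{
    int *p = 0;
    return *p;    // segmentation fault here
}

gcc -g -o segfault segfault.c
ulimit -c unlimited    # allow core dumps in this shell
./segfault             # "Segmentation fault (core dumped)"
gdb segfault core      # bt shows the faulting line (the core file name may vary with core_pattern)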
myselect.cpp

#include <sys/types.h>
#include <sys/time.h>
#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdlib.h>

int main( int argc, char **argv )
{
    char buf[ 128 ];
    int nread;
    fd_set inputs;
    fd_set testfds;
    struct timeval tv;

    FD_ZERO( &inputs );
    FD_SET( 0, &inputs );      // Add stdin to inputs fd_set
    while(1)
    {
        testfds = inputs;      // Reinitialized every time because select modifies it.
        tv.tv_sec = 2;
        tv.tv_usec = 500000;   // 2.5 seconds
        switch( select( 1, &testfds, 0, 0, &tv ))
        {
        case 0:
            printf( "timeout\n" );
            break;
        case -1:
            perror( "select" );
            exit(1);
        default:
            if( FD_ISSET( 0, &testfds ))
            {
                nread = read( 0, buf, 100 );
                if( nread == 0 )
                {
                    printf( "keyboard done\n" );
                    exit(0);
                }
                buf[ nread ] = 0;
                printf( "read %d from keyboard: %s", nread, buf );
            }
            break;
        }
    }
    return 0;
}
myselect.cpp demonstrates the Linux select function in the simple case of reading from stdin (file descriptor 0). It shows how to prepare an fd_set and wait for activity with a timeout.
I have seen some examples where ioctl is called with FIONREAD on stdin, without explanation. I assume that the purpose is to avoid blocking in read when the user presses ^D, but it isn't needed. We can just read and, if read returns 0, the user has closed stdin. That is what I do when FD_ISSET( 0, &testfds ) is true.
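For reference, the FIONREAD form I have seen looks roughly like this (nAvail is my own name); it only reports how many bytes are already buffered:

int nAvail = 0;
if( ioctl( 0, FIONREAD, &nAvail ) == 0 && nAvail > 0 )
    nread = read( 0, buf, nAvail < 100 ? nAvail : 100 );   // read only what is buffered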
mysock.cpp #include <sys/types.h> #include <sys/socket.h> #include <stdio.h> #include <sys/un.h> #include <unistd.h> #include <stdlib.h> #include <ctype.h> #include <netdb.h> // IPPORT_RESERVED #include <arpa/inet.h> // inet_aton #include <signal.h> // Needed for f (fork) option. #include <fcntl.h> // for fcntl function to set socket non-blocking. #include <errno.h> // for EAGAIN #include <sys/ioctl.h> int sockDomain = AF_UNIX; int sockType = SOCK_STREAM; bool useFork = false; bool useSelect = false; bool delay = false; // Turn this on for clients that start immediately // after server, e.g. on the same command line, to give the server time // to get ready for them. Otherwise "connection refused". char unSockFile[] = "serverSocket"; char inServIp[] = "127.0.0.1"; uint16_t port = IPPORT_RESERVED + 100; class SockAddr { public: socklen_t len; union { struct sockaddr *a; struct sockaddr_un *u; struct sockaddr_in *i; } sa; SockAddr( bool init ); ~SockAddr() { free( sa.a ); } }; SockAddr::SockAddr( bool init ) { len = sockDomain == AF_UNIX ? sizeof( sockaddr_un ) : sizeof( sockaddr_in ); sa.a = (sockaddr*)malloc( len ); if( init ) { if( sockDomain == AF_UNIX ) { sa.u->sun_family = AF_UNIX; strcpy( sa.u->sun_path, unSockFile ); } else { sa.i->sin_family = AF_INET; //printf( "Using port %d\n", port ); sa.i->sin_port = htons( port ); inet_aton( inServIp, &sa.i->sin_addr ); //sa.i->sin_addr.s_addr = inet_addr( inServIp ); //sa.i->sin_addr.s_addr = htonl( INADDR_ANY ); } } } void sleepMs( unsigned long ms ) { struct timeval tv = { 0, ms * 1000 }; select( 0, 0, 0, 0, &tv ); } int servResponse( int fd ) { char ch; printf( "Begin servResponse\n" ); read( fd, &ch, 1 ); // Delay to demonstrate piling up of clients if( useFork ) sleepMs( 500 ); ch++; write( fd, &ch, 1 ); sleepMs( 100 ); // Without this, if the server is // restarted client connect is refused. close( fd ); printf( "End servResponse\n" ); return 0; } void servLoop( int servfd ) { SockAddr cliAddr( false ); int clifd; while( 1 ) { printf( "Server waiting\n" ); do { clifd = accept( servfd, cliAddr.sa.a, &cliAddr.len ); } while( clifd == -1 ); if( useFork ) { if( fork() == 0 ) // child { servResponse( clifd ); exit(0); // child exits. } else close( clifd ); // Parent closes its own handle. } else servResponse( clifd ); } } void selectServ( int servfd ) { SockAddr cliAddr( false ); // Client socket address. // Uninitialized except for length. int clifd; // Client socket file descriptor fd_set readfds; fd_set testfds; char ch; int maxfd = servfd; // This only grows. It is too // much trouble to reduce as clients close. FD_ZERO( &readfds ); FD_SET( servfd, &readfds ); while( 1 ) { testfds = readfds; // Copy the template to the working set, // which select will change. printf( "Server waiting\n" ); if( select( maxfd + 1, &testfds, 0, 0, 0 ) < 1 ) { perror( "select server" ); exit(1); } for( int fd = 3 ; fd <= maxfd ; fd++ ) if( FD_ISSET( fd, &testfds )) { if( fd == servfd ) { // Activity on the server socket means only one thing; a new client // has connected. Create a socket for it and add this to the file // set template. Also bump up maxfs if the new socket's descriptor is higher. 
do { clifd = accept( servfd, cliAddr.sa.a, &cliAddr.len ); } while( clifd == -1 ); FD_SET( clifd, &readfds ); if( clifd > maxfd ) maxfd = clifd; printf( "adding client on fd %d\n", clifd ); } // fd == servfd else // must be on a client socket { if( read( fd, &ch, 1 ) < 1 ) { close( fd ); FD_CLR( fd, &readfds ); printf( "removing client on fd %d\n", fd ); // Note that maxfd doesn't shrink even if it's this one. } else { sleepMs( 500 ); // To demonstrate client pile up. ch++; write( fd, &ch, 1 ); } } // client } // if activity on this fd value. } // while(1) } int server( void ) { SockAddr sockAddr( true ); // Create socket address for // connections. This is initialized with server information. // The client is initialized identically i.e. with server information. int fd; // socket file descriptor. if( sockDomain == AF_UNIX ) unlink( unSockFile ); if( useFork ) signal( SIGCHLD, SIG_IGN ); fd = socket( sockDomain, sockType, 0 ); if( fd == -1 ) { perror( "Server" ); return 2; } bind( fd, sockAddr.sa.a, sockAddr.len ); listen( fd, 5 ); // Doesn't block ( useSelect ? selectServ : servLoop )( fd ); close( fd ); return 0; } int client( void ) { SockAddr sockAddr( true ); // Create server access // socket address. Note that this is initialized with the // same information as the server. int fd = socket( sockDomain, sockType, 0 ); if( fd == -1 ) { perror( "Client" ); return 2; } // Delay to give the server a chance to get ready to receive // connections in tests where client is started immediately after // server, e.g. in the same command line. If multiple clients // are started in this condition, all must be delayed. if( delay ) sleepMs( 100 ); // connect blocks for some unspecified time if the // connection can't be established immediately. This can be // changed by fcntl O_NONBLOCK. if( connect( fd, sockAddr.sa.a, sockAddr.len ) == -1 ) { perror( "client" ); return 1; } char ch = 'A'; // printf( "Client is writing\n" ); write( fd, &ch, 1 ); // printf( "Client is reading\n" ); read( fd, &ch, 1 ); printf( "** char from server is %c **\n", ch ); close( fd ); return 0; } bool isHelpReq( char *arg ) { char* cp; if( *arg == '?' ) return true; for( cp = arg ; ispunct( *cp ) ; cp++ ) ; return strcasecmp( cp, "H" ) == 0 || strcasecmp( cp, "HELP" ) == 0; } int main( int argc, char **argv ) { char launch = 0; if( argc < 2 || isHelpReq( argv[1])) { printf( "Usage: sock s|c[uipfe]\n" ); printf( "Required: s (server) or c (client)\n" ); printf( "Option: u|i unix (default) or internet sockets\n" ); printf( "Option: f|e server uses fork or select (default inline) response\n" ); printf( "Option: p child pauses for server before connecting\n" ); printf( "e.g. ./mysock sie & ./mysock cip & ./mysock cip & ./mysock cip\n" ); return 1; } for( int idx = 0 ; argv[1][ idx ] != 0 ; idx++ ) { switch( toupper( argv[1][ idx ])) { case 'S': launch = 'S'; break; case 'C': launch = 'C'; break; case 'I': sockDomain = AF_INET; break; case 'U': sockDomain = AF_UNIX; break; case 'F': useFork = true; break; case 'E': useSelect = true; break; case 'P': delay = true; // For clients that start // immediately after server. break; } } if( launch == 0 ) { printf( "Error: c[lient] or s[erver] unspecified\n" ); return 1; } return launch == 'S' ? server() : client(); }
mysock.cpp demonstrates Linux sockets. Command-line options (processed by main) are all single case-insensitive letters. Multiple options are combined into one command-line argument.
The server is normally invoked to run in the background and can only be stopped by killall mysock (easier and better than kill PID), e.g.
./mysock si &
./mysock ci
./mysock sif &
./mysock ci
./mysock sue &
./mysock cu
The fork and select server forms support multiple simultaneous client connections. Use the following command lines to demonstrate:
./mysock sif & ./mysock cip & ./mysock cip &
./mysock cip & ps x | grep mysock
./mysock sie & ./mysock cip & ./mysock cip &
./mysock cip & ps x | grep mysock
./mysock suf & ./mysock cup & ./mysock cup &
./mysock cup & ps x | grep mysock
./mysock sue & ./mysock cup & ./mysock cup &
./mysock cup & ps x | grep mysock
Note that every client has the p option, instructing it to sleep for 100ms before trying to connect. The server needs this time to get ready. Clients can start without delay if the server is already running or if there is a delay between starting the server and the clients. This can't be done on the command line because bash doesn't accept a sleep command after launching a program in the background, e.g.
./mysock sif & sleep 1
is rejected even though
./mysock sif sleep 1
is accepted.
To test without the client delays, use a script, e.g.
./mysock sie & sleep 1
./mysock ci & ./mysock ci & ./mysock ci & ps x | grep mysock
read # Get user input to continue to avoid premature killing of delayed server.
if ps x | grep mysock ; then killall mysock ; fi
ps option x (not -x) shows processes without controlling TTYs. Coincidentally, this also shows the complete command line, e.g. mysock sif or mysock ci, which is useful since mysock alone doesn't distinguish between server and client. The option -C mysock could reduce ps output clutter by showing only mysock processes, but it shows only mysock without the command line.
Display (with added comments):
The multi-client fork test output is similar to:
[David$ ~/test] ./mysock sif & ./mysock ci & ./mysock ci & ./mysock ci & ps x | grep mysock
[1] 3380 # server PID
Server waiting
[2] 3381 # first client PID (bash background job, not a forked server child process)
Begin servResponse # child process after accept returns
Server waiting
[3] 3383 # second child PID
[4] 3384 # third child PID
Server waiting # server parent loops after forking (child process is delayed)
Begin servResponse # second child process apparently delayed relative to parent
Begin servResponse # third child process
Server waiting # server parent loops after forking third child process
3380 pts/0 S 0:00 ./mysock sif # server parent
3381 pts/0 S 0:00 ./mysock ci # first child
3382 pts/0 S 0:00 ./mysock sif # first server child process
3383 pts/0 S 0:00 ./mysock ci # second child
3384 pts/0 S 0:00 ./mysock ci # third child
3386 pts/0 S+ 0:00 grep mysock
3387 pts/0 S 0:00 ./mysock sif # second server child process
3388 pts/0 S 0:00 ./mysock sif # third server child process
[David$ ~/test] char from server is B # from child one
char from server is B # from child two
char from server is B # from child three
End servResponse # first server child
End servResponse # second server child
End servResponse # third server child.
# stops here until Enter
[2] Done ./mysock ci
[3]- Done ./mysock ci
[4]+ Done ./mysock ci
[David$ ~/test]
# ps x at this point shows mysock sif, the server parent.
The multi-client select test output is:
[David$ ~/test] ./mysock sie & ./mysock ci & ./mysock ci & ./mysock ci & ps x | grep mysock
[1] 3540 # server
[2] 3541 # first child
[3] 3542 # second child
Server waiting # Apparently the OS let the first two clients start before letting server continue.
adding client on fd 4 # response to first child connect
Server waiting
[4] 3543 # third child
3540 pts/0 S 0:00 ./mysock sie
3541 pts/0 S 0:00 ./mysock ci
3542 pts/0 S 0:00 ./mysock ci
3543 pts/0 S 0:00 ./mysock ci
3545 pts/0 S+ 0:00 grep mysock
Server waiting
adding client on fd 5 # response to second child connect
Server waiting
adding client on fd 6 # response to third child connect
char from server is B # from first child
Server waiting
removing client on fd 4
char from server is B # from second child
Server waiting
removing client on fd 5
Server waiting
char from server is B # from third child
removing client on fd 6
Server waiting
# stops here until Enter
[2] Done ./mysock ci
[3]- Done ./mysock ci
[4]+ Done ./mysock ci
[David$ ~/test]
The socket API considers the ID returned by the socket function to be the socket's descriptor. The underlying structures describing the socket are protocol-specific; in the case of TCP, the relevant structure is a TCB (Transmission Control Block). The socket API functions getsockopt and setsockopt access configuration information generically. The socket function creates a socket as specified by the domain (aka "protocol family"), type, and protocol arguments passed to it.
In a program, the ID returned by the socket function is all that’s needed to identify the socket but, across the network, a socket must be named. The name is not passed to the socket function but is attached later. The name is an address, the form of which depends on the domain.
For AF_UNIX with client and server on the same computer, the address is a file. This is not a real file; ls shows it as e.g.
srwxrwxr-x 1 David David 0 2009-09-15 10:28 serverSocket
The s attribute identifies it as a socket. The server attaches this name to a socket by calling bind. The client attaches the same name to its socket when it calls connect. accept creates the server's client socket and assigns 1 to the given addr.sun_family but nothing to addr.sun_path, which never gets assigned. struct sockaddr_un is defined in sys/un.h.
For AF_INET, the address is specified in a struct sockaddr_in, which is defined in netinet/in.h. It comprises:
int sin_family = AF_INET
USHORT sin_port = port number, which should be above IPPORT_RESERVED (defined in netdb.h) or a standard one, such as 80 for http.
struct in_addr sin_addr = dotted address, which can be created by inet_addr, e.g. inet_addr( "127.0.0.1" ) for localhost (loopback).
In some examples inet_addr is used to convert the dotted address string to in_addr_t, but man says that this is not reliable because it returns -1 as an error, and -1 is also the legitimate address 255.255.255.255, which broadcasts to every host. man says to use inet_aton instead.
int inet_aton(const char *cp, struct in_addr *inp);
in_addr_t inet_addr(const char *cp);
/usr/include/netinet/in.h defines both the in_addr_t returned by inet_addr and
the struct in_addr argument to inet_aton.
typedef uint32_t in_addr_t;
struct in_addr
{
    in_addr_t s_addr;
};
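A minimal sketch of the recommended call, checking its return (inet_aton returns 0, not -1, on a bad string):

struct in_addr addr;
if( inet_aton( "127.0.0.1", &addr ) == 0 )
    fprintf( stderr, "bad dotted address\n" );
else
    printf( "s_addr = 0x%08x\n", addr.s_addr );   // stored in network byte order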
in.h also defines:
struct sockaddr_in
{
    __SOCKADDR_COMMON (sin_);
    in_port_t sin_port;                  /* Port number. */
    struct in_addr sin_addr;             /* Internet address. */
    /* Pad to size of `struct sockaddr'. */
    unsigned char sin_zero[sizeof (struct sockaddr) -
                           __SOCKADDR_COMMON_SIZE -
                           sizeof (in_port_t) -
                           sizeof (struct in_addr)];
};
GDB shows sizeof( sockaddr_in ) = 16 and sizeof( sockaddr_un ) = 110.
netstat shows the waiting server but only when invoked by netstat -ap
-a = Show both listening and non-listening sockets
-p = Show the PID and name of the program to which each socket belongs
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 localhost.loca:hpvmmcontrol *:* LISTEN 4917/mysock
All sockets require a unique file descriptor (int fd). They also need a sockaddr during creation. Client and server sockets require their sockaddr to be initialized prior to creation. A server-client socket, which is created by the accept function, requires a sockaddr for accept to fill in. The server and client pass the length of the sockaddr by value to bind and connect, respectively, but the length is passed by address (as an lvalue) to accept and must be initialized before the call (accept may change the value). Thus, the client and server sockets require a fully initialized sockaddr for creation but don't need any length object. In contrast, the server-client socket needs a sockaddr, which doesn't have to be initialized, but it needs an initialized length object. Note that the server and client sockaddrs are initialized with identical (server) information.
Although each socket requires a unique file descriptor (int fd), multiple sockets can be created reusing the same sockaddr. This is immaterial to the server listen and client connect sockets because only one socket is created anyway. However, the server may create multiple accept sockets. For these, separating fd and sockaddr saves memory (and a little time, though not much, since the sockaddr is uninitialized). It would be possible to make an efficient Socket class including both sockaddr and fd by making the sockaddr static, but this would prevent creating different socket types. However, since only the server does this, and it uses the initialized sockaddr only once to create the listen socket, and this precedes using the sockaddr to create the server-client sockets, for which left-over initialization is irrelevant, this approach would be OK. I have decided not to do this because I wanted to make it clear that the sockaddr is used only for creation. A Socket class containing both fd and sockaddr would be referenced repeatedly in order to access its fd, obfuscating the fact that the sockaddr has no purpose after the socket has been created.
I have combined the generic, Unix, and Internet sockaddrs into a union in SockAddr in order to demonstrate the similarities between Unix and Internet sockets in a simple way. It wastes memory in the Internet case because sockaddr_in comprises only 16 bytes while sockaddr_un comprises 110 bytes. However, doing this lets the using code be oblivious to whether Unix or Internet sockets are in play. Alternatively, the two types could be derived from a base class, but then they could only be instantiated with new and delete, and the functions that use them would have to be aware of the protocol. A production program would accept this complexity, but the purpose of this program is demonstration rather than efficiency.
I have included the size of the sockaddr (socklen_t len) in the SockAddr class even though only the server-client sockets need it as an object (for passing by address to accept), because after creation a sockaddr is handled the same whether it is Unix or Internet except for its length, which differs by type. Including the length in the class means that the sockaddr can be used without knowing its type.
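Outside the SockAddr class, the same requirement looks like this minimal sketch (servfd is assumed to be the listening socket's descriptor):

struct sockaddr_un cliAddr;              // contents need not be initialized
socklen_t cliLen = sizeof( cliAddr );    // but the length must be, because accept updates it
int clifd = accept( servfd, (struct sockaddr*)&cliAddr, &cliLen );
// On return, cliLen holds the length that accept actually filled in.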
The purpose of my servResponse function is to demonstrate the server's response to a client. The client sends one character; it is incremented and returned; then the socket is closed. This is separated from the main server function to simplify sharing between the forking and non-forking configurations.
On Linux, select() modifies its timeout argument to reflect the amount of time not slept. My sleepMs function exploits select with empty fd_sets to produce sub-second delays:
struct timeval {
    long tv_sec;    /* seconds */
    long tv_usec;   /* microseconds */
};
int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);
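An alternative way to get a sub-second delay, if involving select is undesirable, is nanosleep; a sketch equivalent to sleepMs( 500 ):

#include <time.h>
struct timespec ts = { 0, 500 * 1000000L };   // 0 seconds, 500 ms expressed in nanoseconds
nanosleep( &ts, 0 );                          // second argument would receive remaining time if interrupted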
My servLoop function is the general server for all forms not based on select. In all cases, it loops forever, calling accept, which blocks until a client connects. In the simplest form, inline response, we simply call servResponse to read and reply to the one char we are expecting from the contrived client. servResponse closes the server's client socket after writing. In the fork form, fork is called, cloning this entire process. fork returns to both the parent (original) and child (clone) process. The parent closes its reference to the client socket and continues looping. The child process calls servResponse. Neither inline nor fork response is reliable: if the server is shut down and restarted, the client's connect request may be denied.
The function selectServ is my socket server based on select. When it is invoked, the caller has already created the server socket (socket+bind+listen) and passes us the server socket file descriptor. Here we add the server socket to the fd_set and begin a continuous loop. In the loop we call select, passing the file set, which initially contains only the server socket. select returns when a client connects. Iterating over possible fd values from 3 (0, 1, and 2 are stdin/out/err) to maxfd (inclusive), we invoke FD_ISSET to find the first fd with activity. In this application only one will have activity at a time. First it will be on the server socket, which means that a client is connecting. We call accept to create a socket for the client, and its fd is added to the file set. Subsequently, select may indicate activity on either the server socket or the first (or any additional) client socket.
The only activity on the server socket will be a client connecting. When the client writes into the socket (in this program only the character 'A'), select reports activity on the client socket. When the client closes its socket, select will also indicate activity on the client socket. To distinguish between a client's write and close, we just try to read the one character we are expecting. If read returns 0, the socket has closed. -1 is an error, but we treat it the same as closing. Our response is to close the server-client socket and remove its fd from the file set.
In this program performance is not a concern but, to illustrate one aspect of it, I don’t test all FD_SETSIZE (1024) file descriptors as is done in some examples but only from 3 to maxfd. maxfd is initially the server’s socket. Whenever accept gives us a client socket higher than maxfd, we bump up maxfd. The increase levels off as new clients are offset by departing ones. Nevertheless, I don’t reduce maxfd because that is too much trouble.
Two fd_sets are used because select will modify the one passed to it, so I need to keep a copy or else record each client fd in some other way and continually rebuild the fd_set passed to select.
My server function is the front end for all types of servers. It does some housekeeping required for all types and then launches the selected type.
Some sources suggest using signal( SIGCHLD, SIG_IGN ) to prevent zombie processes when forking a child but not waiting for it to complete. man signal says not to use signal at all but sigaction, and specifically says to catch SIGCHLD and wait in the handler. This is too complicated for this application. info (emacs ^hi libc - Signal Handling - Standard Signals - Job Control Signals) describes SIGCHLD as being sent to the parent when a child terminates and says that the default is to ignore it. So maybe the best thing is to do nothing here.
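For reference, the sigaction form that man signal points to would look roughly like this (reapChild is my own name); I did not use it in mysock:

#include <signal.h>
#include <sys/wait.h>

static void reapChild( int sig )
{
    (void)sig;
    while( waitpid( -1, 0, WNOHANG ) > 0 )   // reap every finished child without blocking
        ;
}

// In server(), instead of signal( SIGCHLD, SIG_IGN ):
struct sigaction sa;
sa.sa_handler = reapChild;
sigemptyset( &sa.sa_mask );
sa.sa_flags = SA_RESTART;      // restart calls such as accept if interrupted by the signal
sigaction( SIGCHLD, &sa, 0 );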
accept blocks unless the socket characteristics are modified by fcntl, in which case accept supposedly returns EWOULDBLOCK if there are no pending connections. It actually returns -1, with errno set to EAGAIN/EWOULDBLOCK.
fcntl( servSock.fd, F_SETFL, O_NONBLOCK | fcntl( servSock.fd, F_GETFL, 0 ));
while(( cliSock.fd = accept( servSock.fd, PSOCKADDR( &cliSock.addr ), &cliSock.addrLen )) < 0 ) ;
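This loop spins on every -1. A sketch that distinguishes "no pending connection" from a real error, using the same names as the fragment above:

while(( cliSock.fd = accept( servSock.fd, PSOCKADDR( &cliSock.addr ), &cliSock.addrLen )) < 0 )
{
    if( errno != EAGAIN && errno != EWOULDBLOCK )
    {
        perror( "accept" );   // a real error, not just an empty connection queue
        break;
    }
    sleepMs( 50 );            // nothing pending; poll again without burning CPU
}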
#!/bin/bash
./mysock sie & ./mysock cip & ./mysock cip & ./mysock cip & ps x | grep mysock
#./mysock sie & ./mysock cip & ./mysock cip & ./mysock cip & ps -f
#./mysock sif & ./mysock cip & ./mysock cip & ./mysock cip & ps x | grep mysock
#./mysock sue & ./mysock cup & ./mysock cup & ./mysock cup & ps x | grep mysock
#./mysock suf & ./mysock cup & ./mysock cup & ./mysock cup & ps x | grep mysock
read # Use user input to delay for all transactions to finish before killing the server.
if ps | grep mysock ; then killall mysock ; fi
tsock is a bash script for testing mysock by starting mysock server in background and immediately starting several clients.
hostserv.cpp #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <string.h> #include <ctype.h> typedef struct sockaddr * PSOCKADDR; void demoGetnameinfo( struct sockaddr_in* sa ) { char host[ 50 ]; char service[ 100 ]; getnameinfo( (PSOCKADDR)sa, sizeof( *sa ), host, sizeof( host ), service, sizeof( service ), 0 ); printf( "Demo getnameinfo: host=%s service=%s\n", host, service ); } bool isHelpReq( char *arg ) { char* cp; if( *arg == '?' ) return true; for( cp = arg ; ispunct( *cp ) ; cp++ ) ; return strcasecmp( cp, "H" ) == 0 || strcasecmp( cp, "HELP" ) == 0; } char DayTime[] = "daytime"; char LocalHost[] = "localhost"; char ThisPgm[] = "hostserv"; int main( int argc, char **argv ) { char buf[ 128 ]; struct addrinfo hints; struct addrinfo* ai; struct sockaddr_in* sa; int stat; int fd; ssize_t len; int idx; char* hostName = LocalHost; char* serviceName = DayTime; int sockType = SOCK_STREAM; int ret = 0; if( argc > 1 ) { if( isHelpReq( argv[1])) { printf( "Usage: hostserv [d|s] [host hostName] [service serviceName]\n" ); printf( "d = datagram s = stream (default)\n" ); printf( "Default host is \"localhost\"\n" ); printf( "Default service is \"daytime\"\n" ); return 0; } for( idx = 1 ; idx < argc ; idx++ ) { if( strlen( argv[ idx ]) == 1 ) switch( toupper( argv[ idx ][0])) { case 'D': sockType = SOCK_DGRAM; break; case 'S': sockType = SOCK_STREAM; break;; default: goto argErr; } else if( strcasecmp( argv[ idx ], "host" ) == 0 ) hostName = argv[ ++idx ]; else if( strcasecmp( argv[ idx ], "service" ) == 0 ) serviceName = argv[ ++idx ]; else { argErr: printf( "Unrecognized option %s\n", argv[ idx ]); return 1; } } } memset(&hints, 0, sizeof(struct addrinfo)); hints.ai_family = AF_INET; // AF_INET, INET6, AF_UNSPEC hints.ai_socktype = sockType; // SOCK_STREAM or SOCK_DGRAM hints.ai_flags = 0; hints.ai_protocol = 0; // 0 = any protocol printf( "getaddrinfo for \"%s\" \"%s\" socktype %d (%s)\n", hostName, serviceName, sockType, sockType == SOCK_STREAM ? "STREAM" : "DGRAM" ); stat = getaddrinfo( hostName, serviceName , &hints, &ai ); if( stat != 0 ) { printf( "getaddrinfo: %s\n", gai_strerror( stat )); return 1; } printf( "flags=%u family=%d socktype=%d protocol=%d addrlen=%u \ canonname=%s next=%p\n", ai->ai_flags, ai->ai_family, ai->ai_socktype, ai->ai_protocol, ai->ai_addrlen, ai->ai_canonname, ai->ai_next ); // getaddrinfo assigns ai_addr the address of a sockaddr_in, // which can be used to connect a socket to the host service. sa = (struct sockaddr_in *)ai->ai_addr; short saport = ntohs( sa->sin_port ); char* saddr = inet_ntoa( sa->sin_addr ); printf( "sockaddr_in (*ai_addr) is family=%d port=%d addr=%s\n", sa->sin_family, saport, saddr ); demoGetnameinfo( sa ); fd = socket( AF_INET, sockType, 0 ); if( fd == -1 ) { perror( ThisPgm ); return 1; } if( sockType == SOCK_DGRAM ) { printf( "UDP request to %s port %d\n", saddr, saport ); if( sendto( fd, // int s is socket file descriptor buf, // const void *buf is message sent to port as dgram 1, // size_t len of message. Meaningless for daytime. 
0, // int flags PSOCKADDR( sa ), // const struct sockaddr *to sizeof( *sa ) // socklen_t tolen ) == -1 ) { perror( ThisPgm ); return 1; } printf( "sendto succeeded\n" ); socklen_t slen = sizeof( *sa ); len = recvfrom( fd, buf, sizeof( buf ), 0, PSOCKADDR( sa ), &slen ); printf( "recvfrom succeeded\n" ); } else { printf( "Connect to %s port %d\n", saddr, saport ); if( connect( fd, PSOCKADDR( sa ), sizeof( *sa )) == -1 ) { perror( "service" ); ret = 2; } len = read( fd, buf, sizeof( buf )); } if( len < 1 ) { perror( ThisPgm ); return 2; } buf[ len ] = 0; printf( "%s\n", buf ); close( fd ); freeaddrinfo(ai); return ret; }
hostserv.cpp is a Linux user-mode console program to demonstrate sockets and IP services using the daytime service in both stream (tcp) and dgram (udp) mode. This requires the xinetd daemon to be running and the daytime-stream and daytime-dgram services turned on. It demonstrates getaddrinfo and getnameinfo as well as stream and datagram client I/O.
daytime is a standard UDP and/or STREAM service on port 13. It is typically used only for testing. It is one of the port/services handled by the Internet daemon super-server xinetd. When we ask getaddrinfo for the local host daytime service, it always gives us (*addrinfo.ai_addr) a correct socket address even if the service is not available. If we attempt to connect when the service isn't available, the OS says "service: Connection refused". For daytime to be available, xinetd must be running and the daytime service enabled. The standard Fedora distro doesn't include xinetd, but System > Administration > Add/Remove Software can find, download, and install it. This process automatically configures it to be included at boot. This should be disabled after testing unless there is some other use for it (e.g. a telnet server). Installation of xinetd creates the /etc/xinetd.d directory, under which is a configuration file for each available service. There is no "daytime" entry, but there are "daytime-stream" and "daytime-dgram". As root (or su) do the following for testing:
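(Approximately; the chkconfig names are the ones noted below.)

chkconfig daytime-stream on     # and/or chkconfig daytime-dgram on for UDP
service xinetd start            # or service xinetd restart if it is already running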
If the connection is still refused, use ps -A to verify that xinetd is running. Afterward, run chkconfig xinetd off to make sure it doesn't automatically start at boot. It is also possible to turn xinetd and daytime-stream (or dgram) on and off through the GUI dialog System > Administration > Services or using my inet bash script.
struct addrinfo {
    int              ai_flags;
    int              ai_family;
    int              ai_socktype;
    int              ai_protocol;
    size_t           ai_addrlen;
    struct sockaddr *ai_addr;       /* On return from getaddrinfo, this points to a
                                       sockaddr_in, which can be used to connect a
                                       socket to the host service. */
    char            *ai_canonname;
    struct addrinfo *ai_next;
};
From in.h:
struct sockaddr_in
{
    __SOCKADDR_COMMON (sin_);
    in_port_t sin_port;                  /* Port number. */
    struct in_addr sin_addr;             /* Internet address. */
    /* Pad to size of `struct sockaddr'. */
    unsigned char sin_zero[sizeof (struct sockaddr) -
                           __SOCKADDR_COMMON_SIZE -
                           sizeof (in_port_t) -
                           sizeof (struct in_addr)];
};
Both UDP and STREAM daytime services are available. Under /etc/xinetd.d we see the daytime-dgram and daytime-stream files, and each is individually enabled by chkconfig daytime-dgram on and chkconfig daytime-stream on. However, the service argument to getaddrinfo is the generic name "daytime". The hints argument specifies whether we want SOCK_STREAM or SOCK_DGRAM.
Some code references call getservbyname, but this is obsolete, having been replaced by getaddrinfo, which combines the host and service name. That is what I have used in my program. Some have also suggested that getservbyname can confirm "that the service exists". This is misleading. Both getaddrinfo and getservbyname report that the service exists even when it is not turned on. If xinetd is not running, or it is but daytime-stream or daytime-dgram is not turned on, these functions still report it as available.
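For comparison, the obsolete call looks like this; it consults the services database (e.g. /etc/services), which is consistent with it reporting daytime whether or not xinetd is actually serving it:

struct servent *se = getservbyname( "daytime", "tcp" );    // <netdb.h>
if( se != 0 )
    printf( "daytime is port %d\n", ntohs( se->s_port ));  // s_port is in network byte order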
The purpose of demoGetnameinfo is to demonstrate the Linux getnameinfo function, the inverse of getaddrinfo. Its one argument, a sockaddr_in, describes an IP address and service. For this test, the argument is taken from the return of getaddrinfo, which was given the host and service names. A practical application might be to get the host name of a peer that has connected to this computer.
#!/bin/bash
# inet turns xinetd daemon on (no argument)
# and off (argument = 0)
# I use this only for turning on the daytime service,
# which I use only for testing socket programs
if [ "$1" = "0" ] ; then
  if ! ps -A | grep xinetd ; then
    echo "xinetd is not running"
  else
    echo "Turning off xinetd daemon requires root password"
    su -c 'service xinetd stop'
  fi
else
  if ps -A | grep xinetd ; then
    echo "xinetd is already running"
  else
    echo "Turning on xinetd daemon requires root password"
    # After installing xinetd, it may be necessary to turn
    # on daytime using chkconfig and then turn off and then
    # back on xinetd. However, after this one time, simply
    # turning on xinetd seems to be sufficient.
    # chkconfig daytime-stream on
    # su -c 'service xinetd stop'
    su -c 'service xinetd start'
    ps -A | grep xinetd
  fi
fi
svsem.cpp #include <sys/stat.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <pthread.h> #include <ctype.h> #include <errno.h> // My headers: #include "cdefs.h" #include "conUtil.h" #include "svUtil.h" #include "posUtil.h" #include <sys/ipc.h> // IPC_PRIVATE, IPC_CREAT etc. #include <sys/sem.h> // semget, semctl, semop #ifdef _SEM_SEMUN_UNDEFINED // copy from man semctl or <bits/sem.h> union semun { int val; /* Value for SETVAL */ struct semid_ds *buf; /* Buffer for IPC_STAT, IPC_SET */ unsigned short *array; /* Array for GETALL, SETALL */ struct seminfo *__buf; /* Buffer for IPC_INFO (Linux-specific) */ }; #endif char memName[] = "/svSemMem"; typedef struct { int semId; int testCnt; } SharedMem; SysvSem* psem; // Shared by Thread and Main int gTest; char msg[100]; void sleepMs( ULONG ms ) { struct timeval tv = { 0, ms * 1000 }; select( 0, 0, 0, 0, &tv ); } void showUtilErr( UtilThrowErr err ) { if( err.msg[0] == 1 ) perror( err.msg + 1 ); else printf( err.msg ); } void* yieldThread( void* arg ) { printf( "Thread: begin\n" ); printf( "Thread: I'm now going to wait on the semaphore.\n" ); psem->wait(); if( gTest == 0 ) { gTest = 1; ConWidPrint( "Thread: return from wait. Main yielded to me. I will \ signal Main to wake up.\n" ); psem->post(); // To release Main. } else ConWidPrint( "Thread: return from wait. Main did not yield.\n" ); printf( "Thread: done\n" ); return 0; } void demoThreadYield( void ) { pthread_t tid; ConWidPrint( "Demonstrate thread yield.\n" ); printf( "Main: creating unsignaled semaphore.\n" ); try { psem = new SysvSem( 1, 0 ); gTest = 0; printf( "Main: launching Thread.\n" ); pthread_create( &tid, 0, yieldThread, 0 ); sleep(1); ConWidPrint( "Main: I will post to the semaphore but then \ immediately try to wait on it myself.\n" ); psem->post(); psem->wait(); if( gTest == 0 ) { gTest = 1; // Tell thread it doesn't need to wake Main. ConWidPrint( "Main: return from wait. I did not yield to Thread.\n" ); psem->post(); // To release Thread. } else ConWidPrint( "Main: return from wait. I yielded to Thread.\n" ); pthread_join( tid, 0 ); } catch( UtilThrowErr err ) { showUtilErr( err ); } delete psem; // delete checks for NULL so we don't have to. printf( "Main: done\n" ); } void demoProcYield( void ) { ConWidPrint( "Demonstrate proc yield\n" ); try { printf( "Master: creating shared memory\n" ); PosMem mem( memName, sizeof( SharedMem )); if( ! mem.mCreator ) { printf( "Master: reusing left over memory file \"%s\"\n", memName ); mem.mCreator = true; // Defense against left over // from previously aborted program. The constructor will // resize it per our request but will assign mCreator // false. Since we know that we are the only creator, we // can safely set it true so that the destructor will // unlink it. } printf( "Master: creating unsignaled semaphore\n" ); SysvSem sem( 1, 0 ); SharedMem* mp = (SharedMem*)mem.mP.v; mp->semId = sem.mId; mp->testCnt = 0; printf( "Master: launching Slave (svsema).\n" ); sprintf( msg, "./svsema 1 &" ); system( msg ); sleep(1); ConWidPrint( "Master: I will post to the semaphore but then \ immediately try to wait on it myself.\n" ); sem.post(); sem.wait(); if( mp->testCnt == 0 ) { mp->testCnt = 2; // Tell Slave it doesn't // need to wake Master. ConWidPrint( "Master: return from wait. I did not yield to Slave.\n" ); sem.post(); // Wake up Slave. } else ConWidPrint( "Master: return from wait. 
I yielded to Slave.\n" ); } catch( UtilThrowErr err ) { showUtilErr( err ); } printf( "Main: done\n" ); sleep(1); // To be sure that menu prints // after last Slave message. } void demoCreation( void ) { USHORT usvals[10]; union semun semarg; semarg.array = usvals; int idx; int test; SysvSem* psem; ConWidPrint( "Demonstrate unusual SysvSem creation\n" ); try { for( test = 0 ; test < 3 ; test++ ) { switch( test ) { case 0: printf( "Create set of 10 all initialized to default 0.\n" ); psem = new SysvSem( 10, -1 ); break; case 1: printf( "Create set of 10 initialized to 1.\n" ); psem = new SysvSem( 10, 1, -1 ); break; case 2: printf( "Create set of 10 initialized to 3, 2, 1, 0...\n" ); psem = new SysvSem( 10, 3, 2, 1, 0, -1 ); break; } semctl( psem->mId, 0, GETALL, semarg ); printf( "The values (via semctl-GETALL) are:\n" ); for( idx = 0 ; idx < 10 ; idx++ ) printf( "%d ", usvals[ idx ]); putchar( '\n' ); delete psem; } } catch( UtilThrowErr err ) { showUtilErr( err ); } } void* multiValThread( void* arg ) { struct sembuf waitOn0For3 = { 0, -3, 0 }; printf( "Thread: begin.\n" ); printf( "Thread: waiting on semaphore 0 value at least 3\n" ); semop( psem->mId, &waitOn0For3, 1 ); if( gTest == 0 ) { gTest = 1; ConWidPrint( "Thread: return from wait. Main yielded to me. I will \ signal Main to wake up.\n" ); psem->post(); // To release Main. } else ConWidPrint( "Thread: return from wait. Main did not yield.\n" ); printf( "Thread: done\n" ); return 0; } void* multiSemThread( void* arg ) { struct sembuf waitMultiple[] = { { 0, -1, 0 }, { 2, -1, 0 }, { 5, -1, 0 }}; printf( "Thread: begin.\n" ); printf( "Thread: waiting for semaphores 0, 2, and 5 to be signaled \ (value at least 1).\n" ); semop( psem->mId, waitMultiple, 3 ); if( gTest == 0 ) { gTest = 1; ConWidPrint( "Thread: return from wait. Main yielded to me. I will \ signal Main to wake up.\n" ); psem->post(); // To release Main. } else ConWidPrint( "Thread: return from wait. Main did not yield.\n" ); printf( "Thread: done\n" ); return 0; } void demoMultiple( void ) { pthread_t tid; int idx; ConWidPrint( "Demonstrate operations on multiple semaphores and on \ values larger than 1. We will show, using threads, that \ a poster yields to a waiter only when all wait conditions \ are satisfied.\n" ); try { ConWidPrint( "First we will demonstrate wait on a single semaphore \ for a value larger than 1.\n" ); ConWidPrint( "Main: creating one uninitialized semaphore.\n" ); psem = new SysvSem( 1, 0 ); gTest = 0; printf( "Main: spawning thread.\n" ); pthread_create( &tid, 0, multiValThread, 0 ); sleep(1); ConWidPrint( "Main: I will post (1) to the semaphore three times \ and then try to wait on it myself\n" ); for( idx = 0 ; idx < 3 ; idx++ ) { printf( "Main: posting +1\n" ); psem->post(); } psem->wait(); if( gTest == 0 ) { gTest = 1; // Tell thread it doesn't need to wake Main. ConWidPrint( "Main: return from wait. I did not yield to Thread.\n" ); psem->post(); // To release Thread. } else ConWidPrint( "Main: return from wait. 
I yielded to Thread.\n" ); pthread_join( tid, 0 ); delete psem; ConWidPrint( "Now we will demonstrate wait on multiple semaphores.\n" ); ConWidPrint( "Main: creating six uninitialized semaphore.\n" ); psem = new SysvSem( 6, -1 ); gTest = 0; printf( "Main: spawning thread.\n" ); pthread_create( &tid, 0, multiSemThread, 0 ); sleep(1); ConWidPrint( "Main: I will post (1) to semaphores 0, 2, and 5 and then \ try to wait on 0 myself\n" ); printf( "Main: posting sem 0\n" ); psem->post( 0 ); printf( "Main: posting sem 2\n" ); psem->post( 2 ); printf( "Main: posting sem 5\n" ); psem->post( 5 ); psem->wait(); if( gTest == 0 ) { gTest = 1; // Tell thread it doesn't need to wake Main. ConWidPrint( "Main: return from wait. I did not yield to Thread.\n" ); psem->post(); // To release Thread. } else ConWidPrint( "Main: return from wait. I yielded to Thread.\n" ); pthread_join( tid, 0 ); delete psem; } catch( UtilThrowErr err ) { showUtilErr( err ); } } int main( int argc, char **argv ) { int ch; conOutPrep(); conSetRawInput( true ); while(1) { ConWidPrint( "Press Q=quit, T=thread yield, P=proc yield, C=Creation, M=Mutiple\n"); ch = ConPromptNoMsg(); switch( toupper( ch )) { case 'Q': case EOF: conSetRawInput( false ); return 0; case 'T': demoThreadYield(); break; case 'P': demoProcYield(); break; case 'C': demoCreation(); break; case 'M': demoMultiple(); break; default: break; } } return 0; }
Although Posix semaphores are easier to use than System V, they have a glaring deficiency in some circumstances. System V semaphores are Hoare type, which means that when the signaler (V in Dijkstra's terminology) raises a semaphore off 0 it automatically yields to any waiter (Dijkstra's P). Posix semaphores are Mesa type, which do not yield. In some cases the difference is immaterial, but when V and P have a producer-consumer relationship, Hoare type is more likely to avoid bottlenecks because completion of the consumer process frees the resource. There are also certain event-signaling relationships that can be reliably enforced by Hoare but not by Mesa. In any case, the Hoare behavior is deterministic, simplifying work-arounds even when it is not desired. There is no predicting when the signaler of a Mesa semaphore is going to yield, reducing the avenues available for controlling the situation.
It has been suggested that Hoare behavior can be emulated with Posix semaphores by calling the system function sched_yield immediately after signaling a semaphore. I have found that this has practically no effect on yield delay, while the delay resulting from calling sleep is noticeable but unpredictable. Regardless of this problem, anything can happen immediately if the signaler doesn't yield, so the suggestion that Hoare can somehow be simulated is fundamentally wrong. The only way to get Hoare behavior in Linux is to use System V semaphores.
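The suggested emulation amounts to nothing more than this (sem assumed to be a Posix sem_t that another thread is waiting on):

#include <sched.h>
sem_post( &sem );     // signal the Posix semaphore
sched_yield();        // hope the waiter runs now; in my tests this made practically no difference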
The purpose of svsem.cpp is to demonstrate System V semaphores. Threads are used but that is not the purpose. For that see thread.cpp. It also demonstrates semaphores between procs. For that, the svsema program is spawned. These tests show that the posting thread yields to the waiting thread immediately when all wait conditions are satisfied. These tests also show how to use SysvSem class, alone and in conjunction with the underlying semop for unusual operations. They coincidentally show how to use the PosMem class, which provides the memory shared between procs.
BUILD
This must be compiled with -D_REENTRANT because it has a thread. It must be
linked with -lpthread or -lrt for thread and for semaphore functions. It also
must be linked with conUtil, svUtil and posUtil and is dependent on conUtil.h,
svUtil.h, and posUtil.h
svsem: svsem.o conUtil.o svUtil.o posUtil.o
$(CCR) -o $@ $^ -lrt
SUPPORT PROGRAM
The proc yield demo (select P) requires the support program svsema.
Most of these tests demonstrate poster-waiter thread and proc yielding. All confirm that the poster yields when all of the waiter's conditions are met. Printed messages show this but are a little difficult to quickly decipher. To simplify verification, the poster and waiter read and write a global variable in a way that shows which one yielded. For threads, the variable is the module global gTest. For procs, it is SharedMem.testCnt.
The poster initializes the variable to 0 and then spawns the waiter, which waits on the unsignaled semaphore or semaphores. Then the poster signals (posts) the semaphore and immediately waits on it. Both poster and waiter do the same thing on return from wait. If the variable is still 0 they report that they are the winner; they assign 1 to the variable; and they post to the semaphore to wake up the other to keep it from hanging.
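The SysvSem wrapper is my own and is not listed here; for the simplest case, the underlying System V calls it stands in for are roughly these (semun may need to be defined by the program, as noted below):

#include <sys/ipc.h>
#include <sys/sem.h>

int semId = semget( IPC_PRIVATE, 1, IPC_CREAT | 0600 );   // a set of one semaphore
union semun arg;
arg.val = 0;
semctl( semId, 0, SETVAL, arg );                          // explicitly start unsignaled

struct sembuf post = { 0, +1, 0 };                        // post() equivalent
struct sembuf wait = { 0, -1, 0 };                        // wait() equivalent
semop( semId, &post, 1 );                                 // poster signals...
semop( semId, &wait, 1 );                                 // ...and immediately waits (the yield test)

semctl( semId, 0, IPC_RMID );                             // discard the set when done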
THREAD YIELD (T)
Simple demonstration of SysvSem class and that (POSIX) threads adhere to the
system v semaphore behavior where a thread posting to the semaphore yields to
a thread waiting on it if the post satisfies the wait conditions. In this
example, the conditions are the simplest possible, a value of at least 1 on
one semaphore. For more complex wait conditions see DEMO MULTIPLE.
Output Trace:
Demonstrate thread yield.
Main: creating unsignaled semaphore.
Main: launching Thread.
Thread: begin
Thread: I'm now going to wait on the semaphore.
Main: I will post to the semaphore but then immediately try to wait on it myself.
Thread: return from wait. Main yielded to me. I will signal Main to wake up.
Thread: done
Main: return from wait. I yielded to Thread.
Main: done
PROC YIELD (P)
Demonstrate posting proc yield to waiting proc. This coincidentally
demonstrates use of PosMem class wrapper for POSIX shared memory. A second
process is needed for this test. It is provided by svsema (in svsema.cpp). To
share information via a memory mapped file, svsem.cpp and svsema.cpp both embed
the file name and structure mapping memory use. svsem is the designated
creator, which it indicates to the PosMem constructor by passing a non-0 size
argument. The memory is used for two items, the semaphore ID passed by Master
to Slave and the variable by which the two exchange their state information to
determine the yield winner when one posts while the other waits. The file name
and struct could also be defined in a header file included by both programs.
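PosMem is also my own wrapper and is not listed here; the POSIX calls it stands in for are approximately these (SharedMem and memName as in the listing above):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int fd = shm_open( memName, O_RDWR | O_CREAT, S_IRUSR | S_IWUSR );
ftruncate( fd, sizeof( SharedMem ));                       // the creator sets the size
SharedMem* mp = (SharedMem*)mmap( 0, sizeof( SharedMem ),
                                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0 );
// ... use mp->semId and mp->testCnt ...
munmap( mp, sizeof( SharedMem ));
close( fd );
shm_unlink( memName );                                     // the creator removes /dev/shm/svSemMem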
Output Trace:
Demonstrate proc yield
Master: creating shared memory
Master: creating unsignaled semaphore
Master: launching Slave (svsema).
Slave: begin
Slave: opening shared memory
Slave: opening semaphore
Slave: I'm now going to wait on the semaphore
Master: I will post to the semaphore but then immediately
try to wait on it myself.
Slave: return from wait. Master yielded to me. I will signal
Master to wake up.
Master: return from wait. I yielded to Slave.
Main: done
Slave: done
CREATION (C)
This just tests the value repetition feature of the SysvSem constructor.
Semaphore sets are created and discarded without being used. This
coincidentally is an example of mixing SysvSem class with calls directly
to semctl.
Output Trace:
Demonstrate unusual SysvSem creation
Create set of 10 all initialized to default 0.
The values (via semctl-GETALL) are:
0 0 0 0 0 0 0 0 0 0
Create set of 10 initialized to 1.
The values (via semctl-GETALL) are:
1 1 1 1 1 1 1 1 1 1
Create set of 10 initialized to 3, 2, 1, 0...
The values (via semctl-GETALL) are:
3 2 1 0 0 0 0 0 0 0
MULTIPLE (M)
This shows that the yield behavior of System V semaphores extends to higher values and multiple semaphores. Posts that don't satisfy the wait condition do not cause the thread to yield, but when all conditions are met the posting thread immediately yields. This also demonstrates how to mix the SysvSem class with semop to implement complex operations not provided by SysvSem itself.
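The two wait operations used by the demo threads, extracted from the listing above (semId standing for psem->mId):

struct sembuf waitOn0For3 = { 0, -3, 0 };                        // one semaphore, value must reach 3
semop( semId, &waitOn0For3, 1 );

struct sembuf waitMultiple[] = { { 0, -1, 0 }, { 2, -1, 0 }, { 5, -1, 0 } };
semop( semId, waitMultiple, 3 );                                 // atomic: blocks until all three are satisfied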
Output Trace:
Demonstrate operations on multiple semaphores and on values larger than 1. We will
show, using threads, that a poster yields to a waiter only when all wait conditions
are satisfied.
First we will demonstrate wait on a single semaphore for a value larger than 1.
Main: creating one uninitialized semaphore.
Main: spawning thread.
Thread: begin.
Thread: waiting on semaphore 0 value at least 3
Main: I will post (1) to the semaphore three times and then try to wait on it myself
Main: posting +1
Main: posting +1
Main: posting +1
Thread: return from wait. Main yielded to me. I will signal Main to wake up.
Thread: done
Main: return from wait. I yielded to Thread.
Now we will demonstrate wait on multiple semaphores.
Main: creating six uninitialized semaphore.
Main: spawning thread.
Thread: begin.
Thread: waiting for semaphores 0, 2, and 5 to be signaled (value at least 1).
Main: I will post (1) to semaphores 0, 2, and 5 and then try to wait on 0 myself
Main: posting sem 0
Main: posting sem 2
Main: posting sem 5
Thread: return from wait. Main yielded to me. I will signal Main to wake up.
Thread: done
Main: return from wait. I yielded to Thread.
svsem.cpp includes header files and contains declarations needed only for the atypical demonstrations in this program. For normal use of system v semaphores, an application only needs to include my svUtil.h and does not need to define semun.
The purpose of showUtilErr is to print a UtilThrowErr message. If the message begins with the value 1 (not '1') it is intended to be the argument to perror. Otherwise the msg is a string whose last char is newline, and it is printed as-is.
The purpose of demoThreadYield is to demonstrate that posting to a System V semaphore on which a (POSIX) thread is waiting always causes the posting thread to yield to the waiter, i.e. that System V semaphores are Hoare type even if the waiting thread is Posix, whose own semaphores are Mesa type.
This uses new to create the System V semaphore wrapper object instead of simply declaring a local SysvSem because it needs to share the object with the Thread, which it does via the global SysvSem pointer psem. This means that we have to take responsibility for deleting the object. Since all paths exit after catch, we can just put the delete there. Alternatively, we could have an automatic SysvSem in both this function and the Thread, with the ID passed from here to the Thread via a global int.
The purpose of demoProcYield is to demonstrate that when a proc posts to a System V semaphore on which another proc is waiting, the posting proc yields to the waiter. This coincidentally demonstrates use of the PosMem class wrapper for POSIX shared memory.
If the memory file already exists, it is left over from an aborted execution. Since we know that we are the only creator, we could simply assign mem.mCreator true without checking. However, as part of this demonstration, we check and report. To exercise this, create the leftover by selecting this demo and aborting with ^C. Note the svSemMem file in /dev/shm. Then run the program again. The file disappears as soon as the demo finishes; it doesn't hang around until the program exits. The message does not appear after the first execution of the demo.
Global memory created from POSIX shm_open via the PosMem class is used to convey the semaphore ID to Slave (svsema) and for Master and Slave to exchange state information to determine the winner at the time of posting. Master and Slave both embed the name of the memory file (memName) and its structure, SharedMem. The memory is essential for Master and Slave to exchange state information, but it is not needed just to use the semaphore. Master could pass the semaphore ID to Slave (program svsema) as a command argument; shared memory is used for demonstration. If svsema could be invoked independently, the shared-memory means would be required.
The purpose of demoCreation is to test the value repetition feature of the SysvSem constructor. SysvSem has no class function to read semaphore values because this is not typical and such a function would afford no convenience. To read back the values for testing, semctl is invoked directly, which requires special includes and definitions, as explained in the module notes above. The value repetition feature, like all SysvSem features, requires only including svUtil.h.
Three sets of 10 semaphores are created (one at a time) with all values default 0 by the constructor arguments (10, -1); with all values 1 by arguments (10, 1, -1); and with the values 3, 2, 1, 0... by the arguments (10, 3, 2, 1, 0, -1)
This demonstrates operating on multiple semaphores and on values larger than 1. Both of these kinds of operations require direct calls to semop, but the SysvSem class still provides the basic functions. These tests demonstrate how to mix SysvSem with direct semop for unusual operations. They also show that the yielding behavior extends consistently, that is, the posting thread yields when all of the conditions specified by the waiting thread are met. The first test shows yield to a thread waiting for a value greater than 1 only when that condition is met; multiValThread waits for the semaphore 0 value to be at least 3.
multiSemThread demonstrates yield to a thread waiting for multiple semaphores (0, 2, and 5) to each have a value of at least 1.
This shows that a poster thread yields to a waiter when all of the specified wait conditions are met. Demonstrated for thread waiting for a value greater than 1 on one semaphore and for a thread waiting for multiple semaphores to be signaled (value at least 1).
mutsem.cpp #include <sys/stat.h> #include <sys/select.h> #include <sys/mman.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <string.h> #include <pthread.h> #include <semaphore.h> #include <fcntl.h> #include <ctype.h> #include <errno.h> #include <time.h> // #include <sched.h> for sched_yield, which doesn't help mutex anyway. #include "cdefs.h" #include "conUtil.h" char msg[100]; char semName[] = "/MySemaphore"; sem_t gUnSem; pthread_mutex_t gMutex = PTHREAD_MUTEX_INITIALIZER; void sleepMs( ULONG ms ) { struct timeval tv = { 0, ms * 1000 }; select( 0, 0, 0, 0, &tv ); } sem_t* createNamedSem( UINT val ) { sem_t *ps; printf( "Creating a semaphore named %s\n", semName ); ps = sem_open( semName, O_CREAT, S_IRUSR | S_IWUSR, val ); if( ps == SEM_FAILED ) { perror( 0 ); return 0; } return ps; } void unlinkNamedSem( void ) { if( sem_unlink( semName ) != 0 ) perror( 0 ); } void demoNamedSem( void ) { sem_t* ps; printf( "Demonstrate named semaphore\n" ); printf( "First, simple creation and destruction of \"MySemaphore\" in /dev/shm\n" ); printf( "Please open a window on /dev/shm\n" ); ConPromptAny(); ps = createNamedSem(0); if( ps == 0 ) return; ConPrompt( "Press any key to delete the semaphore\n" ); unlinkNamedSem(); ConPromptAny(); // For this next test, the semaphore is created unsignaled // (sem_open value = 0). The slave cannot change this when // it calls sem_open. This works regardless of actual // values. Only the first call to sem_open for a given // semaphore name can set the value. ConWidPrint( "Now the same semaphore is created and used to \ communicate with another process.\n" ); ps = createNamedSem(0); ConWidPrint( "Master: Launching mutsema in BG\n" ); system( "./mutsema 0 &" ); // Invoke Slave to run in bg, i.e. // concurrently with Master. sleep(1); // For Slave begin: Creating a semaphore named // /MySemaphore and Slave: waiting on semaphore. ConWidPrint( "Master: I'm going to unlink the semaphore, causing its \ file to disappear. However, the semaphore itself persists \ until the Slave also unlinks from it.\n" ); ConPromptAny(); unlinkNamedSem(); ConPrompt( "Master: press any key and I will unlock the semaphore.\n" ); printf( "Master: Unlocking the semaphore\n" ); sem_post( ps ); sleep(1); // For Slave: somebody woke me up. printf( "Master: Now I'm going to wait on the semaphore\n" ); sem_wait( ps ); printf( "Master: Slave just woke me up.\n" ); sleep(1); // For Slave: end message // At this point the semaphore should not exist but // testing shows that both sem_post and sem_wait do // not cause a segment fault. printf( "Master: end\n" ); sleep(1); // To be sure menu displays last. 
} void* semCountThread( void *arg ) { pthread_detach( pthread_self()); printf( "Begin thread %d\n", (int)arg ); sem_wait( &gUnSem ); printf( "End thread %d\n", (int)arg ); return 0; } void demoSemValue( void ) { int val; pthread_t pt; ConWidPrint( "Demonstrate counting semaphore value.\n" ); sem_init( &gUnSem, 0, 0 ); if( sem_getvalue( &gUnSem, &val ) < 0 ) { perror( "DemoSemValue" ); return; } printf( "The initial value is %d\n", val ); sem_post( &gUnSem ); sem_post( &gUnSem ); sem_post( &gUnSem ); sem_getvalue( &gUnSem, &val ); printf( "After 3 calls to sem_post, the value is %d\n", val ); sem_wait( &gUnSem ); sem_getvalue( &gUnSem, &val ); printf( "After 1 call to sem_wait, the value is %d\n", val ); sem_wait( &gUnSem ), sem_wait( &gUnSem ); sem_getvalue( &gUnSem, &val ); printf( "After 2 more calls to sem_wait, the value is %d\n", val ); printf( "I'm going to launch 4 threads, all of which wait on the semaphore.\n" ); for( int idx = 0 ; idx < 4 ; idx++ ) pthread_create( &pt, 0, semCountThread, (void*)idx ); sleep(1); sem_getvalue( &gUnSem, &val ); printf( "The value is %d\n", val ); for( int idx = 0 ; idx < 4 ; idx++ ) { printf( "Calling sem_post\n" ); sem_post( &gUnSem ); sleep(1); } sem_destroy( &gUnSem ); } ULONG delayThread[] = { 100, 100, 400, 200 }; ULONG delayMain[] = { 300, 200, 200, 200 }; char accRec[10]; // Access record. Each character is '0' for main or '1' for // thread. 0 and 1 are used instead of M and T because they are more easily // distinguished when the record is printed. int accIdx = 0; // Index to accRec. int exclUse; // 0 = use mutex, 1 = use semaphore char* exclName[2] = { (char*)"mutex", (char*)"semaphore" }; void enterExcl( void ) { if( exclUse == 0 ) pthread_mutex_lock( &gMutex ); else sem_wait( &gUnSem ); } void leaveExcl( void ) { if( exclUse == 0 ) pthread_mutex_unlock( &gMutex ); else sem_post( &gUnSem ); } void* exclThread( void* arg ) { for( int cnt = 0 ; cnt < 4 ; cnt++ ) { sleepMs( delayThread[ cnt ]); enterExcl(); accRec[ accIdx++ ] = '1'; printf( "Thread: I now have access.\n" ); printf( "The message is %s\n", msg ); sprintf( msg, "Thread's count is %d", cnt ); leaveExcl(); } printf( "Thread: done\n" ); return 0; } void demoExclThread( void ) { pthread_t tid; ConWidPrint( "Demonstrate thread exclusion by mutex and by semaphore\n" ); sprintf( msg, "Initialized" ); pthread_mutex_init( &gMutex, 0 ); sem_init( &gUnSem, 0, 1 ); for( exclUse = 0 ; exclUse < 2 ; exclUse++ ) { accIdx = 0; printf( "Main: starting thread for exclusion by %s\n", exclName[ exclUse ]); pthread_create( &tid, 0, exclThread, 0 ); for( int cnt = 0 ; cnt < 4 ; cnt++ ) { sleepMs( delayMain[ cnt ]); enterExcl(); accRec[ accIdx++ ] = '0'; printf( "Main: I now have access.\n" ); printf( "The message is %s\n", msg ); sprintf( msg, "Main's count is %d", cnt ); leaveExcl(); } printf( "Main: done\n" ); pthread_join( tid, 0 ); printf( "The last message is %s\n", msg ); accRec[ accIdx ] = 0; printf( "The access sequence was %s\n", accRec ); } } #define MEMSIZE 100 char memName[] = "/MyMemory"; void demoProcMutex( void ) { int mfd; union { void* v; UCHAR* uc; char* c; pthread_mutex_t* m; } mem; ConWidPrint( "Demonstrate POSIX mutex shared by procs\n" ); printf( "Master: opening shared memory file \"%s\".\n", memName ); mfd = shm_open( memName, O_RDWR | O_CREAT, S_IREAD | S_IWRITE ); // At this point, see MyMemory file under /dev/shm. ls shows attributes // -rw------- and length 0 and GUI calls it a plain text document. 
if( mfd < 0 ) { printf( "Mutsem: shm_open failed: %s\n", strerror( errno )); return; } if( ftruncate( mfd, MEMSIZE ) != 0 ) { perror( "mutsem ftruncate" ); goto ex1; } ConWidPrint( "Master: mapping the memory file into my own memory space\n" ); mem.v = mmap( 0, MEMSIZE, PROT_READ | PROT_WRITE, MAP_SHARED, mfd, 0 ); // At this point, ls shows MyMemory size as 100 (MEMSIZE). if( mem.v == MAP_FAILED ) { perror( "mutsem mmap" ); goto ex1; } // At this point, MyMemory looks just like a real file in a // bash shell. If we write some text into it here, at the // shell we can type cat MyMemory and see that text. If we // echo some text into it at the shell prompt, we can read // that back here in the program. //sprintf( mem.c, "Hi from master" ); //goto ex1; //printf( "I read back %s\n", mem.c ); pthread_mutexattr_t attr; pthread_mutexattr_init( &attr ); pthread_mutexattr_setpshared( &attr, PTHREAD_PROCESS_SHARED); ConWidPrint( "Master: initializing process-shared mutex in shared memory\n" ); pthread_mutex_init( mem.m, &attr ); ConWidPrint( "Master: If one process is waiting for the \ mutex and the other unlocks and relocks without yielding, \ Linux/POSIX does not grant the mutex to the waiting process. \ First, we will show correct operation with the holder \ explicitly yielding to allow the waiting process to lock.\n" ); printf( "Master: locking mutex\n" ); pthread_mutex_lock( mem.m ); ConWidPrint( "Master: spawning mutsema to run in its own shell in BG.\n" ); // 1 signpost system( "./mutsema 1 &" ); // Note mutsema BG has no KB input but it can print. sleepMs(200); // Let Slave print its messages so the last // line is our "press any key". ConPrompt( "Master: press any key and I will unlock the mutex.\n" ); // 4 signpost pthread_mutex_unlock( mem.m ); // I found that when unlock is followed by lock without giving up its time // slot, this process is given the lock ahead of the other process, which is // already waiting. sleepMs(50); // sched_yield(); ConWidPrint( "Master: now I will try again to lock the mutex.\n" ); // 7 signpost pthread_mutex_lock( mem.m ); ConWidPrint( "Master: I now have the lock.\n" ); sleep(1); ConWidPrint( "Master: Now we show that the unlocking process does not yield to \ a process waiting for the mutex. Slave is now waiting while I \ have the mutex. I will unlock and then lock it again without \ deliberately yielding.\n" ); printf( "Master: I will now unlock and relock the mutex.\n" ); pthread_mutex_unlock( mem.m ); pthread_mutex_lock( mem.m ); ConWidPrint( "Master: I now have the lock. If the preceding message is not from Slave \ it means that I didn't yield to Slave even though it was waiting. Now I'm \ going to unlock the mutex to let Slave finish.\n" ); pthread_mutex_unlock( mem.m ); printf( "Master: now I'm destroying the mutex\n" ); pthread_mutex_destroy( mem.m ); munmap( mem.v, MEMSIZE ); ex1: close( mfd ); // At this point both the Master and Slave have closed // their shared memory handles. However, the file and // memory still exist. In a bash shell, we can echo into it // and cat out of it. Therefore, we should unlink it. // Otherwise it continues to consume resources until the // computer reboots. shm_unlink( memName ); printf( "Master: done\n" ); sleep(1); // To be sure that the menu is at the end of display. } void* mutexYieldThread( void* arg ) { printf( "Thread: I'm going to wait for mutex.\n" ); pthread_mutex_lock( &gMutex ); printf( "Thread: return from lock. 
Now I have the mutex.\n" ); pthread_mutex_unlock( &gMutex ); printf( "Thread: done.\n" ); return 0; } void* semYieldThread( void* arg ) { printf( "Thread: I'm going to wait for semaphore.\n" ); sem_wait( &gUnSem ); printf( "Thread: return from sem wait. Now I have the lock.\n" ); printf( "Thread: unlocking by sem post.\n" ); sem_post( &gUnSem ); printf( "Thread: done.\n" ); return 0; } void demoThreadYield( void ) { pthread_t tid; ConWidPrint( "This demonstrates that with a POSIX thread mutex or semapohre, \ an unlocking thread does not yield to another thread waiting for the mutex.\n" ); printf( "Main: initializing and locking mutex.\n" ); pthread_mutex_init( &gMutex, 0 ); pthread_mutex_lock( &gMutex ); printf( "Main: launching thread.\n" ); pthread_create( &tid, 0, mutexYieldThread, 0 ); sleep(1); printf( "Main: Now I'm going to unlock and relock the mutex.\n" ); pthread_mutex_unlock( &gMutex ); pthread_mutex_lock( &gMutex ); ConWidPrint( "Main: I have locked the mutex again. If this message is \ preceded by one from Thread, it means a coincidental yield \ allowed Thread to gain the semphore. Otherwise I have regained \ it because I wasn't forced to yield.\n" ); printf( "Main: Now I will unlock it again and give Thread a chance to finish.\n" ); pthread_mutex_unlock( &gMutex ); pthread_join( tid, 0 ); pthread_mutex_destroy( &gMutex ); ConWidPrint( "Main: next we demonstrate the same behavior with unnamed semaphore \ used like mutex (sem post = mutex unlock, sem wait = mutex lock).\n" ); printf( "Main: initializing unsignaled semaphore.\n" ); sem_init( &gUnSem, 0, 0 ); printf( "Main: launching thread\n" ); pthread_create( &tid, 0, semYieldThread, 0 ); sleep(1); ConWidPrint( "Main: Now I'm going to unlock and relock by posting \ and waiting on the semaphore.\n" ); sem_post( &gUnSem ); sem_wait( &gUnSem ); ConWidPrint( "Main: return from sem wait. If this message is preceded by Thread \ message, it is due to coincidental yield. 
If not, I did not yield \ to the waiting Thread.\n" ); printf( "Main: signaling semaphore to release Thread\n" ); sem_post( &gUnSem ); printf( "Main: destroying semaphore\n" ); sem_destroy( &gUnSem ); pthread_join( tid, 0 ); printf( "Main: done\n" ); } sem_t usem1; sem_t usem2; void* altThread( void *arg ) { ConWidPrint( "Thread: Begin\n" ); for( int cnt = 1 ; cnt < 5 ; cnt++ ) { printf( "Thread: waiting on sem2\n" ); sem_wait( &usem2 ); printf( "Thread: the buffer contains \"%s\"\n", msg ); printf( "Thread: now I will write to the buffer and then signal sem1\n" ); sprintf( msg, "Thread: message %d", cnt ); sem_post( &usem1 ); } printf( "Thread: done\n" ); return 0; } void demoAltAccess( void ) { pthread_t thrd; ConWidPrint( "Demonstrate alternating thread access to a shared buffer using two \ semaphores, each serving as a guard and event signal.\n" ); ConWidPrint( "Main: First, I will initialize sem1, which controls me, as signaled \ and sem2, which controls the slave, as unsignaled.\n" ); sem_init( &usem1, 0, 1 ); sem_init( &usem2, 0, 0 ); sprintf( msg, "No message" ); ConWidPrint( "Main: now I will launch Thread\n" ); pthread_create( &thrd, 0, altThread, 0 ); sleep(1); for( int cnt = 1 ; cnt < 5 ; cnt++ ) { printf( "Main: waiting on sem1\n" ); sem_wait( &usem1 ); printf( "Main: the buffer contains \"%s\"\n", msg ); printf( "Main: now I will write to the buffer and then signal sem2\n" ); sprintf( msg, "Main: message %d", cnt ); sem_post( &usem2 ); } // Ignore the last message from Thread pthread_join( thrd, 0 ); sem_destroy( &usem1 ); sem_destroy( &usem2 ); printf( "Main: done\n" ); return; } int main( int argc, char **argv ) { int ch; conOutPrep(); conSetRawInput( true ); while(1) { ConWidPrint( "Press Q=quit, N=named semaphore, V=sem value, X=excl thread, \ P=proc mutex, Y=thread yield, A=alternating access\n"); ch = ConPromptNoMsg(); switch( toupper( ch )) { case 'Q': case EOF: conSetRawInput( false ); return 0; case 'N': demoNamedSem(); break; case 'V': demoSemValue(); break; case 'X': demoExclThread(); break; case 'P': demoProcMutex(); break; case 'Y': demoThreadYield(); break; case 'A': demoAltAccess(); break; default: break; } } return 0; }
POSIX semaphore and mutex are Mesa type; they do not force the signaler (sem post or mutex unlock) to yield to a waiter. Although man sched_yield says that this function should be called after releasing a heavily contended resource, e.g. a mutex, in these tests it does not reliably hand the lock to the waiter. In any case, it would be expensive to call it even when no one is waiting, and POSIX semaphore and mutex afford no means of determining whether anyone is waiting. In contrast, System V semaphores can tell how many threads or processes are waiting, but the signaler doesn't need this information anyway because posting to one of these semaphores forces a yield if anyone is waiting. See svsem.cpp and svsema.cpp for similar demos based on System V semaphores.
Normally, semaphore is used for one process to alert another while mutex is used to guard a resource to allow it to be accessed by only one process at a time. Both semaphore and mutex support a waiting process and a signaling process. sem_wait and pthread_mutex_lock are equivalent operations, causing the caller to block until allowed to proceed by the signaling process. sem_post and pthread_mutex_unlock similarly signal a semaphore or mutex, but differ significantly in other operational aspects.
The standard mutex and semaphore usages are obvious, but it is worth considering whether the reverse roles, a mutex for alerting and a semaphore for guarding, might also be useful.
Semaphore is effective for signaling because its release mechanism is atomically reset. The value returns to 0, blocking future waits, in the same indivisible step that unblocks the waiting process. If this process tries to wait again it will block even if the signaler has done nothing, and the signaler can immediately signal again. Thus, the two processes need no other synchronization.
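As a minimal sketch of this signaling pattern (illustrative only; the names and file signal_sketch.cpp are not part of the demo programs), one thread blocks on an unnamed semaphore created unsignaled and another posts it:
// signal_sketch.cpp (illustrative) -- build: g++ signal_sketch.cpp -lpthread
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
sem_t gEvent;                               // the event; created unsignaled (value 0)
void* waiter( void* )
{
    printf( "Waiter: blocking on the event\n" );
    sem_wait( &gEvent );                    // blocks until the signaler posts
    printf( "Waiter: woken exactly once; the value is 0 again\n" );
    return 0;
}
int main()
{
    pthread_t tid;
    sem_init( &gEvent, 0, 0 );              // pshared = 0 (threads), initial value 0
    pthread_create( &tid, 0, waiter, 0 );
    sleep( 1 );                             // let the waiter block first (not required)
    sem_post( &gEvent );                    // wake the waiter; its sem_wait consumes the count
    pthread_join( tid, 0 );
    sem_destroy( &gEvent );
    return 0;
}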
Mutex is not effective for signaling because its release mechanism is not atomically reset; it depends on the unblocked process unlocking the mutex to reset the mechanism. If this process unlocks when the signaler is not ready and then waits for the next event, it will immediately regain the lock even though no actual event has been signaled. Even if this problem could be overcome by ensuring that the signaler is already waiting when the signalee unlocks (and that the unlocker yields, which a POSIX mutex does not force), any such scheme would constrain the signaler to spend most of its time blocked waiting for the mutex.
Mutex is effective for guarding for the same reason that it is not effective for signaling: the guard mechanism is not reset until the process that has gained the lock releases it. Importantly, guarding does not ensure against successively granting the lock to the same process. A semaphore can also be used for guarding. The semaphore is created signaled (value = 1). The first process to wait on it is immediately granted access and any others then block. When the process is done, it signals (sem_post) and any process, including itself, can then gain it, although waiting processes have priority.
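A minimal sketch of the two guarding forms side by side (illustrative names, not taken from the demo code); the semaphore version differs only in being created with value 1 and having no notion of an owner:
// guard_sketch.cpp (illustrative) -- build: g++ guard_sketch.cpp -lpthread
#include <pthread.h>
#include <semaphore.h>
pthread_mutex_t gGuard = PTHREAD_MUTEX_INITIALIZER;
sem_t gBinSem;                              // initialized to 1 in main
int gShared;                                // the guarded resource
void guardedByMutex( void )
{
    pthread_mutex_lock( &gGuard );          // only the locking thread may unlock
    gShared++;
    pthread_mutex_unlock( &gGuard );
}
void guardedBySemaphore( void )
{
    sem_wait( &gBinSem );                   // 1 -> 0; any other waiter now blocks
    gShared++;
    sem_post( &gBinSem );                   // any thread could post, even one not holding it
}
int main()
{
    sem_init( &gBinSem, 0, 1 );             // created signaled so the first waiter gets in
    guardedByMutex();
    guardedBySemaphore();
    sem_destroy( &gBinSem );
    return 0;
}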
Ping-pong access to a shared resource can only be guaranteed by two semaphores, one created signaled and the other unsignaled. The initially signaled semaphore controls the process that must execute first to start the ping-pong; there is no race condition because the other process will be blocked. Assume that X is the first process. Its sem wait immediately returns and it does its task. It then signals Y's controlling semaphore, does whatever other work it has to do, if any, and waits again on its own semaphore. This time it will block. When Y finishes with the shared resource, it signals X's controlling semaphore. This cannot be done with a mutex, which can't be used for signaling at all.
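The ping-pong arrangement can be sketched as follows; this is a simplified, illustrative version of the same pattern that demoAltAccess and altThread implement, not the demo code itself:
// pingpong_sketch.cpp (illustrative) -- build: g++ pingpong_sketch.cpp -lpthread
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
sem_t gSemX;                                // controls X (main); created signaled
sem_t gSemY;                                // controls Y (thread); created unsignaled
char gBuf[ 64 ];
void* yThread( void* )
{
    for( int i = 1 ; i <= 3 ; i++ )
    {
        sem_wait( &gSemY );                 // wait for X to hand over the buffer
        printf( "Y sees \"%s\"\n", gBuf );
        sprintf( gBuf, "Y message %d", i );
        sem_post( &gSemX );                 // hand the buffer back to X
    }
    return 0;
}
int main()
{
    pthread_t tid;
    sem_init( &gSemX, 0, 1 );               // X runs first
    sem_init( &gSemY, 0, 0 );
    sprintf( gBuf, "start" );
    pthread_create( &tid, 0, yThread, 0 );
    for( int i = 1 ; i <= 3 ; i++ )
    {
        sem_wait( &gSemX );                 // blocks except on the very first pass
        printf( "X sees \"%s\"\n", gBuf );
        sprintf( gBuf, "X message %d", i );
        sem_post( &gSemY );                 // now it is Y's turn
    }
    pthread_join( tid, 0 );
    sem_destroy( &gSemX );
    sem_destroy( &gSemY );
    return 0;
}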
Conclusion: a mutex can only be used for guarding but a semaphore can be used for signaling or guarding and two semaphores afford ping-pong access to a shared resource, which is effectively guarding and signaling combined.
mutsem.cpp and mutsema.cpp demonstrate POSIX mutex and semaphore. Threads are used but that is not the purpose. For that see thread.cpp. Some of the demos involve threads and others processes. Processes are provided by mutsema.cpp. For examples of System V semaphore, see svsem.cpp.
BUILD
This must be compiled with -D_REENTRANT because it has a thread. It must be
linked with -lpthread or -lrt for thread and for semaphore functions.
NON-PORTABLE
pthread_tryjoin_np and pthread_timedjoin_np are non-portable GNU extensions.
pthread_mutex_timedlock may not be implemented in all OS versions.
SUPPORT PROGRAM
Named Semaphore (N) and Proc Mutex (P) demos require support program mutsema.
NAMED SEMAPHORE
Selected by N
This requires the Slave program mutsema. Named semaphores are created in shared
memory and can be accessed by multiple processes. Any number of processes can
call sem_open to link to a named semaphore. The first call creates it and only
this instance can assign it a value. When the semaphore is created, a
corresponding file appears under /dev/shm. Calling sem_unlink causes the file to
disappear, but the underlying semaphore remains until all processes that have it
open call sem_close or terminate.
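A minimal sketch of this lifetime, assuming the same semaphore name as the demo (everything else is illustrative):
// namedsem_sketch.cpp (illustrative) -- build: g++ namedsem_sketch.cpp -lpthread
#include <semaphore.h>
#include <fcntl.h>                          // O_CREAT
#include <sys/stat.h>                       // mode bits
#include <stdio.h>
int main()
{
    // The first open creates /dev/shm/sem.MySemaphore and sets the value (here 0).
    sem_t* ps = sem_open( "/MySemaphore", O_CREAT, S_IRUSR | S_IWUSR, 0 );
    if( ps == SEM_FAILED ) { perror( "sem_open" ); return 1; }
    // A later sem_open of the same name, in this or another process, attaches
    // to the existing semaphore; its value argument is ignored.
    sem_unlink( "/MySemaphore" );           // the file disappears now ...
    sem_post( ps );                         // ... but the semaphore itself still works
    sem_close( ps );                        // destroyed when the last opener closes
                                            // it or terminates
    return 0;
}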
This first demonstrates the appearance and disappearance of the semaphore file
/dev/shm/sem.MySemaphore. The semaphore is not used for anything. Then the same
semaphore is created again and the slave program, mutsema, is spawned (in a
shell) to execute in background so that it and the master program execute
simultaneously. First Slave waits on the semaphore (sem_wait) and Master
releases it (sem_post). Then Master waits and Slave posts. This also
demonstrates that the semaphore file is not required for using the semaphore.
The file disappears when Master unlinks it but Master and Slave continue to
communicate via the semaphore.
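mutsema.cpp itself is not listed here. Purely as an illustration of the Slave's half of this handshake, and not the actual program, it could look roughly like this:
// slave_sketch.cpp (an illustrative guess, not mutsema.cpp)
#include <semaphore.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>
int main()
{
    printf( "Slave begin: Creating a semaphore named /MySemaphore\n" );
    // Master created the semaphore first, so this call merely attaches to it;
    // the value argument (0) is ignored.
    sem_t* ps = sem_open( "/MySemaphore", O_CREAT, S_IRUSR | S_IWUSR, 0 );
    if( ps == SEM_FAILED ) { perror( "sem_open" ); return 1; }
    printf( "Slave: waiting on semaphore.\n" );
    sem_wait( ps );                         // released by Master's sem_post
    printf( "Slave: somebody woke me up.\n" );
    sleep( 3 );                             // the real Slave counts this down aloud
    sem_post( ps );                         // now wake Master
    printf( "Slave: end\n" );
    sem_close( ps );
    return 0;
}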
Output Trace:
Demonstrate named semaphore creation and destruction. /dev/shm/sem.
MySemaphore appears on creation and disappears on destruction. The semaphore is
not used for anything. Please open a window on /dev/shm
Press any key to continue.
Creating a semaphore named /MySemaphore
Press any key to delete the semaphore
Press any key to continue.
Now the same semaphore is created and used to communicate with another process.
Creating a semaphore named /MySemaphore
Slave begin: Creating a semaphore named /MySemaphore
Slave: waiting on semaphore.
Master: I'm going to unlink the semaphore, causing its file to disappear.
However, the semaphore itself persists until the Slave also unlinks from it.
Press any key to continue.
Master: press any key and I will unlock the semaphore.
Master: Unlocking the semaphore
Slave: somebody woke me up.
Master: Now I'm going to wait on the semaphore
Slave: In 3 seconds I'm going to wake Master. 1 2 3
Master: Slave just woke me up.
Slave: end
Master: end
SEMAPHORE VALUE DEMO
Selected by V
Creates an unnamed semaphore and shows its value increasing with each sem_post
and decreasing with each sem_wait. Then, with value 0, creates several threads
that wait on it, then shows that each post releases one thread, first-come-first-served.
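A minimal sketch of the counting behavior itself, independent of the demo code (names are illustrative):
// semvalue_sketch.cpp (illustrative) -- build: g++ semvalue_sketch.cpp -lpthread
#include <semaphore.h>
#include <stdio.h>
int main()
{
    sem_t s;
    int val;
    sem_init( &s, 0, 0 );                   // start at 0
    sem_post( &s );                         // value becomes 1
    sem_post( &s );                         // value becomes 2
    sem_getvalue( &s, &val );
    printf( "After two posts the value is %d\n", val );     // 2
    sem_wait( &s );                         // does not block because the value is > 0
    sem_getvalue( &s, &val );
    printf( "After one wait the value is %d\n", val );      // 1
    sem_destroy( &s );
    return 0;
}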
Output Trace:
Demonstrate counting semaphore value.
The initial value is 0
After 3 calls to sem_post, the value is 3
After 1 call to sem_wait, the value is 2
After 2 more calls to sem_wait, the value is 0
I'm going to launch 4 threads, all of which wait on the semaphore.
Begin thread 0
Begin thread 1
Begin thread 2
Begin thread 3
The value is 0
Calling sem_post
End thread 0
Calling sem_post
End thread 1
Calling sem_post
End thread 2
Calling sem_post
End thread 3
THREAD EXCLUSIVE ACCESS
Selected by X
This demonstrates the standard use of mutex to grant exclusive access to a
shared resource and also the non-standard use of semaphore to do the same thing.
Main and Thread both compete for repeated access to the shared buffer and,
coincidentally, to the display. Each is delayed between accesses by a time
designed either to allow it another access (i.e. two in a row) or to make it
lose control to the other.
Output Trace:
Demonstrate thread exclusion by mutex and by semaphore
Main: starting thread for exclusion by mutex
Thread: I now have access.
The message is Initialized
Thread: I now have access.
The message is Thread's count is 0
Main: I now have access.
The message is Thread's count is 1
Main: I now have access.
The message is Main's count is 0
Thread: I now have access.
The message is Main's count is 1
Main: I now have access.
The message is Thread's count is 2
Thread: I now have access.
The message is Main's count is 2
Thread: done
Main: I now have access.
The message is Thread's count is 3
Main: done
The last message is Main's count is 3
The access sequence was 11001010
Main: starting thread for exclusion by semaphore
Thread: I now have access.
The message is Main's count is 3
Thread: I now have access.
The message is Thread's count is 0
Main: I now have access.
The message is Thread's count is 1
Main: I now have access.
The message is Main's count is 0
Thread: I now have access.
The message is Main's count is 1
Main: I now have access.
The message is Thread's count is 2
Thread: I now have access.
The message is Main's count is 2
Thread: done
Main: I now have access.
The message is Thread's count is 3
Main: done
The last message is Main's count is 3
The access sequence was 11001010
PROC MUTEX DEMO
Selected by P
This demo requires the Slave program mutsema. Demonstrates a (global) mutex
used between processes. The same functionality can be achieved with a named
semaphore used as a mutex by creating it signaled, as demonstrated for threads
by demoExclThread. However, a mutex may be more efficient than a semaphore,
depending on CPU and system. This mainly shows how to create and use global
memory. Both proc-local and global mutex are simply memory locations treated as
mutex using the same functions in both cases. This also demonstrates that, with
a POSIX mutex, an unlocking process does not yield to a waiting process. Handing
over the lock requires an explicit yield by the unlocker, and sched_yield, which
POSIX recommends specifically for this purpose, does not do the job.
Executing demoProcMutex reveals that if a process unlocks and then relocks a
mutex without giving up its time slot, this process is given the lock ahead of
the other process, which is already waiting. printf is sometimes sufficient to
give up the time slot but sometimes not, probably depending on whether the other
process is printing; sleep seems reliable. POSIX suggests calling sched_yield specifically for this
situation but it doesn’t yield soon enough to prevent the relock. It appears to
increase the chance of coincidental yield but not guarantee yield.
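Stripped of the interaction with the Slave, the shared-memory setup that demoProcMutex performs can be sketched as follows (illustrative; error handling trimmed and the demo's 100-byte buffer reduced to just the mutex):
// procmutex_sketch.cpp (illustrative) -- build: g++ procmutex_sketch.cpp -lpthread -lrt
#include <pthread.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
int main()
{
    // 1. Create (or open) a shared memory object; it appears under /dev/shm.
    int fd = shm_open( "/MyMemory", O_RDWR | O_CREAT, S_IRUSR | S_IWUSR );
    if( fd < 0 ) { perror( "shm_open" ); return 1; }
    ftruncate( fd, sizeof( pthread_mutex_t ));       // give it a size
    // 2. Map it into this process; another process maps the same name.
    void* p = mmap( 0, sizeof( pthread_mutex_t ), PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0 );
    if( p == MAP_FAILED ) { perror( "mmap" ); return 1; }
    pthread_mutex_t* m = (pthread_mutex_t*)p;
    // 3. One process initializes the mutex with the process-shared attribute.
    pthread_mutexattr_t attr;
    pthread_mutexattr_init( &attr );
    pthread_mutexattr_setpshared( &attr, PTHREAD_PROCESS_SHARED );
    pthread_mutex_init( m, &attr );
    // 4. Any process that mapped the object can now lock and unlock it.
    pthread_mutex_lock( m );
    pthread_mutex_unlock( m );
    pthread_mutex_destroy( m );
    munmap( p, sizeof( pthread_mutex_t ));
    close( fd );
    shm_unlink( "/MyMemory" );              // otherwise it persists until reboot
    return 0;
}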
Output Trace:
Demonstrate POSIX mutex shared by procs
Master: opening shared memory file "/MyMemory".
Master: mapping the memory file into my own memory space
Master: initializing process-shared mutex in shared memory
Master: If one process is waiting for the mutex and the other unlocks and
relocks without yielding, Linux/POSIX does not grant the mutex to the waiting
process. First, we will show correct operation with the holder explicitly
yielding to allow the waiting process to lock.
Master: locking mutex
Master: spawning mutsema to run in its own shell in BG.
Slave: opening shared memory file "/MyMemory"
Slave: mapping the memory file into my own memory space
Slave: I'm now going to try to lock the mutex in shared memory.
Master: press any key and I will unlock the mutex.
Slave: now I have the lock.
Slave: I'm going to sleep for a second.
Master: now I will try again to lock the mutex.
Slave: I'm now going to unlock the mutex.
Master: I now have the lock.
Slave: now I'm going to try to lock it again.
Master: Now we show that the unlocking process does not yield to a process
waiting for the mutex. Slave is now waiting while I have the mutex. I will
unlock and then lock it again without deliberately yielding.
Master: I will now unlock and relock the mutex.
Master: I now have the lock. The waiting Slave didn't get the lock. Now I'm
going to unlock the mutex to let Slave finish.
Master: now I'm destroying the mutex
Slave: return from lock. So now I have it.
Slave: done
Master: done
DEMO THREAD YIELD
Selected by Y
Shows that POSIX thread-mutex and unnamed semaphore, like proc-mutex, do not
cause the unlocking thread to yield to another thread waiting for the mutex or
semaphore.
Output Trace:
This demonstrates that with a POSIX thread mutex or semapohre, an unlocking
thread does not yield to another thread waiting for the mutex.
Main: initializing and locking mutex.
Main: launching thread.
Thread: I'm going to wait for mutex.
Main: Now I'm going to unlock and relock the mutex.
** In this case, the signaler (Main) coincidentally yields **
Thread: return from lock. Now I have the mutex.
Thread: done.
Main: I have locked the mutex again. If this message is preceded by one from
Thread, it means a coincidental yield allowed Thread to gain the semphore.
Otherwise I have regained it because I wasn't forced to yield.
Main: Now I will unlock it again and give Thread a chance to finish.
** In this case, the signaler doesn't yield to the waiter **
Main: Now I'm going to unlock and relock the mutex.
Main: I have locked the mutex again. If this message is preceded by one from
Thread, it means a coincidental yield allowed Thread to gain the semphore.
Otherwise I have regained it because I wasn't forced to yield.
Main: Now I will unlock it again and give Thread a chance to finish.
Thread: return from lock. Now I have the mutex.
Thread: done.
*****
Main: next we demonstrate the same behavior with unnamed semaphore used like
mutex (sem post = mutex unlock, sem wait = mutex lock).
Main: initializing unsignaled semaphore.
Main: launching thread
Thread: I'm going to wait for semaphore.
Main: Now I'm going to unlock and relock by posting and waiting on the semaphore.
**** In this case, the signaler (Main) coincidentally yields ****
Thread: return from sem wait. Now I have the lock.
Thread: unlocking by sem post.
Thread: done.
Main: return from sem wait. If this message is preceded by Thread message, it
is due to coincidental yield. If not, I did not yield to the waiting Thread.
Main: signaling semaphore to release Thread
Main: destroying semaphore
** In this case, the signaler doesn't yield **
Main: Now I'm going to unlock and relock by posting and waiting on the semaphore.
Main: return from sem wait. If this message is preceded by Thread message, it
is due to coincidental yield. If not, I did not yield to the waiting Thread.
Main: signaling semaphore to release Thread
Main: destroying semaphore
Thread: return from sem wait. Now I have the lock.
Thread: unlocking by sem post.
Thread: done.
Main: done
ALTERNATING ACCESS DEMO
Selected by A
This demonstrates using two semaphores to control a strictly alternating
conversation between two threads. Each semaphore functions as both guard and
alert. The thread that currently has access to the shared buffer writes into
it and then posts the other thread's control semaphore. It then waits on its
own control semaphore. The other thread, which is waiting (or will wait) on its
control semaphore, wakes up, reads the buffer, writes its own message, and posts
the first thread's control semaphore.
Output Trace:
Demonstrate alternating thread access to a shared buffer using two semaphores,
each serving as a guard and event signal.
Main: First, I will initialize sem1, which controls me, as signaled and sem2,
which controls the slave, as unsignaled.
Main: now I will launch Thread
Thread: Begin
Thread: waiting on sem2
Main: waiting on sem1
Main: the buffer contains "No message"
Main: now I will write to the buffer and then signal sem2
Thread: the buffer contains "Main: message 1"
Thread: now I will write to the buffer and then signal sem1
Thread: waiting on sem2
Main: waiting on sem1
Main: the buffer contains "Thread: message 1"
Main: now I will write to the buffer and then signal sem2
Thread: the buffer contains "Main: message 2"
Thread: now I will write to the buffer and then signal sem1
Thread: waiting on sem2
Main: waiting on sem1
Main: the buffer contains "Thread: message 2"
Main: now I will write to the buffer and then signal sem2
Thread: the buffer contains "Main: message 3"
Thread: now I will write to the buffer and then signal sem1
Thread: waiting on sem2
Main: waiting on sem1
Main: the buffer contains "Thread: message 3"
Main: now I will write to the buffer and then signal sem2
Thread: the buffer contains "Main: message 4"
Thread: now I will write to the buffer and then signal sem1
Thread: done
Main: done
Demonstrates named semaphore first by simply creating and unlinking it to show the coming and going of the associated file. Then the semaphore is created again and used to synchronize the execution of this function and the separate program mutsema, which we launch via system to run simultaneously (in background). The mutsema program is dedicated to this demo.
sem_open creates the semaphore file in /dev/shm. sem_unlink removes this file without affecting the underlying semaphore, which is not deleted until all processes that have opened it have either called sem_close or terminated. If we call sem_open here and immediately call sem_unlink, the file would disappear without any effect on program operation. However, mutsema also must call sem_open, which would recreate the file. To demonstrate semaphore independence from the file, after launching mutsema, we call sem_unlink and finish the demo with the file missing. If we didn’t call sem_unlink at all, the file would remain but only until the computer reboots or it is explicitly deleted by rm command or sem_unlink function.
Illustrates sem_getvalue and the effect of a counting semaphore. Shows the value increasing with each sem_post and decreasing with each sem_wait. Then launches threads that wait on the semaphore and shows the release of threads first-come-first-served with each call to sem_post.
This demonstrates thread exclusion by mutex and by semaphore. demoExclThread ("main") launches exclThread ("thread") twice, first for mutex and then for semaphore. The procedure is the same in both cases, but enterExcl and leaveExcl use the mutex or the semaphore based on the global exclUse value. demoExclThread and exclThread have varying (millisecond) sleeps in their loops to control which one gets access. Increasing one's delay increases the chance that the other will enter the exclusion first. The most informative sequence has instances of each thread getting access twice in a row as well as interleaving. This delay set produces a main-thread access sequence of 11001010 (ttmmtmmt).
thread.cpp #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <string.h> #include <pthread.h> #include <semaphore.h> #include <sys/stat.h> #include <fcntl.h> #include <ctype.h> #include <errno.h> #include <time.h> #include "cdefs.h" #include "conUtil.h" char message[100]; sem_t unameSem; pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; void* basicThread( void *arg ) { static char threadExitMsg[] = "Thread exit message"; printf( "Thread: Begin\n" ); sprintf( (char*)arg, "Thread message through arg" ); // Delay here affords Main a chance to print its message before we // print our prompt message but not delaying may provide a better // sense of dispatch uncertainty. sleep(2); conPrompt( (char*)"Thread: press any key to terminate me.\n" ); printf( "Thread: bye\n" ); pthread_exit( (void*)threadExitMsg ); printf( "Thread func-- Why am I still here\n" ); } void demoBasicThread( void ) { pthread_t thrd; void* threadRet; conWidPrint( (char*)"Basic thread demo. Main creates a thread, passing the address \ of a buffer to the thread function, which writes a message into the \ buffer via the argument. Meanwhile, Main tells that the thread has \ been created and then joins the thread. On keypress, Thread \ terminates, returning a message, which Main prints when join returns.\n" ); if( pthread_create( &thrd, 0, basicThread, (void*)message ) != 0 ) { perror( "thread create" ); return; } printf( "Main: Thread creation ok\n" ); sleep(1); // Give Thread's first message a chance to interleave. printf( "Main: Waiting for thread to finish\n" ); if( pthread_join( thrd, &threadRet ) != 0 ) { perror( "thread join" ); return; } printf( "Main: return from join. Thread returned \"%s\", \ wrote into shared buffer \"%s\"\n", (char*)threadRet, message ); } int tryJoin( pthread_t thrd, int secs ) { int cnt; int stat; for( cnt = 1 ; ; ) { stat = pthread_tryjoin_np( thrd, 0 ); switch( stat ) { case 0: printf( "Main: join succeeded.\n" ); break; case EBUSY: printf( "Main: join attempt %d failed.\n", cnt ); if( ++cnt > secs ) break; sleep(1); continue; default: errno = stat; perror( "Main" ); break; } return stat; } } void* joinThread( void *arg ) { if( arg == (void*)2 ) { pthread_detach( pthread_self()); printf( "Thread: begin self-detached\n" ); } else printf( "Thread: begin normal attached thread.\n" ); sleep(1); printf( "Thread: I'm done.\n" ); return 0; } void* joinTimeThread( void *arg ) { printf( "Thread: begin\n" ); for( int cnt = 0 ; cnt < 5 ; cnt++ ) { printf( "Thread: iteration %d\n", cnt ); sleep(1); } printf( "Thread: I'm done.\n" ); return 0; } void demoJoin( void ) { static char* announce[] = { (char*)"Main: Create normal thread and then join.\n", (char*)"Main: Create normal thread. Make unjoinable. Try to join.\n", (char*)"Main: Create thread that will make itself unjoinable.\n", }; static char* announce2[] = { (char*)"Main: create thread for successful timed join.\n", (char*)"Main: create the same thread for timed join with short timeout.\n", }; static int times[] = { 6, 3 }; struct timespec ts; pthread_t thrd; int stat; int idx; conWidPrint( (char*)"Join thread demo\n" ); for( idx = 0 ; idx < 3 ; idx++ ) { conWidPrint( announce[ idx ]); stat = pthread_create( &thrd, 0, joinThread, (void*)idx ); if( idx == 1 ) { printf( "Main: now I will detach the thread.\n" ); pthread_detach( thrd ); } printf( "Main: now I will try to join the thread.\n" ); if( tryJoin( thrd, 5 ) != 0 ) sleep(1); // Give the thread a chance to finish when join is // invalid. Otherwise, the display is interleaved. 
} for( idx = 0 ; idx < 2 ; idx++ ) { conWidPrint( announce2[ idx ]); stat = pthread_create( &thrd, 0, joinTimeThread, 0 ); clock_gettime( CLOCK_REALTIME, &ts ); ts.tv_sec += times[ idx ]; stat = pthread_timedjoin_np( thrd, 0, &ts ); printf( "Main: timedjoin returned %d. ", stat ); fflush( stdout ); switch( stat ) { case 0: printf( "join succeeded.\n" ); break; case EBUSY: printf( "the thread has not terminated.\n" ); break; case ETIMEDOUT: printf( "timed out.\n" ); // perror shows // this as "Connection timed out" break; default: errno = stat; perror(0); break; } } pthread_join( thrd, 0 ); // Wait for it to finish // to avoid display interleave. } void* canThread1( void *arg ) { printf( "Thread: Begin\n" ); if( arg != 0 ) pthread_setcancelstate( PTHREAD_CANCEL_DISABLE, 0 ); for( int cnt = 1 ; cnt < 7 ; cnt++ ) { printf( "Thread: my count is %d\n", cnt ); sleep(1); } printf( "Thread: I reached the end of my count so I'm self-terminating\n" ); return (void*)"Thread: this is my return message"; } void* canThread2( void* arg ) { printf( "Thread: Begin\n" ); pthread_setcancelstate( PTHREAD_CANCEL_DISABLE, 0 ); for( int cnt = 1 ; true ; cnt++ ) { printf( "Thread: my count is %d\n", cnt ); if( cnt == 4 ) { printf( "Thread: making myself cancelable.\n" ); pthread_setcancelstate( PTHREAD_CANCEL_ENABLE, 0 ); } sleep(1); } return 0; // Will never get here. Either canceled or runs forever. } void demoCancelThread( void ) { pthread_t thrd; int stat; int cnt; void* thrdRet; int idx; conWidPrint( (char*)"Demonstrate thread cancel\n"); // Launch thread, telling it to remain cancelable (arg = 0) then do // it again but telling it to make itself cancelable (arg = 1). for( idx = 0 ; idx < 2 ; idx++ ) { conWidPrint( idx == 0 ? (char*)"First I will launch a thread (default cancelable) and \ cancel it before it can self-terminate.\n" : (char*)"Now I will launch the thread, telling it to make itself \ uncancelable and then try to cancel it.\n" ); stat = pthread_create( &thrd, 0, canThread1, (void*)idx ); if( stat != 0 ) { errno = stat; perror(0); return; } printf( "Main: thread creation ok. In 4 seconds I will cancel it.\n" ); for( cnt = 1 ; cnt < 5 ; cnt++ ) { sleep(1); printf( "Main: %d\n", cnt ); } printf( "Main: I'm cancelling the thread now\n" ); stat = pthread_cancel( thrd ); if( stat != 0 ) { errno = stat; perror(0); } else // pthread_cancel returned 0 i.e. no error. Tests // show that this is the condition even when the thread has made // itself non-cancelable, so this is not an indication that the // thread actually was canceled. printf( "Main: cancel returned no error\n" ); printf( "Main: I'm now going to join the thread\n" ); stat = pthread_join( thrd, &thrdRet ); printf( "Main: join returned %d with thread return %p\n", stat, thrdRet ); } conWidPrint( (char*)"Main: now I'm launch a thread that makes itself \ non-cancelable for awhile and then makes itself cancelable again. \ Meanwhile, we repeatedly try to cancel it.\n" ); pthread_create( &thrd, 0, canThread2, 0 ); sleep(1); // This is needed to make sure that the thread // has a chance to make itself uncancelable. Sometimes this // works without the sleep but not always. for( idx = 1 ; idx < 10 ; idx++ ) { printf( "Main: (%d) try to cancel.\n", idx ); pthread_cancel( thrd ); stat = pthread_tryjoin_np( thrd, 0 ); if( stat == 0 ) { printf( "Main: joined thread.\n" ); break; } if( stat != EBUSY ) { errno = stat; perror( "Main" ); break; } sleep(1); } } void* canThreadSem( void *arg ) { printf( "Thread: Begin. 
I'm going to wait on semaphore now.\n" ); sem_wait( &unameSem ); printf( "Thread: return from sem wait.\n" ); // This is // wrong unless master has signaled. sleep(2); // For the cancel while thread has semaphore case. printf( "Thread: End\n" ); return 0; } void* canThreadMut( void *arg ) { printf( "Thread: Begin. I'm going to wait on mutex now.\n" ); pthread_mutex_lock( &mutex ); printf( "Thread: return from procure mutex. This means I wasn't canceled.\n" ); pthread_mutex_unlock( &mutex ); return 0; } void demoCancelWait( void ) { pthread_t thrd; int val; conWidPrint( (char*)"Cancel a thread that has taken the semaphore.\n" ); sem_init( &unameSem, 0, 1 ); sem_getvalue( &unameSem, &val ); printf( "Main: sem value is %d. I'm now launching the thread.\n", val ); pthread_create( &thrd, 0, canThreadSem, 0 ); sleep(1); sem_getvalue( &unameSem, &val ); printf( "Main: sem value is %d. I'm now cancelling the thread\n", val ); pthread_cancel( thrd ); sleep(1); sem_getvalue( &unameSem, &val ); printf( "Main: sem value is now %d\n", val ); conWidPrint( (char*)"Cancel a thread that is waiting on semaphore.\n" ); printf( "Main: initialize semaphore and then create the thread.\n" ); sem_init( &unameSem, 0, 0 ); pthread_create( &thrd, 0, canThreadSem, 0 ); sleep(1); if( tryJoin( thrd, 3 ) != EBUSY ) return; // joined or some kind of error // (not EBUSY). Either is unexpected. printf( "Main: now I'm going to cancel the thread and try again to join it.\n" ); pthread_cancel( thrd ); if( tryJoin( thrd, 3 ) != 0 ) return; // Not joined is unexpected error. sem_destroy( &unameSem ); conWidPrint( (char*)"Main: Next, we try to cancel a thread waiting on a mutex.\n" ); pthread_mutex_lock( &mutex ); pthread_create( &thrd, 0, canThreadMut, 0 ); sleep(1); if( tryJoin( thrd, 3 ) != EBUSY ) return; // joined or some kind // of error (not EBUSY). Either is unexpected. printf( "Main: I will try to cancel the thread and join it.\n" ); if( pthread_cancel( thrd ) != 0 ) printf( "Main: thread cancel failed.\n" ); else if( tryJoin( thrd, 3 ) != 0 ) printf( "Main: I still can't join the thread.\n" ); printf( "Main: I'm going to release the mutex and try again to join the thread.\n" ); pthread_mutex_unlock( &mutex ); tryJoin( thrd, 3 ); } int main( int argc, char **argv ) { int ch; conOutPrep(); conSetRawInput( true ); while(1) { conWidPrint( (char*)"Press q=quit, t=basic thread, j=join, c=cancel \ thread, w=cancel waiting thread\n"); ch = conPrompt( STR0 ); switch( toupper( ch )) { case 'Q': case EOF: conSetRawInput( false ); return 0; case 'T': demoBasicThread(); break; case 'J': demoJoin(); break; case 'C': demoCancelThread(); break; case 'W': demoCancelWait(); break; default: break; } } return 0; }
Demonstrates various POSIX threads.
BUILD
This must be compiled with -D_REENTRANT because it has a thread. It
must be linked with -lpthread or -lrt for thread and for semaphore
functions.
thread: conUtil.o thread.o
$(CCR) -o $@ $^ -lrt
NON-PORTABLE
pthread_tryjoin_np and pthread_timedjoin_np are non-portable GNU
extensions. pthread_mutex_timedlock may not be implemented in all OS
versions.
SEMAPHORE AND MUTEX
The basic thread, join, and cancel demos do not use semaphore or
mutex. The cancel waiting thread demo uses both semaphore (unnamed
POSIX) and mutex to demonstrate that a thread waiting on a semaphore
is cancellable but one waiting on a mutex is not.
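A minimal sketch of that difference, independent of thread.cpp (names are illustrative): sem_wait is a cancellation point, pthread_mutex_lock is not.
// cancelwait_sketch.cpp (illustrative) -- build: g++ cancelwait_sketch.cpp -lpthread
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>
sem_t gSem;                                 // created unsignaled
pthread_mutex_t gMut = PTHREAD_MUTEX_INITIALIZER;
void* waitOnSem( void* )                    // sem_wait is a cancellation point
{
    sem_wait( &gSem );
    return 0;
}
void* waitOnMutex( void* )                  // pthread_mutex_lock is not
{
    pthread_mutex_lock( &gMut );
    pthread_mutex_unlock( &gMut );
    return 0;
}
int main()
{
    pthread_t tid;
    sem_init( &gSem, 0, 0 );
    pthread_create( &tid, 0, waitOnSem, 0 );
    sleep( 1 );
    pthread_cancel( tid );
    pthread_join( tid, 0 );                 // joins promptly; the wait was canceled
    printf( "Semaphore waiter was canceled\n" );
    pthread_mutex_lock( &gMut );            // hold the mutex so the thread blocks
    pthread_create( &tid, 0, waitOnMutex, 0 );
    sleep( 1 );
    pthread_cancel( tid );                  // has no effect while it blocks in lock
    pthread_mutex_unlock( &gMut );          // only now can the thread proceed and exit
    pthread_join( tid, 0 );
    printf( "Mutex waiter finished normally\n" );
    sem_destroy( &gSem );
    return 0;
}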
BASIC THREAD DEMO
Selected by T
Basic thread demo. Main creates a thread, passing the address of a
buffer to the thread function, which writes a message into the buffer
via the argument. Meanwhile, Main tells that the thread has been
created and then joins the thread. On keypress, Thread terminates,
returning a message, which Main prints when join returns.
Output Trace:
Main: Thread creation ok
Thread: Begin
Main: Waiting for thread to finish
Thread: press any key to terminate me.
Thread: bye
Main: return from join. Thread returned "Thread exit message", wrote
into shared buffer "Thread message through arg"
JOIN THREAD DEMO
Selected by J
This comprises five tests. The first three refer to a thread that
can, by argument, make itself unjoinable. First the thread is
launched as normal (joinable) and Main successfully joins it. Then
Main launches the thread and calls pthread_detach to make it
unjoinable. pthread_join immediately returns "Invalid argument" when
Main tries to join. For the third test, Main launches the thread with
the argument telling it to make itself unjoinable. The thread calls
the same pthread_detach function, passing its own thread id (from
pthread_self) and again the join function returns "Invalid argument".
The thread terminates itself in both of these cases. The last two
tests involve a thread that just counts and terminates itself. Main
launches it twice and calls pthread_timedjoin_np, with a long timeout
the first time and a short one the second. With the long timeout,
join succeeds. It fails on the short timeout.
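A minimal sketch of the timed join by itself (illustrative; pthread_timedjoin_np is a GNU extension and needs _GNU_SOURCE):
// timedjoin_sketch.cpp (illustrative) -- build: g++ timedjoin_sketch.cpp -lpthread
#ifndef _GNU_SOURCE
#define _GNU_SOURCE                         // pthread_timedjoin_np is a GNU extension
#endif
#include <pthread.h>
#include <time.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
void* worker( void* )
{
    sleep( 3 );                             // runs for about 3 seconds
    return 0;
}
int main()
{
    pthread_t tid;
    struct timespec ts;
    pthread_create( &tid, 0, worker, 0 );
    clock_gettime( CLOCK_REALTIME, &ts );   // the timeout is an absolute time
    ts.tv_sec += 1;                         // give up after 1 second
    int stat = pthread_timedjoin_np( tid, 0, &ts );
    if( stat == ETIMEDOUT )
        printf( "Timed out; the thread is still running\n" );
    pthread_join( tid, 0 );                 // now wait for it properly
    return 0;
}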
Output Trace:
Join thread demo
Main: Create normal thread and then join.
Thread: begin normal attached thread.
Main: now I will try to join the thread.
Main: join attempt 1 failed.
Thread: I'm done.
Main: join succeeded.
Main: Create normal thread. Make unjoinable. Try to join.
Thread: begin normal attached thread.
Main: now I will detach the thread.
Main: now I will try to join the thread.
Main: Invalid argument
Thread: I'm done.
Main: Create thread that will make itself unjoinable.
Thread: begin self-detached
Main: now I will try to join the thread.
Main: Invalid argument
Thread: I'm done.
Main: create thread for successful timed join.
Thread: begin
Thread: iteration 0
Thread: iteration 1
Thread: iteration 2
Thread: iteration 3
Thread: iteration 4
Thread: I'm done.
Main: timedjoin returned 0. join succeeded.
Main: create the same thread for timed join with short timeout.
Thread: begin
Thread: iteration 0
Thread: iteration 1
Thread: iteration 2
Main: timedjoin returned 110. Connection timed out (or my "timed out").
Thread: iteration 3
Thread: iteration 4
Thread: I'm done.
CANCEL THREAD DEMO
Selected by C
This comprises three tests with two different threads. The first thread's arg
allows main to tell it to leave its default cancelability or make itself
uncancelable. Main first launches it as cancellable and successfully cancels
it, as demonstrated by immediately successful join. Then main creates the
thread again but with the arg telling it to make itself uncancelable. Main is
not able to cancel it this time and the join fails until the thread reaches its
own end count and terminates itself. In the third test, the thread (with a
different function) unconditionally makes itself uncancelable and begins
counting. At a certain count it makes itself cancelable again. Meanwhile, Main
continually tries to cancel and join it but the join succeeds only after Thread
announces that it is making itself cancelable.
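A compact sketch of the cancelability toggle by itself (illustrative, not the demo code):
// cancelstate_sketch.cpp (illustrative) -- build: g++ cancelstate_sketch.cpp -lpthread
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
void* worker( void* )
{
    pthread_setcancelstate( PTHREAD_CANCEL_DISABLE, 0 );   // cancels are now only queued
    for( int i = 1 ; i <= 3 ; i++ )
    {
        printf( "Worker: protected step %d\n", i );
        sleep( 1 );                         // a pending cancel cannot act here
    }
    pthread_setcancelstate( PTHREAD_CANCEL_ENABLE, 0 );    // a pending cancel may act at
    sleep( 1 );                             // the next cancellation point, e.g. this sleep
    printf( "Worker: not reached if a cancel was pending\n" );
    return 0;
}
int main()
{
    pthread_t tid;
    pthread_create( &tid, 0, worker, 0 );
    sleep( 1 );
    pthread_cancel( tid );                  // queued until the worker re-enables cancel
    pthread_join( tid, 0 );
    return 0;
}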
Output Trace:
Demonstrate thread cancel. First I will launch a thread (default
cancelable) and cancel it before it can self-terminate.
Thread: Begin
Thread: my count is 1
Main: thread creation ok. In 4 seconds I will cancel it.
Thread: my count is 2
Main: 1
Thread: my count is 3
Main: 2
Thread: my count is 4
Main: 3
Thread: my count is 5
Main: 4
Main: I'm cancelling the thread now
Main: cancel returned no error
Main: I'm now going to join the thread
Main: join returned 0 with thread return 0xffffffff. Now I will
launch the thread, telling it to make itself uncancelable and then
try to cancel it.
Thread: Begin
Thread: my count is 1
Main: thread creation ok. In 4 seconds I will cancel it.
Thread: my count is 2
Main: 1
Thread: my count is 3
Main: 2
Thread: my count is 4
Main: 3
Thread: my count is 5
Main: 4
Main: I'm cancelling the thread now
Main: cancel returned no error
Main: I'm now going to join the thread
Thread: my count is 6
Thread: I reached the end of my count so I'm self-terminating
Main: join returned 0 with thread return 0x804a388
Main: now I'm launch a thread that makes itself non-cancelable for
awhile and then makes itself cancelable again. Meanwhile, we
repeatedly try to cancel.
Thread: Begin
Thread: my count is 1
Thread: my count is 2
Main: (1) try to cancel.
Thread: my count is 3
Main: (2) try to cancel.
Thread: my count is 4
Thread: making myself cancelable.
Main: (3) try to cancel.
Main: joined thread.
CANCEL WAITING THREAD DEMO
Selected by W
First show that canceling a thread that has taken a semaphore does not affect
the semaphore value. Next, test canceling a thread waiting for semaphore and
one waiting to procure mutex. In both tests, we first try to join to show that
the thread is not done. Then we try to cancel the thread and then again to
join. The thread at sem_wait cancels and the join succeeds. The thread at
pthread_mutex_lock does not cancel and the join fails again. Only after Main
releases the mutex does the thread unblock and tell us that the intended
result has not occurred. It then terminates itself and Main is finally able to
join it.
Output Trace:
Cancel a thread that has taken the semaphore.
Main: sem value is 1. I'm now launching the thread.
Thread: Begin. I'm going to wait on semaphore now.
Thread: return from sem wait.
Main: sem value is 0. I'm now cancelling the thread
Main: sem value is now 0
Cancel a thread that is waiting on semaphore.
Main: initialize semaphore and then create the thread.
Thread: Begin. I'm going to wait on semaphore now.
Main: join attempt 1 failed.
Main: join attempt 2 failed.
Main: join attempt 3 failed.
Main: now I'm going to cancel the thread and try again to join it.
Main: join attempt 1 failed.
Main: join succeeded.
Main: Next, we try to cancel a thread waiting on a mutex.
Thread: Begin. I'm going to wait on mutex now.
Main: join attempt 1 failed.
Main: join attempt 2 failed.
Main: join attempt 3 failed.
Main: I will try to cancel the thread and join it.
Main: join attempt 1 failed.
Main: join attempt 2 failed.
Main: join attempt 3 failed.
Main: I still can't join the thread.
Main: I'm going to release the mutex and try again to join the thread.
Thread: return from procure mutex. This means I wasn't canceled.
Main: join succeeded.
demoBasicThread and basicThread together demonstrate simple threads with pthread_create, pthread_join, and pthread_exit (in the thread function). basicThread is a simple thread that just announces itself and writes a message to the buffer it assumes the arg points to. Then it waits for the user to press a key and returns (pthread_exit and return are the same thing at this level) with a pointer to its static exit message.
demoJoin illustrates several aspects of thread join. First, it invokes joinThread three times: with a normal join, with Main making it unjoinable, and with it making itself unjoinable. Then it invokes joinTimeThread twice, first followed by a successful timed join (timeout longer than the thread duration) and then by an unsuccessful timed join due to a shorter timeout.
tryJoin is used by both the join demo and the cancel waiting thread demo. It calls pthread_tryjoin_np one or more times, returning immediately on success or error. On EBUSY it continues up to the given limit, sleeping for one second after each attempt. It prints a message at each attempt and at the final join or error.
joinThread supports demoJoin. It prints begin and done with an intervening sleep, but optionally detaches itself to illustrate joinability. The single argument void *arg selects detach: if its value is 2, the thread calls pthread_detach to make itself unjoinable. This peculiar value is used because it simplifies the demoJoin main thread, which launches this thread several times, and only once, in the middle of the series, do we want this thread to make itself unjoinable.
joinTimeThread supports demoJoin. It iterates a print-and-sleep loop five times. Meanwhile, the main thread calls pthread_timedjoin_np with a timeout within or beyond the duration of this thread.
demoCancelThread demonstrates cancelability. It launches canThread1 twice, first with arg = 0 so that the default cancelable state is not changed, then with arg != 0 so that the thread changes itself to non-cancelable. In that case the thread never restores cancelability, so it cannot be canceled, but after reaching its end count it self-terminates. canThread2 participates in the third test: it makes itself uncancelable, counts for a while, and then makes itself cancelable again. Meanwhile, Main tries repeatedly to cancel it.
demoCancelWait demonstrates that a thread waiting for a semaphore (sem_wait) can be canceled but one waiting to procure a mutex (pthread_mutex_lock) cannot be. canThreadSem is the thread used to show cancellation while waiting on the semaphore. canThreadMut is the thread used to show the failed attempt to cancel a thread waiting on the mutex.