How do I determine if I'm done read()ing from a UNIX socket?
I've been reading the man page for read(2) and according to the man page, the return value of read(2) is -1 on error, 0 on EOF, and > 0 for the number of bytes read.
How do I tell when the write(2) on the client is finished?
I ask because I'm writing a server and a client to test it, but when I read(2) the first time and loop around to check for more, read(2) blocks and waits for another write(2) from the client (which isn't coming because my client only has the one write(2)).
If I'm just missing something simple, could somebody kindly point it out or point me to a good reference?
The client needs to close the socket when it's done. Once the client has closed the socket, the server will receive the EOF. If the client leaves the socket open but never writes to it, the server will wait forever for another message to come down.
You know you are done when read returns:
- -1 - An error occurred
- 0 - EOF
- Another, non-zero value - This is the most common case. Your protocol needs to specify the size of its messages, either with a header stating the message size or with fixed-length messages. Then you can keep track of the number of bytes read, and you know you are done once you have read that many bytes. Otherwise (for example, with delimiter-terminated messages) you need to keep reading until you have received a full message.
There is no correspondence between reads and writes. A single client write might take you ten reads to consume fully. Unless you put a delimiter in the content, you have no way of knowing where the end is until the socket is closed.
If you still need to send a response to the client, you can half-close the socket on the client with shutdown(2):
This will cause the read on the server to return 0.
Something to look at would be using select() first to check for data, with a suitable timeout, rather than blindly calling read(). This allows your server to close the socket after a set period of inactivity and free any resources it might be using.