Socket data read wait time
I have an application that listens on multiple sockets using select. If I start processing a request that came in on socket A, and in the meantime another request arrives on socket B, I want to know how long the request on socket B had to wait before I could get to it. Since this is a single-threaded application, I cannot spawn a new thread, go back to select to keep monitoring, and instantly start processing the request from socket B.
Is there a C API that gives me this metric, or is this just not possible to get?
There is no straightforward way to measure the interval between the 'data ready' time and the 'data read' time, because no timestamp is stored together with the data. The situation is even more complex for a stream-oriented socket, which may receive several data segments before the application reads them, so it is not even clear which interval should be measured.
If the application's data processing takes much longer than packet processing in the kernel, you can make a reasonable measurement in the following way:
- Print the current time and some unique request id (derived from the application protocol) when select wakes up due to data availability on socket B.
- Log every packet received for socket B. You can use a network traffic capture tool like wireshark or tcpdump, or (on Linux) configure an iptables firewall rule with the target -j LOG.
- Write a simple script/program that correlates the captured packets with the application log and subtracts the receive time from the start-of-processing time.
Of course, the idea above ignores the kernel processing time. If you really need exact times, you have to introduce a new thread into your application.