Increasing the load capacity on Gateway
I am trying to send 10,000 XML requests continuously in a while() loop from the Client to the Gateway (which acts as a server to this client) over UDP. The Gateway uses the select() call to monitor the read_fds. In the Gateway, the struct timeval values I pass to select() are:
tv.tv_sec = 5; tv.tv_usec = 0;
Each XML request is 1500 bytes. Both the client and the Gateway are written in C++, and the binaries run on Linux (RHEL 5).
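For context, a minimal sketch of what a select()-based receive step like the one described might look like (the function names `recv_one_request` and `make_bound_udp_socket` are hypothetical, not from the original code; the 5-second timeout matches the timeval values above):

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstring>
#include <cstdint>

// Wait up to 5 s for the UDP socket to become readable, then read one
// datagram (max 1500 bytes, as in the question). Returns the number of
// bytes read, 0 on timeout, -1 on error.
ssize_t recv_one_request(int sockfd, char* buf, size_t buflen) {
    fd_set read_fds;
    FD_ZERO(&read_fds);
    FD_SET(sockfd, &read_fds);

    struct timeval tv;
    tv.tv_sec = 5;           // same timeout the question describes
    tv.tv_usec = 0;

    int ready = select(sockfd + 1, &read_fds, nullptr, nullptr, &tv);
    if (ready < 0)  return -1;   // select() failed
    if (ready == 0) return 0;    // timed out, no data arrived

    return recvfrom(sockfd, buf, buflen, 0, nullptr, nullptr);
}

// Demonstration helper only: a UDP socket bound to 127.0.0.1 on an
// ephemeral port, so the function above can be exercised locally.
int make_bound_udp_socket(uint16_t* port_out) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                        // let the kernel pick a port
    bind(fd, (sockaddr*)&addr, sizeof(addr));
    socklen_t len = sizeof(addr);
    getsockname(fd, (sockaddr*)&addr, &len);
    *port_out = ntohs(addr.sin_port);
    return fd;
}
```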
There are two cases:
Case 1: If the Client sends the 10,000 XML requests continuously in the while() loop with a delay of 500 microseconds between requests (using usleep()), the Gateway accepts all 10,000 requests, parses them, and logs them to a .log file.
Case 2: If the Client sends the 10,000 XML requests continuously in the while() loop without any delay, the Gateway accepts only about 2,600 requests, parses them, and logs them to the .log file.
Question: How can I increase the number of requests accepted by the Gateway without adding a delay on the Client side? Also, what happens to the remaining 7,400 requests from the Client in case 2: are they lost?
If the receive buffer on the server side isn't drained fast enough, the remaining messages are indeed lost: that is how UDP works.
If you just need to absorb a burst of 10,000 messages (and not cope with sustained traffic), you could simply increase the receive buffer size: sysctl -w net.core.rmem_max=nnnnnnn.
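As a sizing aid: 10,000 requests of 1500 bytes each is about 15 MB, so rmem_max would need to be on the order of 16 MB to absorb the whole burst. Note that raising the sysctl only raises the ceiling; the application still has to request a bigger buffer per socket with SO_RCVBUF. A hedged sketch (the function name `set_recv_buffer` is an assumption for illustration):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Request a larger UDP receive buffer for this socket. The kernel caps the
// grant at net.core.rmem_max, so raise that sysctl first if you need more.
// Returns the size the kernel actually granted; on Linux getsockopt()
// reports double the requested value to account for bookkeeping overhead.
int set_recv_buffer(int sockfd, int bytes) {
    setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes));
    int actual = 0;
    socklen_t len = sizeof(actual);
    getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &actual, &len);
    return actual;
}
```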
Alternatively, start profiling where the time is spent in your server's read loop. As a test, you could remove all the parsing and logging and just count the number of messages you receive: if that gets you closer to 10,000, it implies the parsing and logging are too slow for that loop.
Another thing to check is which messages you are losing: if you see constant loss even among the early messages (e.g. within the first hundred), that would imply something else along the way cannot handle messages that fast; in that case the server's receive buffer is not to blame.
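One way to see exactly which messages go missing is to have the client stamp each request with a sequence number (e.g. an attribute in the XML) and have the Gateway record the numbers it receives. A hedged sketch of the bookkeeping side, kept free of socket code so it can be dropped into either program (`SequenceTracker` is a hypothetical name, not from the original code):

```cpp
#include <set>
#include <vector>
#include <cstdint>

// Gap detector: feed it the sequence number carried in each received
// request; afterwards, missing() reports which numbers never arrived,
// showing whether losses cluster early, late, or spread evenly.
class SequenceTracker {
public:
    void record(uint32_t seq) { seen_.insert(seq); }

    // All sequence numbers in [first, last] that were never recorded.
    std::vector<uint32_t> missing(uint32_t first, uint32_t last) const {
        std::vector<uint32_t> gaps;
        for (uint32_t s = first; s <= last; ++s)
            if (seen_.find(s) == seen_.end()) gaps.push_back(s);
        return gaps;
    }

private:
    std::set<uint32_t> seen_;
};
```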