Very cool!

So I just tried your tool and it hangs. I see you're sending close requests; is this configurable to keep-alive, or better yet, to nothing at all? In HTTP/1.1 the keep-alive/close header is best left out entirely; never try to enforce it, as it is not mandatory.
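To illustrate (a sketch, with `build_request` as a made-up helper, not anything from the tool): HTTP/1.1 connections are persistent by default, so a request can simply omit the Connection header and leave connection handling to the peer.

```python
def build_request(host: str, path: str = "/") -> bytes:
    # Hypothetical helper: build a bare HTTP/1.1 GET request.
    # No "Connection:" header at all -- HTTP/1.1 connections are
    # persistent by default, so keep-alive needs no announcement.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    ).encode("ascii")
```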

A lot of servers simply ignore the close header and keep the connection open (like the one I am using), so that may be the issue I am having.

Cool, thanks for trying it.

Try the -shutwr option if the server doesn't close the connection itself. I used it to test lots of exotic implementations, and there are weird things going on in overload situations and around connection management. NodeJS, for example, started dropping connections on localhost(!!) under high load.

The tool was built for high values of keepalive requests; if the server is too fast, just use more requests, e.g. -n 1000000 or something similar. Unfortunately, some servers close keepalive connections after relatively few requests; nginx has a default of 1000, for example.
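For reference, that nginx limit is the keepalive_requests directive; a sketch of raising it for a benchmark run (the value is illustrative):

```nginx
http {
    # nginx closes a keepalive connection after this many requests
    # (default 1000); raise it so benchmark connections survive long runs.
    keepalive_requests 1000000;
}
```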

This is just a simple tool I hacked together as a student to collect some data; I didn't spend any time making it more accessible or user-friendly, sorry.

I ran into some Lua errors and fixed them; eventually I got it running with -shutwr, but the results are basically impossible:

----------- Summary ----------
Successful connections: 8 out of 8 (0 failed).
Total bytes sent . . . . . 2599999960.00 B
Total bytes received . . . 82520.00 B
Benchmark duration . . . . 85.94 ms
Send throughput  . . . . . 30252779546.89 B/sec
Receive throughput . . . . 960176.69 B/sec
Aggregate req/second . . . 93085476.96

The received data is far too low. Also, 93 million requests per second: the only way that's possible is if the load generator isn't waiting for the server's response and processing it. But I guess that's to be expected, since there may be some issues from me running a much more recent kernel than the one you built this against.
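A quick back-of-the-envelope check using the numbers from the summary above makes the inconsistency concrete:

```python
# Numbers copied from the benchmark summary above.
reqs_per_sec   = 93_085_476.96   # aggregate req/second
duration_s     = 85.94e-3        # benchmark duration in seconds
bytes_received = 82_520.0        # total bytes received

total_requests = reqs_per_sec * duration_s        # roughly 8.0 million
bytes_per_resp = bytes_received / total_requests  # roughly 0.01 B/response

# Even a bare "HTTP/1.1 200 OK\r\n" status line is 17 bytes, so at
# ~0.01 B per response the generator cannot be reading full responses.
```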

I used -n 10000000 (10M)