On Wed, Dec 12, 2007 at 08:20:07AM -0500, Martin Goldman wrote:
> Thanks again for your help, Willy.
>
> It looks like you were right on the keepalive issue. When I tried this,
> requests per second on my tiny file doubled to about 35,000 on the cluster.
cool! And on the individual servers?
> Requests per second on the 100K file were basically unchanged, however.
>
> I tried copying a 512MB file between two of the servers involved and the
> throughput I received was about 45MB/sec. I understand that theoretically
> one should be able to achieve 125MB/sec over GigE, but I'm not sure what one
> could expect to get in a real-world scenario. I suppose I should investigate
> that more.
On a few concurrent sessions (around 10, just to compensate for the small amount of possible dead time), you should get 118,600,000 bytes/s of payload.

Ensure that you're not saturating the disks on the server side. For such a test, you should put the files in RAM (either cached, or on a tmpfs).
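The ~118.6 MB/s figure falls out of the per-frame overhead on a 1 Gbit/s link; a quick sketch of the arithmetic, assuming a standard 1500-byte MTU and no TCP options:

```python
# Theoretical TCP payload throughput on gigabit Ethernet.
LINK_BPS = 1_000_000_000          # GigE line rate, bits/s
MTU = 1500                        # IP packet size
TCP_IP_HEADERS = 20 + 20          # IPv4 header + TCP header (no options)

payload_per_frame = MTU - TCP_IP_HEADERS   # 1460 bytes of payload
# On the wire each frame also carries the Ethernet header (14),
# FCS (4), preamble (8) and inter-frame gap (12): 1538 bytes total.
wire_per_frame = MTU + 14 + 4 + 8 + 12

payload_rate = LINK_BPS / 8 * payload_per_frame / wire_per_frame
print(round(payload_rate))  # ~118,660,598 bytes/s, i.e. ~118.6 MB/s
```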
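One way to take the server's disks out of the picture, as a rough sketch (the mount point and file names here are hypothetical, adapt them to your setup):

```shell
# Create a RAM-backed filesystem and serve the test files from it
# so disk I/O cannot become the bottleneck during the benchmark.
mkdir -p /mnt/bench
mount -t tmpfs -o size=1g tmpfs /mnt/bench
cp /var/www/test-100k.bin /mnt/bench/   # hypothetical test file
```

Alternatively, reading each file once before the run (e.g. with `cat file > /dev/null`) leaves it in the page cache, which achieves much the same effect without remounting anything.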
Welcome to the world of high performance benchmarks ;-)
Cheers,
Willy
Received on 2007/12/12 20:28
This archive was generated by hypermail 2.2.0 : 2007/12/12 20:30 CET