On Wed, Dec 12, 2007 at 03:36:56PM -0500, Martin Goldman wrote:
> I did a few throughput tests using iperf between servers and consistently
> got a result of about 725Mbps -- it's not 1000, but it's a lot more than 320
> at least. Is that a reasonable test?
The test is reasonable, but the result very much suggests a 32-bit, 33 MHz PCI NIC! The 320 Mbps average limit would apply to the proxy, because the same data passes over the bus twice, and if both NICs share the same PCI bus, you divide the maximum performance by two because the bus is already saturated.
PCI-X NICs normally give very good results. Most often, on-board NICs are connected to 32-bit, 100 MHz PCI buses, which sets the limit to a theoretical 3 Gbps in+out, in practice just enough to saturate a gigabit link at 1 Gbps in each direction.
Recent machines using PCI-Express achieve much higher performance because the PCIe bus is point-to-point, and even the lowest speed (x1) moves 2 Gbps in each direction, which is plenty for a 1 Gbps NIC. But sometimes you can encounter buggy chips, such as some early Marvell 88e8053 with which I had a *lot* of problems. The best GigE NIC I have found to date is the Intel PRO/1000, in both PCI-X and PCIe flavours.
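To make the arithmetic explicit, here is a quick back-of-the-envelope calculation (a sketch in Python; the widths and clocks are the ones mentioned above):

    # Theoretical PCI bus throughput in Mbps: width (bits) * clock (MHz).
    def bus_mbps(width_bits, clock_mhz):
        return width_bits * clock_mhz

    print(bus_mbps(32, 33))    # plain PCI: ~1056 Mbps, shared by all devices
    print(bus_mbps(32, 100))   # 32-bit/100 MHz: 3200 Mbps, the "3 Gbps" above
    print(2500 * 8 // 10)      # PCIe x1: 2.5 Gbps raw, 8b/10b coded -> 2000 Mbps
                               # per direction, and point-to-point, not shared

    # A proxy moves every byte across the bus twice (in through one NIC,
    # out through the other), so on a shared bus the usable forwarding rate
    # is at most half the bus rate; protocol overhead lowers it further,
    # consistent with the ~320 Mbps limit mentioned above.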
> hdparm is reporting a "Timing buffered disk reads" value of about 50MB/sec
> for my disks (which are SATA). So it seems like it might be reasonable for
> the individual web servers to max out at 40-something MB/sec.
Yes, possibly.
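For reference, that "Timing buffered disk reads" figure is the one reported by hdparm -t (e.g. hdparm -t /dev/sda, with the device name adjusted to yours), and ~50 MB/s is a plausible sequential-read rate for a single SATA disk of that generation.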
> What I don't quite understand is, is haproxy actually hitting the disk?
Not at all. It does not even know how to access any file on the disk once it has read its configuration and written its pid file.
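For illustration, a minimal configuration sketch (server names and addresses are hypothetical): the configuration file is read once at startup, the pidfile is written once, and that is all the disk access haproxy ever does.

    global
        daemon
        pidfile /var/run/haproxy.pid    # written once at startup

    listen webfarm 0.0.0.0:80           # everything after startup is in memory
        mode http
        balance roundrobin
        server web1 192.168.0.10:80 check
        server web2 192.168.0.11:80 check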
> If not, it would
> seem the cluster should be able to handle more requests than the web servers
> individually. Does that make sense?
Yes, that makes a lot of sense. So this would mean that *at least* the machine you used for the load balancer is bus-limited, because when the traffic passes through it, the performance is halved.
You may want to check your BIOS. While I don't see any valid reason for it, it's possible that you can set the bus frequencies there, or that there is some sort of "legacy mode" for the PCI buses which would explain the lower performance.
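You can also cross-check from Linux: lspci -vv reports each PCI device's capabilities and status (for a conventional PCI NIC, whether it advertises 66 MHz operation), which can help confirm whether the NIC really sits on a 33 MHz segment.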
Regards,
Willy