Hi,
On Thu, Jul 22, 2010 at 08:16:30AM -0700, David Birdsong wrote:
> On Wed, Jul 21, 2010 at 8:00 PM, <billforums-haproxy#yahoo.com> wrote:
> > Hello,
> >
> > I'm curious if anyone can describe high-end deployments of HAProxy that are
> > in use today.
> >
> > We are taking a look at HAProxy as a "build your own load balancer" solution
> > for our application which handles a very high rate of very small requests
> > (average 0.5k to 2k each). We would like to know if it can reliably handle
> > loads of 10,000 to 100,000 QPS for long durations without failing, and
> > without adding significant latency to the application stack.
> >
> > Does anyone use HAProxy under this kind of extreme load? If you have to
> > protect the name of your project or company, that's ok. I'd like to
> > consider this a semi-anonymous survey of the HAProxy mailing list's
> > subscribers to learn about sizes/loads of their deployments.
> >
> > Any information about general reliability, how long you have been using
> > HAProxy and if you are happy that you did would be very helpful.
> >
> haproxy is extremely reliable. you will find that it uses very few
> resources itself.
In fact, since it only allocates what it uses, once it has run for a few minutes at peak load there's no reason for it to stop working or to slow down (unless you hit a bug, of course). It's common to see the stats page report multi-billion hits (the connection counter had to be switched to 64 bits because some users were wrapping it every few days). Uptimes generally range from 10 to 100 days, even though the versions in use are not often up to date. Two, maybe three crashes have been reported in the last two years, all due to bugs that got fixed. So yes, we can say it's reliable for long durations.
> most of the issues one runs into is simply the
> system overhead that follows any 10k qps service. interrupts are
> always our limiting factor which haproxy has very little control of
> affecting.
In fact there are ways to save a few packets in the exchanges, which results in slightly less system overhead, hence slightly more performance.
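For example, the "tcp-smart-accept" and "tcp-smart-connect" options in 1.4 were added exactly for this. A minimal sketch (putting them in a defaults section is just one way to do it):

    defaults
        option tcp-smart-accept    # avoid sending a pure ACK right after accept(),
                                   # it is merged with the response instead
        option tcp-smart-connect   # delay the connect() ACK so that it rides with
                                   # the first data sent to the server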
At least two users on the list are in the 10-20k average cps range, with peaks at 20-30k for one of them.
One user has measured a limit of 38 kcps, which closely matches the 42k I reach in my lab with the 10gig cards. At these rates, the system has to handle approximately 400 kpps in each direction, so any bit of tuning of the TCP stack helps (disabling netfilter, LVS, even IPv6). Enabling the "defer-accept" bind option saves a bit of CPU too, as does letting haproxy itself choose the source port when connecting to the servers.
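As a rough sketch combining those two tunings (the names, addresses and port range below are only examples):

    frontend web
        bind :80 defer-accept        # only wake haproxy up once request data arrive
        default_backend servers

    backend servers
        # bind outgoing connections to an explicit port range so that haproxy
        # picks the source port itself instead of asking the kernel for one
        source 0.0.0.0:1025-65000
        server srv1 10.0.0.11:80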
It's possible to use several NICs and bind together a NIC, a core and a process, and do some stateless load balancing in front of them (using etherchannels or multiple incoming routes). The goal is to avoid any inter-CPU exchange in order to maintain a high packet rate. But the setup becomes really tricky at some point, and most users prefer to install two or three cheap machines rather than engage in such complex setups.
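As an illustration only (the IRQ number and config path are hypothetical), pinning one NIC and one haproxy process to the same core could look like this:

    # pin the NIC's interrupts to CPU1 (bitmask 0x2)
    echo 2 > /proc/irq/40/smp_affinity
    # start one haproxy process bound to the same core
    taskset -c 1 haproxy -f /etc/haproxy/haproxy-cpu1.cfg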
BTW, when we're talking about those rates, these are always complete connections that go all the way to the server. Terminating them on haproxy is faster (less network overhead); for instance, lab tests show that 100k HTTP redirects per second can be reached.
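For example, a frontend which never contacts any server and only returns redirects (the name and URL are placeholders):

    frontend redirector
        bind :80 defer-accept
        redirect prefix http://static.example.com code 302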
Just a quick question: do you really need layer7 LB at that rate? Would layer4 not be enough? Or in other words, what features of the L7 LB are you planning to stress? ;-)
Regards,
Willy