Hi Willy
On Mon, 2009-08-03 at 09:21 +0200, Willy Tarreau wrote:
> Why are you saying that? Except for rare cases of huge bugs, a server
> is not limited in requests per second. At full speed, it will simply use
> 100% of the CPU, which is why you bought it after all. When a server dies,
> it's almost always because a limited resource has been exhausted, and most
> often this resource is memory. In some cases, it may be other limits such
> as sockets or file descriptors, which can cause unexpected exceptions
> that are not properly caught.
We have a problem where our servers open connections to a third party, and if too many users arrive at the same time, that third party receives too many connections.
> I'm well aware of the problem; many sites have the same one. The queuing
> mechanism in haproxy was developed exactly for that. The first user
> was a gaming site which went from 50 req/s to 10000 req/s on patch days.
> They too thought their servers could not handle that, while once again it
> was just a matter of concurrent connections. By enabling the queuing
> mechanism, they could sustain the 10000 req/s with only a few hundred
> concurrent connections.
If that is the case, I will try the same approach: only limit the maximum number of connections and see what happens. If that actually works, I will have a much simpler situation to handle.
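Something like this minimal sketch is what I have in mind (the backend name, server names, addresses, and limits below are placeholders I made up; the relevant pieces are the maxconn setting on each server line, which caps concurrent connections and makes haproxy queue the excess, and timeout queue, which bounds how long a request may wait):

    backend third_party
        balance roundrobin
        # A request waits at most 30s in the queue before timing out.
        timeout queue 30s
        # maxconn caps concurrent connections per server; requests
        # beyond that are queued in haproxy instead of being forwarded.
        server srv1 192.0.2.10:80 maxconn 50
        server srv2 192.0.2.11:80 maxconn 50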
Thank you for now, you have been very helpful.
Best regards
Bostjan