Re: Timeouts when all backends are down

From: Willy Tarreau <w#1wt.eu>
Date: Tue, 23 Sep 2008 15:21:13 +0200


On Tue, Sep 23, 2008 at 11:47:03AM +0200, Alexander Staubo wrote:
> On Tue, Sep 23, 2008 at 6:55 AM, Willy Tarreau <w#1wt.eu> wrote:
> > You're not doing anything wrong. It is simply that haproxy knows that
> > your servers are down because they have been checked and detected as
> > such. So once it receives a request and has no way to serve it, it
> > immediately returns "503 service unavailable".
>
> Ouch. I seem to have misunderstood how HAProxy handles this particular
> situation.
>
> Is there a way to have HAProxy keep attempting to reconnect instead of
> giving up? The idea is that our users should not experience any 503s
> while we temporarily take down Mongrels.

You could increase the check "fall" count so that more consecutive failed checks are needed before a server is detected as down, and increase the "retries" parameter in order to allow more connection attempts. But quite honestly, I don't find this a clean HA mechanism. If you have multiple instances, you should at least cut them in half while upgrading; that's what I've seen everywhere and it's not that hard.
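As a sketch only (backend and server names here are made up, and the numbers are arbitrary; "fall", "inter" and "retries" are the parameters from the haproxy configuration manual), that could look like:

```
backend mongrels
    balance roundrobin
    # allow more reconnection attempts per request before giving up
    retries 10
    # hypothetical servers; "fall 10" requires 10 consecutive failed
    # health checks (every 2s) before a server is marked down
    server m1 127.0.0.1:8000 check inter 2000 fall 10
    server m2 127.0.0.1:8001 check inter 2000 fall 10
```

This only delays the "down" decision and stretches out retries; it does not make haproxy queue requests indefinitely while every backend is down.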

> I suppose I could write a magic backup server that simply slept a
> little bit and then redirected back to its own URL, but that would be
> pretty icky.

You could even do something dirtier: declare a non-existent backup server (say IP 1.1.1.1) with no check and no cookie, so that when haproxy fails to connect to it, it finally retries on another server.
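A sketch of that trick (server names are hypothetical; 1.1.1.1 is the unreachable address from above), assuming "option redispatch" is set so a failed connection attempt may be retried on another server:

```
backend mongrels
    balance roundrobin
    option redispatch
    retries 3
    server m1 127.0.0.1:8000 check
    server m2 127.0.0.1:8001 check
    # deliberately unreachable backup server, with no "check" and no
    # "cookie": when all real servers are down, haproxy tries it, the
    # connection fails, and the request is redispatched elsewhere
    server blackhole 1.1.1.1:80 backup
```

The effect is to keep the request in flight (failing and retrying) instead of returning an immediate 503 while the real servers come back, which is exactly why it is described as dirty.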

But as I said, I find this very dirty.

Willy

Received on 2008/09/23 15:21

This archive was generated by hypermail 2.2.0 : 2008/09/23 15:30 CEST