Thank you for the suggestion. Consistent hashing sounds promising. The
number of files I would have to redistribute is limited if some
servers fail.
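
To convince myself, I tried a minimal consistent-hash ring in Python
(the server names, virtual-node count, and URLs below are made up, not
our real setup). When one server is removed from the ring, only the
files that hashed to it are reassigned; everything else stays put:

# Minimal consistent-hash ring sketch (illustration only, not a drop-in library).
import bisect
import hashlib

def _hash(key):
    # Stable 64-bit hash of a string key (md5 used only for spread, not security).
    return int(hashlib.md5(key.encode("utf-8")).hexdigest()[:16], 16)

class HashRing:
    def __init__(self, servers, vnodes=100):
        # Place `vnodes` virtual points per server on the ring to even out the load.
        points = sorted(
            (_hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in points]
        self._servers = [s for _, s in points]

    def server_for(self, url):
        # Walk clockwise from the URL's hash to the first virtual point.
        idx = bisect.bisect(self._keys, _hash(url)) % len(self._keys)
        return self._servers[idx]

servers = ["s1", "s2", "s3", "s4"]                # hypothetical server names
urls = [f"/files/{i}" for i in range(10000)]      # hypothetical file URLs

before = HashRing(servers)
after = HashRing([s for s in servers if s != "s3"])   # s3 fails
moved = sum(
    1 for u in urls
    if before.server_for(u) != "s3" and before.server_for(u) != after.server_for(u)
)
print("files moved that were not on s3:", moved)      # 0: only s3's files relocate

With more virtual nodes per server, the files that do relocate also
spread more evenly over the remaining servers.
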
On Sun, Nov 27, 2011 at 10:42 PM, Allan Wind <allan_wind#lifeintegrity.com> wrote:
> On 2011-11-26 01:30:41, Rerngvit Yanggratoke wrote:
> > We have over three million files. Each static file is rather
> > small (< 5MB) and has a unique identifier that is also used as its
> > URL. As a result, we are in the second case you mentioned. In
> > particular, we are concerned about everybody downloading the same
> > file simultaneously. We replicate each file on at least two servers
> > to provide failover and load balancing, so that if a server
> > temporarily fails, users can retrieve the files kept on it from
> > another server.
>
> In order for haproxy to route the request correctly it needs to
> know, per URL, which two backend servers hold the file. Or it needs
> to detect that a server is temporarily down (make sure you define
> what "down" means, and that haproxy has the same understanding) and
> reroute traffic to the server that is up. Do you care about the
> request that sees the first failure?
>
> I do not know enough about haproxy yet to determine whether
> either option is available.
>
> If you replicate all files on server A to server B, then each server
> needs 200% capacity to handle failover. If you replicate across three
> servers it drops to 150%, and if you place a given resource on
> servers chosen via a consistent hash you get much better behavior.
> Make sure you consider hot spots.
>
>
> /Allan
> --
> Allan Wind
> Life Integrity, LLC
> <http://lifeintegrity.com>
>
>
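
Regarding which two backend servers a given URL should map to, and
where the load goes when one of them is down: the sketch below (again
with invented server names and counts) picks the primary as the first
server clockwise from the URL's hash and the replica as the next
distinct server on the same ring. When a primary fails, its files fall
back to many different servers instead of all landing on a single
mirror, which is the capacity point above:

# Sketch of replica placement on a consistent-hash ring (illustration only).
import bisect
import hashlib
from collections import Counter

def _hash(key):
    return int(hashlib.md5(key.encode("utf-8")).hexdigest()[:16], 16)

def build_ring(servers, vnodes=100):
    # Virtual points on the ring, sorted by hash.
    points = sorted((_hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes))
    return [h for h, _ in points], [s for _, s in points]

def replicas_for(ring, url, n=2):
    # Primary = first server clockwise from the URL's hash,
    # replica = next distinct server on the ring.
    keys, names = ring
    idx = bisect.bisect(keys, _hash(url)) % len(keys)
    chosen = []
    while len(chosen) < n:
        server = names[idx % len(names)]
        if server not in chosen:
            chosen.append(server)
        idx += 1
    return chosen                                # [primary, first fallback, ...]

servers = [f"s{i}" for i in range(1, 11)]        # 10 hypothetical servers
ring = build_ring(servers)
urls = [f"/files/{i}" for i in range(10000)]     # hypothetical file URLs

# Where does the load of a failed primary go? With pairwise mirroring it would
# all hit one partner; here it spreads across the other servers.
fallback_load = Counter(
    replicas_for(ring, u)[1]                     # replica that takes over
    for u in urls
    if replicas_for(ring, u)[0] == "s3"          # files whose primary is s3
)
print(fallback_load)    # roughly even counts over the remaining nine servers

Whether haproxy can be taught this mapping directly, or the ring has
to live in front of it, is exactly the part I still need to figure
out.
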
--
Best Regards,
Rerngvit Yanggratoke