>
> I thought that memcached was often load balanced using the
> client library but I'm far from an expert.
That is correct. For memcached sharding to work, the selection of the target memcached instance needs to be based on the key of the data being read or written. We use the balancing built into the PHP client library for exactly this reason, balancing over a couple of dozen memcached instances. Note that the library does *not* behave well when servers are down -- it will just report "no data in cache" -- so you will need a quick way of spinning up a replacement memcached instance if one goes down.
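For concreteness, here is a minimal sketch of that kind of client-side setup, assuming the pecl "memcached" extension; the host names, key, and TTL below are made-up examples, not our actual configuration:

    <?php
    // Key-based (consistent) hashing across several memcached instances,
    // done entirely in the client -- no proxy in the middle.
    $mc = new Memcached();
    $mc->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
    $mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);

    // Hypothetical server list; in practice this would be a couple of
    // dozen instances.
    $mc->addServers(array(
        array('mc01.example.internal', 11211),
        array('mc02.example.internal', 11211),
        array('mc03.example.internal', 11211),
    ));

    // The same key always hashes to the same instance. If that instance
    // is down, the client simply reports a miss, as described above.
    $mc->set('user:42:profile', array('name' => 'example'), 300);
    $profile = $mc->get('user:42:profile');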
I've considered proxying memcached through some smart proxy (which would have
to be custom-built), but there's almost no win to be had there, for two
reasons:
1) Once a memcached instance is down, the data is gone. (I believe this is
also true for membase if you lose the box, rather than just the process...)
2) The main limiting factor for us on memcached is the network link. You can
do at most roughly 100 MB/s over a gigabit Ethernet link. If your top-of-rack
switches in turn have only one or a few gigabit uplinks, that bandwidth is
split across the rack and you get even less per machine. If you were to
funnel all memcached traffic through a single host, that host would be
network saturated even if it were on 10 gig links! (In our admittedly pretty
large cluster, anyway.)
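To make reason 2 concrete, a back-of-the-envelope calculation; the link
speeds, uplink count, host count, and overhead factor below are assumptions
for illustration only, not our real numbers:

    <?php
    // Rough arithmetic only -- all figures here are illustrative.
    $nic_gbit    = 1;                      // gigabit NIC on a memcached host
    $raw_mb_s    = $nic_gbit * 1000 / 8;   // ~125 MB/s before overhead
    $usable_mb_s = $raw_mb_s * 0.8;        // ~100 MB/s after framing/TCP overhead

    // If a rack of 20 hosts shares, say, 2 x 1 Gbit of uplink:
    $uplink_mb_s   = 2 * 1000 / 8 * 0.8;   // ~200 MB/s for the whole rack
    $per_host_mb_s = $uplink_mb_s / 20;    // ~10 MB/s per host off-rack

    // Funneling all memcached traffic through one proxy host caps the
    // whole cluster at that single host's link, even if it is 10 Gbit.
    printf("per-NIC: %.0f MB/s, per-host off-rack: %.0f MB/s\n",
        $usable_mb_s, $per_host_mb_s);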
If you want true redundancy in a key/value store, you want something like Riak, but that performs orders of magnitude slower than memcached, because of the redundancy.
Sincerely,
jw
Jon Watte, IMVU.com
We're looking for awesome people! http://www.imvu.com/jobs/
On Tue, Apr 26, 2011 at 11:33 AM, John Marrett <johnf#zioncluster.ca> wrote:
> Richard,
>
> > interesting - so balance source goes to the same target server even if the
> > target server is down?
>
> Once the target server is declared down, requests should no longer be
> sent to it. With the configuration you showed, I would expect all
> traffic to be sent to a single memcached server; if that server were to
> fail, all traffic would then be re-balanced onto the same failover
> server.
>
> > What would be the best way to balance this? Should we not use haproxy
> > for memcached and have the PHP memcached lib handle the balancing?
> > Basically, all the services that our applications depend upon are
> > going to be proxied with haproxy. Elasticsearch, for example, is a
> > cluster, but if you point at any one of its nodes, the data is known
> > by all members. It is just memcached that gives us issues if a server
> > fails. Would roundrobin be better, or should we just leave memcached
> > out of haproxy?
>
> As I previously stated, I don't have that much knowledge of memcached.
> My understanding is that a request for a given key/value pair should
> always be sent to the same memcached server, and that access to
> multiple memcached servers and the distribution of load across them is
> typically handled by the client library.
>
> > If that dies off, any requests that were using that one server will
> > fail until we clear the browser's cache.
>
> I'm quite confused by this statement, which I had missed in your
> original email. How does clearing the browser cache affect the PHP
> application's access to memcached via localhost?
>
> -JohnF
>
>