Hi,
On Sat, Dec 05, 2009 at 12:11:54AM +0100, XANi wrote:
> On Fri, 2009-12-04 at 17:57 -0500, Naveen Ayyagari wrote:
> > The issue we have is that our scripts are dependent on external resources, so php execution time can vary wildly.
(...)
>
> Yes, I meant processor cores. Basically, if you have extreme cases like 80
> processes on 8 cores, then IMO it's better to use fewer processes and queue
> requests in the proxy (too much context switching is a bad thing for
> performance). But if in your case it's just because PHP "waits for
> something" and not because the server is overloaded, it won't change much. You
> might want to consider checking whether other HTTP servers like lighttpd also
> have that "bug".
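To illustrate the "queue requests in the proxy" idea, here is a minimal sketch (the backend name, server address and limit are made up for the example). A per-server maxconn makes haproxy hold excess requests in its own queue instead of stacking them onto the PHP processes:

    backend bk_php
        # never send more than 8 concurrent requests to this server;
        # haproxy queues the rest and releases them as responses complete
        server php1 1.1.1.1:80 maxconn 8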
If you are fetching data from external resources, you may want to split the access between 2 distinct haproxy backends (which might very well point to the same servers). That implies you know which URLs remain local and which ones fetch remote data. Then you can proceed like this:
    frontend www
        acl remote_content path_beg /x/y/z
        use_backend bk_remote if remote_content
        default_backend bk_local

    backend bk_local
        timeout server 5s
        server www1 1.1.1.1 maxconn 100 check

    backend bk_remote
        timeout server 50s
        server www1 1.1.1.1 maxconn 5 track bk_local/www1
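(Note that "track bk_local/www1" makes the server in bk_remote reuse the health-check state of www1 in bk_local, so the machine is only checked once even though it appears in both backends.)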
That way, you allow more time for remote resources, but you don't permit them to fill your queues, as they have a dedicated queue and maxconn.
It's a very basic QoS principle but it works very well because you prevent expensive processing from saturating your servers.
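If you want to observe the effect, haproxy's built-in stats page reports per-backend queue and session counters. A minimal sketch, assuming you are free to pick a port and URI (both are arbitrary here, not from the original setup):

    listen stats
        bind :8404
        mode http
        # expose the statistics report over HTTP
        stats enable
        stats uri /stats

The "Queue" columns for bk_remote will then show slow remote fetches piling up in their own queue rather than in bk_local's.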
Regards,
Willy