Posted by Colin McKinnon on 07/16/06 21:11
writeson@charter.net wrote:
> I'm wondering if anyone has tried a scenario that I'm thinking of. At
> my job we've got a web based product provided by Apache running PHP
> that accesses MySQL. This web application is hosted by multiple servers
> behind a load balancer because of the user load on the system. However,
> we've still had times when the servers got overrun and Apache maxes
> out on the number of httpd processes (257) and falls behind to the
> point of timing out. When this happens and I look at the servers with
> top, they aren't particularly busy, but with 257 big httpd processes (PHP
> and MySQL totalling 15 Megs of RAM), the server is bound up.
>
Whoa, you're rather jumping the gun, aren't you? You're picking a solution
before you really know what the problem is.
With a LAMP stack, when any point in the stack hits a performance limit, the
system rapidly saturates and throughput takes a nosedive. That can happen
for any of several reasons.
I would recommend that you start monitoring the system continuously and
chart the number of httpd processes (if possible, the number of *active*
httpd processes) against memory used, to check that your max-processes limit
is at its optimal value (i.e. not so high that it results in lots of paging).
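As a back-of-the-envelope check on that limit, the arithmetic from the
figures quoted above (15 MB per httpd process) looks like this - the RAM
budget is an assumed example, not a measurement from your boxes:

```shell
# Rough max-processes (MaxClients) sizing sketch. Both numbers are
# illustrative: set ram_for_apache to what is actually free for httpd.
ram_for_apache=1536   # MB of RAM you can spare for httpd (assumption)
per_process=15        # MB per httpd process, the figure quoted above
max_clients=$((ram_for_apache / per_process))
echo "Suggested MaxClients: $max_clients"
```

If that number comes out well below 257, your current limit guarantees
paging once the servers fill up with connections.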
You should also be logging mysql slow queries, along with counting the
number of connections to the mysqld server and the length of the mysql
processlist. Slow queries should be fixable. A long processlist can be more
tricky but could still point to saturation on the DB as the problem.
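For the slow query log, a my.cnf fragment along these lines does the job on
MySQL versions of this era (directive names are from MySQL 4.x/5.0; treat
the 2-second threshold as an example to tune, not a recommendation):

```
[mysqld]
# Log any query taking longer than 2 seconds (threshold is an example)
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
# Optionally also log queries that don't use an index
log-queries-not-using-indexes
```

From the mysql client, SHOW FULL PROCESSLIST and SHOW STATUS LIKE
'Threads_connected' give you the processlist length and connection count to
chart alongside.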
Generally getting the schema right gives far greater performance benefits
than tinkering with the hardware - but the latter can be an issue on some
configurations.
Also watch out for plain bad PHP - unfortunately I don't know any way of
measuring this other than going through the most popular and the slowest
URLs by hand to see if they can be made faster. You are using a PHP
accelerator, aren't you?
Note that a lot of your memory is probably being used up by apache processes
just hanging around while data gets fed down a slow socket. While you can't
speed up somebody else's network connection, you can reduce the load by
moving it away from the httpd processes. This is where Tux would be of
benefit - it would use less memory per connection. However, splitting
the traffic might not be that simple. More conventionally, one would put a
reverse proxy in front of the webservers - squid is a good choice. There's
also an apache module designed specifically for handing over the datafeed
to a less memory-hungry process - the name escapes me for now, but it's
probably easy to find on Google.
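A squid accelerator setup looks roughly like this in squid 2.5-era
configuration (the hostname is a placeholder, and these httpd_accel_*
directives were replaced by `http_port ... accel` options in squid 2.6):

```
# squid.conf sketch: squid 2.5 accelerator (reverse proxy) mode
http_port 80
httpd_accel_host backend.internal    # placeholder: your Apache pool
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_with_proxy off
```

Squid then holds the slow client connections cheaply while Apache's fat
PHP processes are freed as soon as they've generated the page.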
> behind a load balancer
Oh dear. This always sets the alarm bells ringing for me. It usually means
that somebody has got a vendor certificate and somebody got some lunches
bought for them.
I could go on for hours. But I'd have to charge you for it.
Try to work out where your problem is first.
C.