
Re: [Pulp-list] gofer hogging CPU

That was written to my pulp.log every second from 9:50 until 10:09. I'm
assuming that's when the WSGI process crashed (I was at lunch so I
wasn't using the server).

I'm with you on a stack trace, but that's all the logging that was
provided. If I see it again I'll do the thread dump that jconnor added.
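
In case anyone hits it before I do: that kind of handler can be wired
to a signal so you can grab stacks without bouncing httpd. A minimal
sketch of the idea (my own illustration, not necessarily what jconnor
committed; mod_wsgi may also need WSGIRestrictSignal Off for the
handler to register):

    # Hypothetical sketch of a signal-triggered thread dump; not the
    # actual handler jconnor added.
    import signal
    import sys
    import threading
    import traceback

    def dump_threads(signum, frame):
        # Map thread idents to names so the dump is readable.
        names = dict((t.ident, t.name) for t in threading.enumerate())
        for ident, stack in sys._current_frames().items():
            sys.stderr.write("Thread %s (%s):\n" % (ident, names.get(ident)))
            traceback.print_stack(stack, file=sys.stderr)

    signal.signal(signal.SIGUSR1, dump_threads)

Then kill -USR1 <pid> writes every thread's stack to stderr, which
should land in Apache's error_log for the WSGI processes.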

To Jason's point, I looked at error_log. During the same timeframe,
there's a message every second like:

[Mon Mar 14 10:07:49 2011] [notice] child pid 19584 exit signal
Segmentation fault (11)

So I think Jeff's suggestion might be right: this is a symptom of the
WSGI process coming up and down repeatedly. I'll keep poking around the
logs to see if I find anything else.

On 03/14/2011 12:07 PM, Jeff Ortel wrote:
> On 03/14/2011 10:46 AM, Jay Dobies wrote:
>> On 03/14/2011 11:45 AM, Jeff Ortel wrote:
>>> All,
>>> Looks like gofer is pegging the CPU after it's been running for a
>>> while. I'm profiling now but in the meantime, restart the goferd
>>> service as a workaround.
>>> -jeff
>> Not sure if it's related, but I just ran into the WSGI process crashing
>> after the server had been running for about an hour. I typed before I
>> thought and restarted httpd before dumping the threads. But my pulp.log
>> was spammed with the following, so keep an eye out for that as well:
>>        2011-03-14 10:09:08,062 [INFO][MainThread] connect() @
>> broker.py:74 - connecting:
>>        {localhost:5672}:
>>        transport=TCP
>>        host=localhost
>>        port=5672
>>        cacert=/etc/pki/qpid/ca/ca.crt
>>        clientcert=/etc/pki/qpid/client/client.pem
> I've seen this on one of the QE machines.  Broker.connect() is called
> on demand, so something on the pulp server is constantly creating and
> destroying a gofer message consumer or producer.  One thing is strange,
> though: this message is only printed the first time a connection is
> created.  The created connection is cached and reused across all
> sessions, so we should only see it once for the life of the process
> (interpreter) running the pulp application in wsgi.  So it's unclear
> whether something bad is happening in wsgi whereby the pulp application
> process is dying and being re-spawned, or whether this is part of the
> cause.  fwiw, the reconnect logic is handled in python-qpid, so this
> isn't an auto-reconnect-gone-crazy thing.. I don't think.  A stack
> trace would really help.
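
For what it's worth, the cache-on-first-use behavior Jeff describes is
why that connect() message should only show up once per interpreter.
Roughly (a simplified sketch with made-up names, not the real gofer
broker.py):

    # Simplified illustration of a cached broker connection
    # (hypothetical names, not gofer's actual code). connect() only
    # dials and logs on first use; afterwards every session shares the
    # cached connection, so repeated "connecting:" lines mean the
    # process itself keeps dying and respawning.
    import logging
    import threading

    log = logging.getLogger(__name__)

    class Broker(object):
        _lock = threading.Lock()
        _connection = None  # shared for the life of the interpreter

        def __init__(self, host="localhost", port=5672):
            self.host = host
            self.port = port

        def connect(self):
            with Broker._lock:
                if Broker._connection is None:
                    log.info("connecting: %s:%d", self.host, self.port)
                    Broker._connection = self._open()
                return Broker._connection

        def _open(self):
            # stand-in for the real python-qpid connection setup
            return object()

That lines up with the segfaults in error_log: every respawned child
starts with an empty cache and reconnects.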

> _______________________________________________
> Pulp-list mailing list
> Pulp-list@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-list

-- 
Jay Dobies
RHCE# 805008743336126
Freenode: jdob