[Spacewalk-list] Tracebacks (still)

puck at i29.net
Thu Oct 9 20:44:23 UTC 2008


I had a similar issue and found that rhn_register worked where 
rhnreg_ks didn't. I'm not sure it's the same problem you're having, 
but it's worth a try. I also needed to raise the Oracle memory 
settings in the APEX admin interface: by default it allocated only 
128M of RAM despite 1G being available. Bumping that up to 256M or 
512M might help with the CPU issue.
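
For reference, this is roughly what I did. The commands and values 
below are from memory, so treat them as a sketch rather than exact 
steps (the 400M/112M figures in particular are just example numbers):

# interactive registration instead of the kickstart-style tool
sudo /usr/sbin/rhn_register

# raise the Oracle XE memory targets from SQL*Plus instead of the
# APEX admin page, then restart the instance so they take effect
sqlplus / as sysdba
SQL> ALTER SYSTEM SET sga_target = 400M SCOPE=spfile;
SQL> ALTER SYSTEM SET pga_aggregate_target = 112M SCOPE=spfile;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;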

Now the only issue I have left is that yum can't refresh its package 
list from the rhn server; it throws errors whenever it tries, though 
the updates themselves do work. rhn_check gives me the same error, 
and so far the only way I've found to sync packages is to re-register 
with rhn_register. But that's another thread :)
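
If anyone hits the same thing, running the client tools with extra 
verbosity may at least surface the underlying error. This is only a 
debugging suggestion, not a fix:

# both tools accept -v; repeat it for more detail
sudo rhn_check -vv
sudo yum -v makecache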

Jem


Jason Frisvold wrote:
> Hi all,
>
> I'm still getting tracebacks with Spacewalk 0.2.  I have Spacewalk
> running on a dev server (Celeron 2.4 GHz, 1 GB RAM, CentOS 5).  Could
> the low specs of the machine be the problem?  When I attempt to
> register a client:
>
> sudo /usr/sbin/rhnreg_ks --serverUrl=http://dev.example.com/XMLRPC
> --activationkey=1-centos-5.2 --force
>
> I can watch the CPU on the dev machine hit 100% for about 1-2 seconds,
> then drop to 98-99% idle.  I would expect just a slow response as
> opposed to a complete failure.  And everything was working quite well
> with 0.1...
>
> I have tried re-installing 0.2 by wiping out Oracle and reinstalling
> it, but that doesn't seem to have helped.
>
> Any idea what's going on here?
>
> /var/log/up2date on the client shows this:
>
> Traceback (most recent call last):
>   File "/usr/sbin/rhnreg_ks", line 267, in ?
>     cli.run()
>   File "/usr/share/rhn/up2date_client/rhncli.py", line 65, in run
>     sys.exit(self.main() or 0)
>   File "/usr/sbin/rhnreg_ks", line 155, in main
>     rhnreg.sendPackages(systemId, packageList)
>   File "/usr/share/rhn/up2date_client/rhnreg.py", line 644, in sendPackages
>     s.registration.add_packages(systemId, packageList)
>   File "/usr/share/rhn/up2date_client/rhnserver.py", line 50, in __call__
>     return rpcServer.doCall(method, *args, **kwargs)
>   File "/usr/share/rhn/up2date_client/rpcServer.py", line 263, in doCall
>     raise up2dateErrors.CommunicationError(e.errmsg)
> up2date_client.up2dateErrors.CommunicationError: Error communicating
> with server. The message was:
> Internal Server Error
>
> /var/log/tomcat5/catalina.out on the server shows this:
>
> Oct 9, 2008 4:23:34 PM
> com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
> WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@cd32e5
> -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned
> pending tasks!
> Oct 9, 2008 4:23:34 PM
> com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
> WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@cd32e5
> -- APPARENT DEADLOCK!!! Complete Status: [num_managed_threads: 3,
> num_active: 3; activeTasks:
> com.mchange.v2.resourcepool.BasicResourcePool$AsyncTestIdleResourceTask@460d4
> (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1),
> com.mchange.v2.resourcepool.BasicResourcePool$AsyncTestIdleResourceTask@170fe4e
> (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0),
> com.mchange.v2.resourcepool.BasicResourcePool$AsyncTestIdleResourceTask@147c302
> (com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2);
> pendingTasks: com.mchange.v2.resourcepool.BasicResourcePool$AsyncTestIdleResourceTask@53d29b,
> com.mchange.v2.resourcepool.BasicResourcePool$AsyncTestIdleResourceTask@127d4d9]
>
> Thanks!
>
