[Spacewalk-list] Taskomatic runs indefinitely without ever generating repodata

Florence Savary florence.savary.fs at gmail.com
Fri Jul 20 09:40:00 UTC 2018


Hello all,

For your information, it seems we have found the solution here. Gerald,
could you check the table rhnRepoRegenQueue in your database? We
found that something kept filling this table with duplicate rows: we
had 45,000,000 rows in it, and it kept growing endlessly. We think
that something went wrong at some point during a repodata
regeneration, after which the systems registered on the Spacewalk server
could no longer access the repodata files. Every time a system then tried
to access the repodata, it added one additional row to the table, filling
it endlessly (the rhn-check process running every five minutes on our
systems probably didn't help either).
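To see whether the same thing is happening on your side, the queue can be inspected directly. This is only a sketch: the statements assume the rhnRepoRegenQueue columns of our Spacewalk schema, so verify them against your version (they can be fed through spacewalk-sql, as shown elsewhere in this thread):

```sql
-- How big has the regeneration queue grown?
SELECT COUNT(*) FROM rhnRepoRegenQueue;

-- Which channels are queued over and over again?
SELECT channel_label, COUNT(*) AS entries
  FROM rhnRepoRegenQueue
 GROUP BY channel_label
 ORDER BY entries DESC;
```

A count far above the number of channels you actually have (we had 45 million rows) points to the same duplicate-row problem.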

If that is also your case, Gerald, you should be able to delete all
existing rows first to clean the table, and right after that force the
regeneration of the rows that reappear with the SQL command *update
rhnRepoRegenQueue set force = 'Y';*
Like us, you may have to run this command several times to catch the
new rows as they appear, but once the existing ones are forced, the
regeneration process should run normally, enabling the systems to access
the repodata again and making them stop filling the table.
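Concretely, the cleanup described above amounts to two statements (a sketch; note that the first one empties the whole queue, so only run it once you are sure the queued entries are duplicates):

```sql
-- Step 1: drop the accumulated duplicate rows
DELETE FROM rhnRepoRegenQueue;

-- Step 2: force regeneration for rows that have reappeared since;
-- repeat as needed while clients keep adding new rows
UPDATE rhnRepoRegenQueue SET force = 'Y';
```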

Bit by bit, we managed to get the repodata bunch to complete, and eventually
the table rhnRepoRegenQueue was empty. That being said, we still don't know
the root cause of this behavior, or what led to such a huge table in
the db. I will monitor taskomatic carefully over the next few days and see
what happens... We may face the same problem sooner or later, and may have
to clean the table and force the regeneration again...

Not sure if I made myself clear.

Thanks to all for your help anyway.

Regards,
Florence



2018-07-19 13:17 GMT+02:00 Brian Long <briandlong at gmail.com>:

> Gerald,
> Months ago I ran into a similar issue on our SW 2.6 server and I found
> that, because of a full filesystem issue, our database had become corrupt.
> When SW 2.7 came out, I set up a fresh Spacewalk VM, fresh channels, etc.
> and made sure everything was working.  A coworker and I migrated all our
> clients to the SW 2.7 server and everything has been good so far.  SW 2.8
> was released right after we performed the migration, but I'm wary of
> performing an upgrade with all the emails I've seen about issues with SW
> 2.8.  When the time comes, I'll probably shut down all spacewalk and
> postgres services, snapshot my VM, perform the upgrade and do a bunch of
> testing with the non-prod clients before I remove the snapshot.  :)
>
> /Brian/
>
>
> On Wed, Jul 18, 2018 at 9:19 AM, Gerald Vogt <vogt at spamcop.net> wrote:
>
>> On 14.07.18 16:07, Paul Dias - BCX wrote:
>>
>>> Hi,
>>>
>>>
>>> if I remember correctly, you can put tomcat in debug mode and run it
>>> directly, without calling the service, which forks it into the background,
>>> so that it displays in the console. That way you can see what is happening when
>>>
>>
>> I don't think taskomatic runs under tomcat. I can run taskomatic on the
>> console, but that only prints all the log lines on stdout and nothing
>> further.
>>
>> you run a job. Also, in /usr/share/rhn/config-defaults/rhn_taskomatic,
>>> there are options with which you can increase the level of logging, from
>>> what I can see. I can't remember clearly, but there was a post about
>>> troubleshooting that I was going through a couple of weeks ago; I can't
>>> remember the address, to be honest!
>>>
>>
>> Even with DEBUG log level it doesn't show anything beyond the fact that
>> it starts the channel-repodata task and never finishes... I have no idea
>> what it is actually doing there...
>>
>> I am stumped. Currently, we are unable to update our spacewalk server...
>>
>> Thanks,
>>
>> Gerald
>>
>>
>>
>>>
>>> Regards,
>>> *Paul Dias*
>>> 6^th  Floor, 8 Boundary Road
>>> Newlands
>>> Cape Town
>>> 7700
>>> T: +27 (0) 21 681 3149
>>>
>>> *Meet your future today.*
>>> BCX
>>>
>>>
>>> ------------------------------------------------------------------------
>>> *From:* Gerald Vogt <vogt at spamcop.net>
>>> *Sent:* Friday, 13 July 2018 8:56 AM
>>> *To:* spacewalk-list at redhat.com
>>> *Subject:* Re: [Spacewalk-list] Taskomatic runs indefinitely without
>>> ever generating repodata
>>>
>>> Anyone any idea how to troubleshoot this? Any debug logging we could
>>> enable to find out what's really going on and where it's hanging?
>>>
>>> Thanks,
>>>
>>> Gerald
>>>
>>> On 06.07.18 09:20, Gerald Vogt wrote:
>>>
>>>> On 05.07.18 21:30, Matt Moldvan wrote:
>>>>
>>>>> Is there anything interesting in /var/log/rhn/tasko/sat/channel-repodata-bunch?
>>>>> Do you have any hung
>>>>>
>>>>
>>>> There is currently only a single file with this content:
>>>>
>>>> spacewalk:channel-repodata-bunch(996)# ls -l
>>>> total 4
>>>> -rw-r--r--. 1 root root 130 Jul  2 08:13 channel-repodata_15408487_out
>>>> spacewalk:channel-repodata-bunch(997)# cat
>>>> channel-repodata_15408487_out
>>>> 2018-07-02 08:13:10,793 [DefaultQuartzScheduler_Worker-8] INFO
>>>> com.redhat.rhn.taskomatic.task.ChannelRepodata  - In the queue: 4
>>>>
>>>> reposync processes?  Any lingering Postgres locks that might be an
>>>>> issue?
>>>>>
>>>>
>>>> No reposync processes. All postgres processes say "idle", so I guess
>>>> there are no locks. Or how do I check for lingering locks?
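For reference, lingering locks can be listed straight from PostgreSQL's pg_locks and pg_stat_activity system views. A sketch (in PostgreSQL releases before 9.2, the state and query columns are not available in this form; older releases expose current_query instead):

```sql
-- Sessions waiting on an ungranted lock, with what they are running
SELECT l.pid, l.locktype, l.mode, a.state, a.query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE NOT l.granted;
```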
>>>>
>>>> It's odd that the run would only take 1 second, unless something is
>>>>> wrong with the database or it's data...
>>>>>
>>>>> What do you see from a spacewalk-sql command like below?
>>>>>
>>>>
>>>> I see all the channels:
>>>>
>>>>                  label               |
>>>> name                      |           modified            |
>>>> last_synced
>>>> -----------------------------------+------------------------
>>>> ------------------------+-------------------------------+---
>>>> -------------------------
>>>>
>>>>    icinga-epel7-x86_64               | ICINGA stable release for
>>>> epel-7 (x86_64)      | 2016-02-15 10:07:59.822942+01 | 2018-07-06
>>>> 02:30:55.482+02
>>>>    epel7-centos7-x86_64              | EPEL 7 for CentOS 7
>>>> (x86_64)                   | 2014-07-21 08:16:26.367135+02 | 2018-07-06
>>>> 04:01:52.148+02
>>>>    centos6-x86_64-extras             | CentOS 6 Extras
>>>> (x86_64)                       | 2012-08-23 06:46:05.145629+02 | 2018-06-21
>>>> 10:25:26.104+02
>>>>    grafana-epe7-x86_64               | Grafana stable release for
>>>> epel-7 (x86_64)     | 2016-05-06 08:29:49.308149+02 | 2018-06-21
>>>> 04:58:15.022+02
>>>>    spacewalk26-client-centos6-x86_64 | Spacewalk Client 2.6 for CentOS
>>>> 6 (x86_64)     | 2017-04-25 13:44:49.266738+02 | 2018-06-21 10:41:07.369+02
>>>>    globus-el6-x86_64                 | Globus Toolkit 6
>>>> (el6)                         | 2016-05-13 15:23:31.807011+02 | 2018-07-06
>>>> 03:34:49.95+02
>>>>    internet2                         | perfSONAR RPM
>>>> Repository                       | 2017-06-27 06:56:33.675378+02 |
>>>> 2018-06-22 10:24:41.702+02
>>>>    postgresql94-centos6-x86_64       | PostgreSQL 9.4 for CentOS 6
>>>> (x86_64)           | 2015-01-28 14:09:41.856451+01 | 2018-06-21
>>>> 10:42:01.413+02
>>>>    spacewalk26-server-centos6-x86_64 | Spacewalk Server 2.6 for CentOS
>>>> 6 (x86_64)     | 2017-04-25 13:39:38.250769+02 | 2018-06-21 10:36:17.46+02
>>>>    centos7-x86_64-fasttrack          | CentOS 7 FastTrack
>>>> (x86_64)                    | 2014-07-21 08:16:26.017642+02 | 2018-06-21
>>>> 10:26:29.571+02
>>>>    spacewalk26-client-centos7-x86_64 | Spacewalk Client 2.6 for CentOS
>>>> 7 (x86_64)     | 2017-04-25 13:46:00.107344+02 | 2018-06-22 10:22:28.484+02
>>>>    centos7-x86_64-centosplus         | CentOS 7 Plus
>>>> (x86_64)                         | 2014-07-21 08:16:25.467309+02 |
>>>> 2018-06-21 10:25:19.884+02
>>>>    centos6-x86_64-centosplus         | CentOS 6 Plus
>>>> (x86_64)                         | 2012-08-23 07:18:00.349338+02 |
>>>> 2018-06-21 10:36:04.08+02
>>>>    docker-ce-centos7-x86_64          | Docker CE Stable for CentOS 7
>>>> (x86_64)         | 2017-09-28 12:52:45.858354+02 | 2018-07-06
>>>> 04:30:05.442+02
>>>>    postgresql10-centos7-x86_64       | PostgreSQL 10 for CentOS 7
>>>> (x86_64)            | 2018-02-12 14:48:14.617235+01 | 2018-02-12
>>>> 15:06:16.464+01
>>>>    bareos162-centos7-x86_64          | Bareos 16.2 for CentOS 7
>>>> (x86_64)              | 2017-09-26 14:37:16.533773+02 | 2018-06-21
>>>> 04:59:21.954+02
>>>>    docker-ce-edge-centos7-x86_64     | Docker CE Edge for CentOS 7
>>>> (x86_64)           | 2017-12-29 09:58:14.581069+01 | 2018-06-21
>>>> 04:59:39.796+02
>>>>    beegfs6-centos7-x86_64            | BeeGFS 6 for CentOS 7
>>>> (x86_64)                 | 2018-03-19 14:08:08.389588+01 | 2018-06-21
>>>> 04:59:43.132+02
>>>>    icinga-epel6-x86_64               | ICINGA stable release for
>>>> epel-6 (x86_64)      | 2018-01-15 15:41:31.138875+01 | 2018-07-06
>>>> 02:30:28.142+02
>>>>    openstack-pike-centos7            | OpenStack Pike for CentOS
>>>> 7                    | 2017-10-05 09:10:22.575224+02 | 2018-06-21
>>>> 05:36:35.43+02
>>>>    globus-el7-x86_64                 | Globus Toolkit 6
>>>> (el7)                         | 2017-09-28 13:00:07.32028+02  | 2018-07-06
>>>> 03:31:22.806+02
>>>>    postgresql10-centos6-x86_64       | PostgreSQL 10 for CentOS 6
>>>> (x86_64)            | 2018-02-12 14:48:55.970013+01 | 2018-07-06
>>>> 04:02:04.03+02
>>>>    ceph-jewel-centos7                | CentOS 7 Ceph Jewel
>>>> (x86_64)                   | 2018-02-12 12:15:28.8976+01   | 2018-07-06
>>>> 05:30:07.085+02
>>>>    spacewalk28-server-centos6-x86_64 | Spacewalk Server 2.8 for CentOS
>>>> 6 (x86_64)     | 2018-06-22 18:05:55.190988+02 | 2018-07-06 06:18:15.016+02
>>>>    spacewalk28-client-centos7-x86_64 | Spacewalk Client 2.8 for CentOS
>>>> 7 (x86_64)     | 2018-06-22 18:05:55.575963+02 | 2018-07-06 06:18:22.41+02
>>>>    puppet5-el7-x86_64                | Puppet 5 for EL 7
>>>> (x86_64)                     | 2018-03-28 14:20:52.254978+02 | 2018-06-21
>>>> 06:01:31.357+02
>>>>    centos7-qemu-ev                   | CentOS 7 QEMU EV
>>>> (x86_64)                      | 2018-02-12 12:15:06.116673+01 | 2018-07-06
>>>> 05:30:12.078+02
>>>>    bareos172-centos7-x86_64          | Bareos 17.2 for CentOS 7
>>>> (x86_64)              | 2018-05-08 14:18:56.708206+02 | 2018-06-22
>>>> 10:24:48.431+02
>>>>    openstack-queens-centos7          | OpenStack Queens for CentOS
>>>> 7                  | 2018-03-28 13:08:27.607498+02 | 2018-07-06
>>>> 05:30:44.123+02
>>>>    elrepo-centos7                    | ELRepo for CentOS
>>>> 7                            | 2017-09-18 12:03:42.302442+02 | 2018-06-21
>>>> 05:01:30.303+02
>>>>    spacewalk28-client-centos6-x86_64 | Spacewalk Client 2.8 for CentOS
>>>> 6 (x86_64)     | 2018-06-22 18:05:53.475193+02 | 2018-07-06 06:18:07.158+02
>>>>    centos7-x86_64-extras             | CentOS 7 Extras
>>>> (x86_64)                       | 2014-07-21 08:16:25.841121+02 | 2018-06-21
>>>> 10:26:27.879+02
>>>>    internet2-web100_kernel           | perfSONAR Web100 Kernel RPM
>>>> Repository         | 2017-06-27 06:57:03.825602+02 | 2018-06-21
>>>> 10:24:50.96+02
>>>>    centos6-x86_64-updates            | CentOS 6 Updates
>>>> (x86_64)                      | 2012-08-23 06:46:05.264195+02 | 2018-06-21
>>>> 10:34:24.866+02
>>>>    centos7-x86_64-updates            | CentOS 7 Updates
>>>> (x86_64)                      | 2014-07-21 08:16:26.196397+02 | 2018-07-02
>>>> 09:48:02.273+02
>>>>    centos6-x86_64-fasttrack          | CentOS 6 FastTrack
>>>> (x86_64)                    | 2012-08-23 06:46:05.205228+02 | 2018-06-22
>>>> 10:24:43.51+02
>>>>    postgresql92-centos6-x86_64       | PostgreSQL 9.2 for CentOS 6
>>>> (x86_64)           | 2012-09-12 08:15:27.194188+02 | 2018-07-06
>>>> 03:47:12.311+02
>>>>    epel6-centos6-x86_64              | EPEL 6 for CentOS 6
>>>> (x86_64)                   | 2012-08-23 06:46:30.597753+02 | 2018-07-06
>>>> 03:55:48.834+02
>>>>    jpackage5.0-generic               | JPackage 5.0 for
>>>> generic                       | 2014-07-02 10:32:24.985979+02 | 2018-07-06
>>>> 03:46:46.084+02
>>>>    hp-spp-rhel-7                     | HP Software Delivery Repository
>>>> for SPP RHEL 7 | 2015-04-16 14:18:33.041249+02 | 2018-07-06 05:31:01.633+02
>>>>    owncloud-centos7-noarch           | ownCloud for CentOS
>>>> 7                          | 2015-01-28 13:53:41.415573+01 | 2018-06-21
>>>> 05:36:40.901+02
>>>>    centos7-x86_64-scl                | CentOS 7 SCL
>>>> (x86_64)                          | 2016-04-15 11:26:29.042925+02 |
>>>> 2018-06-21 05:14:49.359+02
>>>>    postgresql96-centos6-x86_64       | PostgreSQL 9.6 for CentOS 6
>>>> (x86_64)           | 2017-02-09 15:31:54.632728+01 | 2018-06-21
>>>> 05:00:39.353+02
>>>>    postgresql96-centos7-x86_64       | PostgreSQL 9.6 for CentOS 7
>>>> (x86_64)           | 2017-02-09 15:35:13.136001+01 | 2018-07-06
>>>> 04:30:19.645+02
>>>>    centos6-x86_64                    | CentOS 6
>>>> (x86_64)                              | 2012-08-23 06:46:04.610089+02
>>>> | 2018-07-05 22:03:30.089+02
>>>>    centos7-x86_64                    | CentOS 7
>>>> (x86_64)                              | 2014-07-21 08:16:24.172395+02
>>>> | 2018-07-05 22:08:45.242+02
>>>>
>>>> -Gerald
>>>>
>>>>
>>>>> echo 'select label,name,modified,last_synced from rhnchannel' | sudo
>>>>> spacewalk-sql -i
>>>>>
>>>>> label | name | modified|last_synced
>>>>>
>>>>> ----------------------------------+-------------------------
>>>>> ---------+-------------------------------+----------------------------
>>>>>
>>>>>
>>>>> ovirt-x86_64-stable-6-nonprod| ovirt-x86_64-stable-6-nonprod|
>>>>> 2015-09-14 13:46:44.147134-05 |
>>>>>
>>>>> extras7-x86_64-nonprod | extras7-x86_64-nonprod | 2017-11-06
>>>>> 10:26:30.011283-06 |
>>>>>
>>>>> centos7-x86_64-all | centos7-x86_64-all | 2015-11-11
>>>>> 08:50:58.831234-06 | 2018-07-05 11:01:08.857-05
>>>>>
>>>>> perl-5.16.x-all| perl-5.16.x-all| 2015-09-11 13:25:15.002198-05 |
>>>>> 2015-09-11 13:29:21.361-05
>>>>>
>>>>> ovirt-x86_64-stable-6| ovirt-x86_64-stable-6| 2015-09-14
>>>>> 13:30:55.172-05|
>>>>>
>>>>> ovirt-x86_64-stable-6-prod | ovirt-x86_64-stable-6-prod | 2015-09-14
>>>>> 13:48:06.637063-05 |
>>>>>
>>>>> other6-x86_64-all| other6-x86_64-all| 2015-07-28 09:20:38.156104-05 |
>>>>>
>>>>> epel5-x86_64-all | epel5-x86_64-all | 2016-10-04 18:20:44.846312-05 |
>>>>> 2017-04-17 12:57:36.859-05
>>>>>
>>>>> passenger6-x86_64-prod | passenger6-x86_64-prod | 2016-04-22
>>>>> 14:35:45.395518-05 |
>>>>>
>>>>> perl-5.16.x-nonprod| perl-5.16.x-nonprod| 2015-09-11
>>>>> 13:27:32.261063-05 |
>>>>>
>>>>> perl-5.16.x-prod | perl-5.16.x-prod | 2015-09-11 13:26:40.584715-05 |
>>>>> 2015-09-11 13:29:38.537-05
>>>>>
>>>>> other6-x86_64-nonprod| other6-x86_64-nonprod| 2015-07-23
>>>>> 15:00:03.733479-05 |
>>>>>
>>>>> other6-x86_64-prod | other6-x86_64-prod | 2015-07-21
>>>>> 15:10:48.719528-05 |
>>>>>
>>>>> epel5-x86_64-prod| epel5-x86_64-prod| 2016-10-04 18:25:38.655383-05 |
>>>>>
>>>>> passenger6-x86_64-all| passenger6-x86_64-all| 2016-04-20
>>>>> 11:37:19.002493-05 | 2016-04-20 11:58:42.312-05
>>>>>
>>>>> docker7-x86_64-prod| docker7-x86_64-prod| 2017-08-03
>>>>> 11:42:08.474496-05 |
>>>>>
>>>>> centos5-x86_64-nonprod | centos5-x86_64-nonprod | 2015-06-22
>>>>> 16:16:17.372799-05 |
>>>>>
>>>>> other7-x86_64-nonprod| other7-x86_64-nonprod| 2016-07-14
>>>>> 13:03:10.320136-05 |
>>>>>
>>>>> mongo3.2-centos6-x86_64-all| mongo3.2-centos6-x86_64-all| 2016-08-22
>>>>> 12:21:40.722182-05 | 2018-07-01 12:27:03.019-05
>>>>>
>>>>> centos5-x86_64-prod| centos5-x86_64-prod| 2015-06-22
>>>>> 16:20:41.474486-05 |
>>>>>
>>>>> passenger6-x86_64-nonprod| passenger6-x86_64-nonprod| 2016-04-20
>>>>> 12:29:24.677227-05 |
>>>>>
>>>>> other7-x86_64-prod | other7-x86_64-prod | 2016-07-14
>>>>> 13:03:47.284295-05 |
>>>>>
>>>>> cloudera5.7-x86_64-nonprod | cloudera5.7-x86_64-nonprod | 2016-05-09
>>>>> 12:10:16.496626-05 | 2016-06-20 13:11:20.62-05
>>>>>
>>>>> epel5-x86_64-nonprod | epel5-x86_64-nonprod | 2016-10-04
>>>>> 18:25:09.844486-05 |
>>>>>
>>>>> epel6-x86_64-prod| epel6-x86_64-prod| 2016-03-18 11:52:45.9199-05 |
>>>>> 2016-08-23 05:07:37.967-05
>>>>>
>>>>> spacewalk6-client-all| spacewalk6-client-all| 2017-05-02
>>>>> 20:53:38.867018-05 | 2018-07-01 22:02:11.386-05
>>>>>
>>>>> docker7-x86_64-nonprod | docker7-x86_64-nonprod | 2017-04-07
>>>>> 15:13:44.158973-05 |
>>>>>
>>>>> mongo3.2-centos6-x86_64-nonprod| mongo3.2-centos6-x86_64-nonprod|
>>>>> 2016-08-22 12:34:18.095059-05 |
>>>>>
>>>>> mongo3.2-centos6-x86_64-prod | mongo3.2-centos6-x86_64-prod |
>>>>> 2016-08-22 12:42:19.161165-05 |
>>>>>
>>>>> local6-x86_64-all| local6-x86_64-all| 2015-09-30 08:55:37.657412-05 |
>>>>> 2016-04-19 07:00:23.632-05
>>>>>
>>>>> centos5-x86_64-all | centos5-x86_64-all | 2015-06-22
>>>>> 15:20:22.085465-05 | 2017-04-17 13:09:39.635-05
>>>>>
>>>>> spacewalk5-client-nonprod| spacewalk5-client-nonprod| 2017-05-02
>>>>> 20:53:20.430795-05 |
>>>>>
>>>>> spacewalk5-client-prod | spacewalk5-client-prod | 2017-05-02
>>>>> 20:53:28.980968-05 |
>>>>>
>>>>> spacewalk5-client-all| spacewalk5-client-all| 2017-05-02
>>>>> 20:53:08.276664-05 | 2018-07-05 10:10:11.665-05
>>>>>
>>>>> spacewalk7-client-prod | spacewalk7-client-prod | 2017-05-02
>>>>> 20:54:32.321635-05 | 2018-07-05 11:01:14.499-05
>>>>>
>>>>> epel6-x86_64-nonprod | epel6-x86_64-nonprod | 2016-03-18
>>>>> 11:52:14.915108-05 | 2018-07-05 10:10:08.774-05
>>>>>
>>>>> centos7-x86_64-prod| centos7-x86_64-prod| 2015-11-11 09:02:06.69758-06|
>>>>>
>>>>> puppetlabs6-x86_64-prod| puppetlabs6-x86_64-prod| 2016-04-22
>>>>> 13:46:22.233841-05 | 2018-07-01 13:30:47.635-05
>>>>>
>>>>> puppetlabs5-x86_64-nonprod | puppetlabs5-x86_64-nonprod | 2018-03-26
>>>>> 15:21:59.007749-05 | 2018-07-01 13:00:03.401-05
>>>>>
>>>>> puppetlabs5-x86_64-prod| puppetlabs5-x86_64-prod| 2018-03-26
>>>>> 15:24:23.86552-05| 2018-07-01 13:30:39.025-05
>>>>>
>>>>> puppetlabs5-x86_64-all | puppetlabs5-x86_64-all | 2018-03-26
>>>>> 15:19:04.647981-05 | 2018-07-01 13:31:25.065-05
>>>>>
>>>>> other5-x86_64-all| other5-x86_64-all| 2015-08-10 14:16:01.092867-05 |
>>>>>
>>>>> other5-x86_64-nonprod| other5-x86_64-nonprod| 2015-08-10
>>>>> 14:18:05.114541-05 |
>>>>>
>>>>> other5-x86_64-prod | other5-x86_64-prod | 2015-08-10
>>>>> 14:19:03.728982-05 |
>>>>>
>>>>> centos6-x86_64-nonprod | centos6-x86_64-nonprod | 2015-06-22
>>>>> 16:24:07.137207-05 |
>>>>>
>>>>> centos6-x86_64-prod| centos6-x86_64-prod| 2015-06-22
>>>>> 16:28:51.324002-05 |
>>>>>
>>>>> extras7-x86_64-all | extras7-x86_64-all | 2017-08-16 09:13:26.8122-05
>>>>> | 2018-07-05 10:05:10.626-05
>>>>>
>>>>> centos6-x86_64-gitlab-ce-nonprod | centos6-x86_64-gitlab-ce-nonprod |
>>>>> 2017-04-17 11:43:36.609036-05 | 2018-07-05 10:04:57.277-05
>>>>>
>>>>> spacewalk7-server-all| spacewalk7-server-all| 2017-03-28
>>>>> 15:22:31.851414-05 | 2018-07-05 11:11:31.564-05
>>>>>
>>>>> local5-x86_64-all| local5-x86_64-all| 2016-02-24 12:19:36.791459-06 |
>>>>>
>>>>> local5-x86_64-nonprod| local5-x86_64-nonprod| 2016-02-24
>>>>> 12:20:19.404008-06 |
>>>>>
>>>>> local5-x86_64-prod | local5-x86_64-prod | 2016-02-24
>>>>> 12:20:45.098532-06 |
>>>>>
>>>>> local6-x86_64-nonprod| local6-x86_64-nonprod| 2016-08-22
>>>>> 20:49:56.7376-05 |
>>>>>
>>>>> local7-x86_64-all| local7-x86_64-all| 2016-07-14 13:00:32.511851-05 |
>>>>>
>>>>> local7-x86_64-nonprod| local7-x86_64-nonprod| 2016-07-14
>>>>> 13:02:06.932169-05 |
>>>>>
>>>>> local7-x86_64-prod | local7-x86_64-prod | 2016-07-14
>>>>> 13:02:38.496912-05 |
>>>>>
>>>>> puppetlabs6-x86_64-all | puppetlabs6-x86_64-all | 2016-04-20
>>>>> 08:27:56.026914-05 | 2018-07-01 13:30:36.771-05
>>>>>
>>>>> spacewalk7-client-nonprod| spacewalk7-client-nonprod| 2017-05-02
>>>>> 20:54:22.659512-05 | 2018-07-05 11:10:25.009-05
>>>>>
>>>>> docker7-x86_64-all | docker7-x86_64-all | 2017-03-22
>>>>> 12:50:15.332561-05 | 2018-07-05 13:00:02.988-05
>>>>>
>>>>> spacewalk7-client-all| spacewalk7-client-all| 2017-05-02
>>>>> 20:54:13.5076-05 | 2018-07-05 10:04:59.748-05
>>>>>
>>>>> local6-x86_64-prod | local6-x86_64-prod | 2015-09-30
>>>>> 08:59:12.679727-05 |
>>>>>
>>>>> centos6-x86_64-gitlab-ee-nonprod | centos6-x86_64-gitlab-ee-nonprod |
>>>>> 2016-04-14 11:39:01.432444-05 | 2018-07-05 11:12:20.525-05
>>>>>
>>>>> mysqltools6-x86_64-all | mysqltools6-x86_64-all | 2016-03-17
>>>>> 12:41:37.44854-05| 2018-07-05 12:00:02.319-05
>>>>>
>>>>> mysqltools6-x86_64-nonprod | mysqltools6-x86_64-nonprod | 2016-03-17
>>>>> 12:58:35.036373-05 |
>>>>>
>>>>> mysqltools6-x86_64-prod| mysqltools6-x86_64-prod| 2016-03-17
>>>>> 12:59:10.969162-05 |
>>>>>
>>>>> spacewalk7-server-nonprod| spacewalk7-server-nonprod| 2017-03-28
>>>>> 15:23:02.210349-05 | 2018-07-05 11:12:47.471-05
>>>>>
>>>>> spacewalk7-server-prod | spacewalk7-server-prod | 2017-03-28
>>>>> 15:23:29.309042-05 | 2017-05-02 20:56:45.247-05
>>>>>
>>>>> epel7-x86_64-prod| epel7-x86_64-prod| 2016-03-22 09:48:38.060213-05 |
>>>>> 2018-07-05 09:57:25.861-05
>>>>>
>>>>> puppetlabs6-x86_64-nonprod | puppetlabs6-x86_64-nonprod | 2016-04-20
>>>>> 12:28:55.337125-05 | 2018-07-01 13:30:43.362-05
>>>>>
>>>>> newrelic-noarch-nover| newrelic-noarch-nover| 2016-10-13
>>>>> 13:54:38.621333-05 | 2016-10-13 14:09:41.778-05
>>>>>
>>>>> other7-x86_64-all| other7-x86_64-all| 2016-07-14 13:01:25.848215-05 |
>>>>> 2018-07-05 14:00:03.714-05
>>>>>
>>>>> spacewalk6-client-nonprod| spacewalk6-client-nonprod| 2017-05-02
>>>>> 20:53:50.507298-05 |
>>>>>
>>>>> spacewalk6-client-prod | spacewalk6-client-prod | 2017-05-02
>>>>> 20:54:00.685324-05 |
>>>>>
>>>>> spacewalk6-server-all| spacewalk6-server-all| 2018-06-22
>>>>> 23:11:30.637054-05 | 2018-07-05 11:01:11.543-05
>>>>>
>>>>> puppetlabs7-x86_64-prod| puppetlabs7-x86_64-prod| 2016-07-14
>>>>> 13:29:04.67033-05| 2018-07-01 13:31:29.425-05
>>>>>
>>>>> spacewalk6-server-nonprod| spacewalk6-server-nonprod| 2018-06-22
>>>>> 23:17:20.660409-05 |
>>>>>
>>>>> spacewalk6-server-prod | spacewalk6-server-prod | 2018-06-22
>>>>> 23:18:02.738869-05 |
>>>>>
>>>>> puppetlabs7-x86_64-nonprod | puppetlabs7-x86_64-nonprod | 2016-07-14
>>>>> 13:28:34.475051-05 | 2018-07-01 13:16:25.948-05
>>>>>
>>>>> epel6-x86_64-all | epel6-x86_64-all | 2016-03-18 11:50:17.587171-05 |
>>>>> 2018-07-05 11:07:42.644-05
>>>>>
>>>>> centos6-x86_64-gitlab-ee | centos6-x86_64-gitlab-ee | 2015-12-24
>>>>> 13:21:10.493684-06 | 2018-07-05 11:08:30.039-05
>>>>>
>>>>> puppetlabs7-x86_64-all | puppetlabs7-x86_64-all | 2016-07-14
>>>>> 12:54:59.388232-05 | 2018-07-01 13:32:02.745-05
>>>>>
>>>>> epel7-x86_64-nonprod | epel7-x86_64-nonprod | 2016-03-22
>>>>> 09:47:34.668867-05 | 2017-04-21 11:08:24.573-05
>>>>>
>>>>> centos6-x86_64-all | centos6-x86_64-all | 2015-06-22
>>>>> 15:19:13.053429-05 | 2018-07-02 01:12:57.768-05
>>>>>
>>>>> epel7-x86_64-all | epel7-x86_64-all | 2016-03-22 09:44:48.748142-05 |
>>>>> 2018-07-05 09:11:28.553-05
>>>>>
>>>>> centos7-x86_64-nonprod | centos7-x86_64-nonprod | 2015-10-21
>>>>> 22:02:28.107902-05 |
>>>>>
>>>>> (85 rows)
>>>>>
>>>>>
>>>>> On Thu, Jul 5, 2018 at 11:48 AM Gerald Vogt <vogt at spamcop.net> wrote:
>>>>>
>>>>>     On 05.07.18 16:05, Matt Moldvan wrote:
>>>>>      > How is the server utilization with respect to disk I/O
>>>>> (something
>>>>>     like
>>>>>      > iotop or htop might help here)?  Maybe there is something else
>>>>>     blocking
>>>>>
>>>>>     My server is basically idle. 99% idle, little disk i/o. It doesn't
>>>>>     do anything really.
>>>>>
>>>>>      > and the server doesn't have enough resources to complete.  Have
>>>>> you
>>>>>      > tried running an strace against the running process?
>>>>>
>>>>>     If it doesn't have enough resources shouldn't there be an
>>>>> exception?
>>>>>
>>>>>     For me, it looks more like something doesn't make it into the
>>>>>     database and thus into the persistent state. For instance, I now
>>>>>     have the repodata task at "RUNNING" for three days:
>>>>>
>>>>>     Channel Repodata:       2018-07-02 08:13:10 CEST        RUNNING
>>>>>
>>>>>     The log file shows this regarding repodata:
>>>>>
>>>>>      > # fgrep -i repodata rhn_taskomatic_daemon.log
>>>>>      > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>> 08:13:10,584
>>>>>     [Thread-12] INFO  com.redhat.rhn.taskomatic.TaskoQuartzHelper -
>>>>> Job
>>>>>     single-channel-repodata-bunch-0 scheduled succesfully.
>>>>>      > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>> 08:13:10,636
>>>>>     [DefaultQuartzScheduler_Worker-8] INFO
>>>>>  com.redhat.rhn.taskomatic.TaskoJob -
>>>>>     single-channel-repodata-bunch-0: bunch channel-repodata-bunch
>>>>> STARTED
>>>>>      > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>> 08:13:10,651
>>>>>     [DefaultQuartzScheduler_Worker-8] DEBUG
>>>>>     com.redhat.rhn.taskomatic.TaskoJob -
>>>>>     single-channel-repodata-bunch-0: task channel-repodata started
>>>>>      > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>> 08:13:10,793
>>>>>     [DefaultQuartzScheduler_Worker-8] INFO
>>>>>  com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 4
>>>>>      > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>> 08:13:11,102
>>>>>     [DefaultQuartzScheduler_Worker-8] DEBUG
>>>>>     com.redhat.rhn.taskomatic.TaskoJob - channel-repodata
>>>>>     (single-channel-repodata-bunch-0) ... running
>>>>>      > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>> 08:13:11,103
>>>>>     [DefaultQuartzScheduler_Worker-8] INFO
>>>>>  com.redhat.rhn.taskomatic.TaskoJob -
>>>>>     single-channel-repodata-bunch-0: bunch channel-repodata-bunch
>>>>> FINISHED
>>>>>
>>>>>     So according to the logs the repodata bunch has finished. According
>>>>>     to the web interface it has not. Nothing has been updated in
>>>>>     /var/cache/rhn/repodata/ either. In addition, those four channels
>>>>>     that were still being updated haven't been updated now either.
>>>>>
>>>>>     Thanks,
>>>>>
>>>>>     Gerald
>>>>>
>>>>>
>>>>>
>>>>>      >
>>>>>      > I also had an (well, many) issue(s) with our Spacewalk server
>>>>> before
>>>>>      > disabling snapshots in /etc/rhn/rhn.conf.  I also increased the
>>>>>     number
>>>>>      > of workers and max repodata work items:
>>>>>      >
>>>>>      > # system snapshots enabled
>>>>>      > enable_snapshots = 0
>>>>>      > ...
>>>>>      > taskomatic.maxmemory=6144
>>>>>      > taskomatic.errata_cache_max_work_items = 500
>>>>>      > taskomatic.channel_repodata_max_work_items = 50
>>>>>      > taskomatic.channel_repodata_workers = 5
>>>>>      >
>>>>>      >
>>>>>      >
>>>>>      > On Thu, Jul 5, 2018 at 4:38 AM Florence Savary
>>>>>      > <florence.savary.fs at gmail.com> wrote:
>>>>>      >
>>>>>      >     Hello,
>>>>>      >
>>>>>      >     Thanks for sharing your configuration files. They differ
>>>>> very
>>>>>     little
>>>>>      >     from mine. I just changed the number of workers in rhn.conf,
>>>>>     but it
>>>>>      >     didn't change anything.
>>>>>      >
>>>>>      >     I deleted all the channels clones not used by any system and
>>>>>     dating
>>>>>      >     back from before May 2018, in order to lower the number of
>>>>>     channels
>>>>>      >     in the queue. There were 127 channels in the queue before this
>>>>>      >     deletion (indicated in /var/log/rhn/rhn_taskomatic_daemon.log),
>>>>>      >     and there are 361 of them now ... I must admit I'm confused... I
>>>>>     hoped
>>>>>      >     it would reduce the number of channels to process and thus
>>>>> "help"
>>>>>      >     taskomatic, but obviously I was wrong.
>>>>>      >
>>>>>      >     I also noticed that the repodata regeneration seems to work fine
>>>>>      >     for existing channels that are not clones, but it is not working
>>>>>      >     for new channels that are not clones (and not for new clones
>>>>>      >     either, but that's nothing new).
>>>>>      >
>>>>>      >     Has anyone got any other idea (even the tiniest) ?
>>>>>      >
>>>>>      >     Regards,
>>>>>      >     Florence
>>>>>      >
>>>>>      >
>>>>>      >     2018-07-04 15:21 GMT+02:00 Paul Dias - BCX
>>>>>      >     <paul.dias at bcx.co.za>:
>>>>>      >
>>>>>      >         Hi,
>>>>>      >
>>>>>      >
>>>>>      >         Let me post my settings that I have on my CentOS6 server.
>>>>>      >         Can't remember, but I have one or two others; this is from
>>>>>      >         the top of my head.
>>>>>      >
>>>>>      >
>>>>>      >         /etc/rhn/rhn.conf
>>>>>      >
>>>>>      >         # Added by paul dias: increase number of taskomatic workers 20180620
>>>>>      >         taskomatic.channel_repodata_workers = 3
>>>>>      >         taskomatic.java.maxmemory=4096
>>>>>      >
>>>>>      >
>>>>>      >         /etc/sysconfig/tomcat6
>>>>>      >
>>>>>      >         JAVA_OPTS="-ea -Xms256m -Xmx512m -Djava.awt.headless=true
>>>>>      >         -Dorg.xml.sax.driver=org.apache.xerces.parsers.SAXParser
>>>>>      >         -Dorg.apache.tomcat.util.http.Parameters.MAX_COUNT=1024
>>>>>      >         -XX:MaxNewSize=256 -XX:-UseConcMarkSweepGC
>>>>>      >         -Dnet.sf.ehcache.skipUpdateCheck=true
>>>>>      >         -Djavax.sql.DataSource.Factory=org.apache.commons.dbcp.BasicDataSourceFactory"
>>>>>      >
>>>>>      >
>>>>>      >         /etc/tomcat/server.xml
>>>>>      >
>>>>>      >         <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>>>      >              <Connector port="8009" protocol="AJP/1.3"
>>>>>      >         redirectPort="8443" URIEncoding="UTF-8" address="127.0.0.1"
>>>>>      >         maxThreads="256" connectionTimeout="20000"/>
>>>>>      >
>>>>>      >              <Connector port="8009" protocol="AJP/1.3"
>>>>>      >         redirectPort="8443" URIEncoding="UTF-8" address="::1"
>>>>>      >         maxThreads="256" connectionTimeout="20000"/>
>>>>>      >
>>>>>      >         /usr/share/rhn/config-defaults/rhn_taskomatic_daemon.conf
>>>>>      >
>>>>>      >         # Initial Java Heap Size (in MB)
>>>>>      >         wrapper.java.initmemory=512
>>>>>      >
>>>>>      >         # Maximum Java Heap Size (in MB)
>>>>>      >         wrapper.java.maxmemory=1512
>>>>>      >         # Adjusted by paul 20180620
>>>>>      >
>>>>>      >         wrapper.ping.timeout=0
>>>>>      >         # # adjusted paul dias 20180620
>>>>>      >
>>>>>      >
>>>>>      >         Regards,
>>>>>      >
>>>>>      >         Paul Dias
>>>>>      >         Technical Consultant
>>>>>      >         6th Floor, 8 Boundary Road
>>>>>      >         Newlands
>>>>>      >         Cape Town
>>>>>      >         7700
>>>>>      >         T: +27 (0) 21 681 3149
>>>>>      >
>>>>>      >         Meet your future today.
>>>>>      >         BCX
>>>>>      >
>>>>>      >
>>>>>      >         This e-mail is subject to the BCX electronic
>>>>>      >         communication legal notice, available at:
>>>>>      >         https://www.bcx.co.za/disclaimers
>>>>>      >
>>>>>      >         From: Paul Dias - BCX
>>>>>      >         Sent: 02 July 2018 06:53 PM
>>>>>      >         To: spacewalk-list at redhat.com
>>>>>      >         Subject: Re: [Spacewalk-list] Taskomatic runs
>>>>>      >         indefinitely without ever generating repodata
>>>>>      >
>>>>>      >         What I have noticed: if you use
>>>>>      >         "spacecmd softchannel_generateyumcache <channel name>"
>>>>>      >         and then go to Tasks and run the single repodata bunch,
>>>>>      >         you will see it actually start and generate the channel
>>>>>      >         cache for the channel you ran spacecmd on. This works
>>>>>      >         every time.
>>>>>      >
>>>>>      >         But yes, the task logs just show the repodata bunch
>>>>>      >         running forever.
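This workaround can be scripted across several channels. A rough sketch (the channel labels are examples taken from the logs in this thread, and the loop only echoes the commands as a dry run, since spacecmd needs a configured session against a real server; drop the `echo` to run it for real):

```shell
# Dry-run sketch of the workaround above: regenerate the yum cache
# for a list of channels with spacecmd, then trigger the single
# repodata bunch from the WebUI (Admin -> Task Schedules).
# Channel labels below are examples from the logs in this thread.
CHANNELS="epel6-centos6-x86_64 epel7-centos7-x86_64"

for ch in $CHANNELS; do
    # echo makes this a dry run; remove it to actually invoke spacecmd
    echo spacecmd softchannel_generateyumcache "$ch"
done
```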
>>>>>      >
>>>>>      >         Regards,
>>>>>      >
>>>>>      >         Paul Dias
>>>>>      >
>>>>>      >         ------------------------------------------------------------------------
>>>>>      >
>>>>>      >         From: Gerald Vogt <vogt at spamcop.net>
>>>>>      >         Sent: Monday, 02 July 2018 9:45 AM
>>>>>      >         To: spacewalk-list at redhat.com
>>>>>      >         Subject: Re: [Spacewalk-list] Taskomatic runs
>>>>>      >         indefinitely without ever generating repodata
>>>>>      >
>>>>>      >         After letting the upgraded server sit for a while, it
>>>>>      >         seems only a few of the task schedules actually finish.
>>>>>      >         By now, only these tasks show up in the task engine
>>>>>      >         status page:
>>>>>      >
>>>>>      >         Changelog Cleanup:       2018-07-01 23:00:00 CEST  FINISHED
>>>>>      >         Clean Log History:       2018-07-01 23:00:00 CEST  FINISHED
>>>>>      >         Compare Config Files:    2018-07-01 23:00:00 CEST  FINISHED
>>>>>      >         Daily Summary Mail:      2018-07-01 23:00:00 CEST  FINISHED
>>>>>      >         Daily Summary Queue:     2018-07-01 23:00:00 CEST  FINISHED
>>>>>      >
>>>>>      >         All the other tasks have disappeared from the list by now.
>>>>>      >
>>>>>      >         The repo-sync tasks seem to work. New packages appear in
>>>>>      >         the channel. However, the repo build is either not
>>>>>      >         running or, more likely, never properly finishes.
>>>>>      >
>>>>>      >         If I start it manually, it seems to do its work:
>>>>>      >
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>>     08:13:10,584 [Thread-12] INFO     com.redhat.rhn.taskomatic.TaskoQuartzHelper
>>>>> - Job
>>>>>     single-channel-repodata-bunch-0 scheduled succesfully.
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>>     08:13:10,636 [DefaultQuartzScheduler_Worker-8] INFO
>>>>>  com.redhat.rhn.taskomatic.TaskoJob -
>>>>>     single-channel-repodata-bunch-0: bunch channel-repodata-bunch
>>>>> STARTED
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>>     08:13:10,651 [DefaultQuartzScheduler_Worker-8] DEBUG
>>>>>     com.redhat.rhn.taskomatic.TaskoJob -
>>>>>     single-channel-repodata-bunch-0: task channel-repodata started
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:10 | 2018-07-02
>>>>>     08:13:10,793 [DefaultQuartzScheduler_Worker-8] INFO
>>>>>  com.redhat.rhn.taskomatic.task.ChannelRepodata - In the queue: 4
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>>     08:13:11,102 [DefaultQuartzScheduler_Worker-8] DEBUG
>>>>>     com.redhat.rhn.taskomatic.TaskoJob - channel-repodata
>>>>>     (single-channel-repodata-bunch-0) ... running
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>>     08:13:11,103 [DefaultQuartzScheduler_Worker-8] INFO
>>>>>  com.redhat.rhn.taskomatic.TaskoJob -
>>>>>     single-channel-repodata-bunch-0: bunch channel-repodata-bunch
>>>>> FINISHED
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>>     08:13:11,137 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - File
>>>>>     Modified Date:2018-06-23 03:48:50 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>>     08:13:11,137 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Channel
>>>>>     Modified Date:2018-07-02 03:45:39 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>>     08:13:11,211 [Thread-678] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - File
>>>>>     Modified Date:2018-06-23 04:09:51 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:11 | 2018-07-02
>>>>>     08:13:11,213 [Thread-678] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Channel
>>>>>     Modified Date:2018-07-02 03:47:55 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:19 | 2018-07-02
>>>>>     08:13:19,062 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Generating
>>>>>     new repository metadata for channel 'epel6-centos6-x86_64'(sha1)
>>>>>     14401 packages, 11613 errata
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:13:21 | 2018-07-02
>>>>>     08:13:21,193 [Thread-678] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Generating
>>>>>     new repository metadata for channel 'epel7-centos7-x86_64'(sha1)
>>>>>     16282 packages, 10176 errata
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>>>>>     08:40:12,351 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Repository
>>>>>     metadata generation for 'epel6-centos6-x86_64' finished in 1613
>>>>> seconds
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>>>>>     08:40:12,457 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - File
>>>>>     Modified Date:2018-06-19 06:28:57 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>>>>>     08:40:12,457 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Channel
>>>>>     Modified Date:2018-07-02 04:30:05 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:40:12 | 2018-07-02
>>>>>     08:40:12,691 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Generating
>>>>>     new repository metadata for channel
>>>>>     'postgresql96-centos7-x86_64'(sha256) 1032 packages, 0 errata
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>>>>>     08:41:51,710 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Repository
>>>>>     metadata generation for 'postgresql96-centos7-x86_64' finished in
>>>>> 98
>>>>>     seconds
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>>>>>     08:41:51,803 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - File
>>>>>     Modified Date:2018-06-20 05:08:38 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>>>>>     08:41:51,803 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Channel
>>>>>     Modified Date:2018-07-02 04:00:00 CEST
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:41:51 | 2018-07-02
>>>>>     08:41:51,923 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Generating
>>>>>     new repository metadata for channel
>>>>>     'postgresql10-centos6-x86_64'(sha512) 436 packages, 0 errata
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:42:26 | 2018-07-02
>>>>>     08:42:26,479 [Thread-677] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Repository
>>>>>     metadata generation for 'postgresql10-centos6-x86_64' finished in
>>>>> 34
>>>>>     seconds
>>>>>      >         > INFO   | jvm 1    | 2018/07/02 08:45:01 | 2018-07-02
>>>>>     08:45:01,697 [Thread-678] INFO     com.redhat.rhn.taskomatic.task.repomd.RepositoryWriter
>>>>> - Repository
>>>>>     metadata generation for 'epel7-centos7-x86_64' finished in 1900
>>>>> seconds
>>>>>      >
>>>>>      >         Yet the task remains in RUNNING. And for whatever reason
>>>>>      >         it only seems to work for some channels. I find a total
>>>>>      >         of 20 repos syncing in the logs of the updated server,
>>>>>      >         compared to 42 in the logs of the old one. I don't really
>>>>>      >         see the difference between the 20 repos that sync and the
>>>>>      >         other 22 that don't. First I suspected channels with
>>>>>      >         custom quartz schedules, but then I found channels in
>>>>>      >         both groups.
>>>>>      >
>>>>>      >         So I don't know how to troubleshoot this any further.
>>>>>      >         The repodata task which I started 1.5 hours ago is still
>>>>>      >         at "RUNNING". The channels for which the sync works have
>>>>>      >         been updated. I don't know why it is still running.
>>>>>      >         Server load is back down...
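One way to see whether a "RUNNING" bunch actually produced anything is to check the metadata on disk. A sketch, assuming the usual Spacewalk cache location `/var/cache/rhn/repodata` (the channel label is an example from the logs above; the script degrades gracefully when run off-server):

```shell
# Sketch: check whether repodata was actually written for a channel.
# /var/cache/rhn/repodata/<channel-label> is the usual Spacewalk
# location; the label here is an example from the logs above.
label="epel7-centos7-x86_64"
dir="/var/cache/rhn/repodata/$label"

if [ -f "$dir/repomd.xml" ]; then
    # repomd.xml's mtime shows when the metadata was last regenerated
    ls -l "$dir/repomd.xml"
else
    echo "no repomd.xml for $label (not generated yet, or not a Spacewalk host)"
fi
```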
>>>>>      >
>>>>>      >         Thanks,
>>>>>      >
>>>>>      >         Gerald
>>>>>      >
>>>>>      >         On 22.06.18 19:12, Gerald Vogt wrote:
>>>>>      >         > I have the same problem after upgrading from 2.6 to
>>>>>      >         > 2.8 on CentOS 6.9. I have even increased the memory as
>>>>>      >         > suggested by that link, but it makes no difference.
>>>>>      >         > None of the scheduled tasks are running. I can run a
>>>>>      >         > bunch manually. But the scheduler doesn't seem to work.
>>>>>      >         > Last execution times on the task engine status pages
>>>>>      >         > are still at timestamps from before the upgrade. -Gerald
>>>>>      >         >
>>>>>      >         >
>>>>>      >         >
>>>>>      >         > On 22.06.18 14:15, Avi Miller wrote:
>>>>>      >         >> Hi,
>>>>>      >         >>
>>>>>      >         >>> On 22 Jun 2018, at 5:51 pm, Florence Savary
>>>>>      >         >>> <florence.savary.fs at gmail.com> wrote:
>>>>>      >         >>>
>>>>>      >         >>> When using taskotop, we can see a line for the
>>>>>      >         >>> channel-repodata task, we see it is running, but
>>>>>      >         >>> there is never any channel displayed in the Channel
>>>>>      >         >>> column. We can also see the task marked as running
>>>>>      >         >>> in the Admin tab of the WebUI, but if we let it, it
>>>>>      >         >>> never stops. The task runs indefinitely, without
>>>>>      >         >>> ever doing anything.
>>>>>      >         >>
>>>>>      >         >> If you've never modified the default memory settings,
>>>>>      >         >> Taskomatic is probably running out of memory and the
>>>>>      >         >> task is crashing. This is a known issue, particularly
>>>>>      >         >> when you sync large repos.
>>>>>      >         >>
>>>>>      >         >> I would suggest increasing the memory assigned to
>>>>>      >         >> Taskomatic to see if that resolves the issue. You
>>>>>      >         >> will need to restart it after making these changes:
>>>>>      >         >>
>>>>>      >         >> https://docs.oracle.com/cd/E92593_01/E90695/html/swk24-issues-memory.html
>>>>>      >         >>
>>>>>      >         >> Cheers,
>>>>>      >         >> Avi
>>>>>      >         >>
>>>>>      >         >> --
>>>>>      >         >> Oracle <http://www.oracle.com>
>>>>>      >         >> Avi Miller | Product Management Director | +61 (3) 8616 3496
>>>>>      >         >> Oracle Linux and Virtualization
>>>>>      >         >> 417 St Kilda Road, Melbourne, Victoria 3004 Australia
>>>>>      >         >>
>>>>>      >         >>
>>>>>      >         >> _______________________________________________
>>>>>      >         >> Spacewalk-list mailing list
>>>>>      >         >> Spacewalk-list at redhat.com
>>>>>      >         >> https://www.redhat.com/mailman/listinfo/spacewalk-list
>>>>>      >         >>


More information about the Spacewalk-list mailing list