
Re: up2date and yum: failover mode?



On Wed 22/10/2003 at 19:41, seth vidal wrote:
> > But if url://somewhere/somepath is successful, it wouldn't check the
> > next repo, would it?
> > 
> 
> that's what roundrobin vs priority is for.
> 
> priority = go through the lists in order.
> roundrobin = select one at random and progress through them in sequence
> from there.
> 
> and if you succeed in connecting to one server why would it want to
> check the next baseurl?
> 
> 
> > You could combine the failover with the "conventional" multirepo
> > access of yum like
> > 
> > [fastbutabitoutdated]
> > name=fastbutabitoutdated
> > baseurl=url://somewhere/somepath
> > 
> > [master]
> > name=masterbutfailessometimes
> > baseurl=url://somewhereelse/somepath
> > 	url://somewhere/somepath
> > 
> > i.e. put your local mirror at front and as a failover to the master
> > mirror, ensuring manximum bandwidth when the package is available at
> > the local mirror and also being up to date with failover.
> 
> huh? why not just include them in one repo and treat them in priority
> failover.
> 
> I'm not sure what you're goal is above - if the repos are not identical
> then they shouldn't be treated as the same.

The repositories are the same - they are just potentially out of sync.
The aim is to maximize access to local/close/cheap mirrors without losing
the ability to get the latest packages from the master site (accessing
the master site is costly for the upstream source, and potentially for
the user if it means leaving an intranet through a congested shared
pipe).

A realistic scenario is:
1. search a local partial source (download cache, local CD...)
2. search the intranet mirror
3. search several of the official project mirrors
4. search any of the official upstream download sites
5. download packages with 1-2-3-4 priority. More recent packages always
win; identical versions must be sourced from the closest/fastest source
possible.
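To make the selection rule concrete, here is a minimal sketch (names,
the version-tuple format, and the URLs are all illustrative assumptions,
not yum code): newer versions always win, and among identical versions
the source earliest in the priority list is chosen.

```python
def pick_source(candidates):
    """candidates: list of (priority, version, url) tuples,
    where priority 0 is the closest/fastest source.
    Returns the URL to download from."""
    # Sort newest version first; break ties with the lowest
    # priority number (i.e. the closest source).
    best = sorted(candidates,
                  key=lambda c: (tuple(-v for v in c[1]), c[0]))[0]
    return best[2]

sources = [
    (0, (1, 2, 3), "file:///var/cache/yum/foo-1.2.3.rpm"),    # local cache
    (1, (1, 2, 3), "http://intranet/mirror/foo-1.2.3.rpm"),   # intranet mirror
    (2, (1, 2, 4), "http://mirror.example.org/foo-1.2.4.rpm"),  # project mirror
]
# The newer 1.2.4 wins even though it sits lower in the priority list.
print(pick_source(sources))
```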

All the download manager should need is a list of the X upstream level-1
sources and a list of all known mirror sources (sometimes a few hundred
sites), plus the ability to choose by itself the 2-3 mirrors it will
poll and one upstream source to check for completeness.

Which mirrors to hit can be determined with response-time checks and by
keeping stats on the bandwidth achieved and the level of freshness of
previous accesses. (Keeping track of access hours might be smart too,
since net topology changes radically when the US wakes up, for example;
plus, by correlating relative freshness with dates, the program might
even learn each mirror's sync hours over time.)
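The ranking step could be as simple as the following sketch (the scoring
function, stats format, and mirror names are assumptions for
illustration): weight each mirror's measured bandwidth by how fresh its
content has been on previous accesses, and keep the top few.

```python
def score(mirror, stats):
    """stats: dict mapping mirror -> (avg bandwidth in kbps,
    freshness ratio between 0 and 1 from past accesses)."""
    bandwidth, freshness = stats[mirror]
    # A fast mirror that lags far behind upstream scores poorly.
    return bandwidth * freshness

def choose_mirrors(mirrors, stats, n=3):
    """Pick the n best mirrors to poll, highest score first."""
    return sorted(mirrors, key=lambda m: score(m, stats), reverse=True)[:n]

stats = {
    "mirror-a": (5000, 0.9),   # fast and usually fresh
    "mirror-b": (8000, 0.5),   # faster but often stale
    "mirror-c": (1000, 1.0),   # always fresh but slow
}
print(choose_mirrors(list(stats), stats, n=2))
```

The per-hour and sync-schedule learning suggested above would just mean
keeping separate stats buckets per time of day.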

All those checks are what a user is supposed to do manually before
putting a particular mirror in his config file. There is no reason
automating them cannot give better results and produce more efficient
resource usage patterns for everyone.

(plus this removes the manual configuration burden from the user)

I hope this was clearer this time.

Regards,

-- 
Nicolas Mailhot


