[Linux-cluster] RH Cluster Suite can be used to create a qmail cluster?

Roger Peña orkcu at yahoo.com
Wed Jun 20 00:42:11 UTC 2007


--- Rainer Duffner <rainer at ultra-secure.de> wrote:

> 
> Am 19.06.2007 um 23:19 schrieb Roger Peña:
> 
> > Hi
> >
> > I am looking for ideas on how to create a Qmail HA
> > cluster with two nodes and the storage on a SAN
> > (FC access)
> >
> 
> 
> Only two nodes?
> What backend do you want to use?
> (In case you want to use vpopmail)
Backend for what? For user data? We plan to use LDAP,
adding another two servers to the cluster, but I was
talking about just the mail-related (SMTP, POP/IMAP) nodes.

> 
> 
> > right now I am in the design stage, mainly finding
> > potential problems, so... does anybody have anything
> > to recommend?
> 
> 
> Qmail is IMO not suited for a GFS cluster.
> GFS tries its best to keep write operations on the
> cluster-FS synchronized.
> This is useless in the case of Qmail, because Qmail
> is designed to function even on NFS filesystems
> without any kind of useful locking.
> In GFS-land, Qmail just generates lots of useless I/O.
Are you thinking about the maildir advantages? Yes, I
know that with maildir you will not have locking
problems (in practice, although there is a theoretical
chance :-) ).
But I was thinking of the FS cache: the default settings
for ext3 are not suitable. Maybe I should get into ext3
tuning, but... I thought GFS would take care of FS
synchronization more easily than tuning ext3 to not
cache, or to cache only briefly.
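
For what it's worth, the ext3 tuning I had in mind is
shrinking the cache window with mount options, along the
lines of the fstab sketch below (the device and
mountpoint are made-up placeholders; commit=1 flushes
the journal every second instead of ext3's default five
seconds, trading some throughput for a smaller window of
uncommitted data):

    # /etc/fstab sketch -- device and mountpoint are placeholders
    /dev/sanvg/qmail  /var/qmail  ext3  defaults,commit=1  0 2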

> 
> 
> > (except not using qmail ;-) I would like to use
> > Postfix or Exim, but my client disagrees :-( so no
> > choice here)
> >
> 
> 
> It's understandable. Qmail still offers a lot of
> value when it comes to virtual email-domain hosting -
> though the original DJB Qmail is barely usable today.
> But people like Matt Simerson and Bill Shupp have done
> tremendous integration work, and have helped to keep
> the platform on par with (or in some cases beyond)
> other systems, even commercial ones.
> 
> 
> > my first problem: it looks like qmail is started,
> > monitored, and managed by daemontools (the sv*
> > programs), and svscan itself is started through
> > inittab or rc.local.
> > So my first approach is to create a SysV init script
> > for svscanboot (which is used to start svscan), and
> > that script is the one that will be controlled by
> > RHCS as a script resource (alongside the GFS or
> > plain-FS resource, and maybe the IP resource)
> >
> 
> 
> Sometimes it's not enough to stop the svscan
> start-script: daemons linger around and prevent new
> ones from starting. After killing the start-scripts,
> it might be necessary to kill (or kill -9) any
> remaining processes.

Good to know :-)
I will watch out for this problem :-)
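
To have something concrete to test, this is the kind of
wrapper script I am sketching for the RHCS script
resource. It is untested; the paths are assumptions for
a typical daemontools install, and the stop branch tries
to handle exactly the lingering-daemon problem you
describe (RHCS only needs start/stop/status with proper
exit codes):

    #!/bin/sh
    # /etc/init.d/svscanboot -- rough sketch, untested;
    # all paths are assumptions, adjust to the local install
    # chkconfig: - 90 10
    # description: start/stop the daemontools tree for qmail

    SVSCANBOOT=/command/svscanboot   # assumed install path
    SVC=/usr/local/bin/svc           # assumed install path
    SVDIR=/service                   # assumed supervise tree

    case "$1" in
      start)
        $SVSCANBOOT &
        ;;
      stop)
        # ask supervise to bring each service down and exit
        $SVC -dx $SVDIR/* $SVDIR/*/log 2>/dev/null
        # lingering daemons can block a start on the other
        # node, so kill whatever survived (per the advice above)
        pkill -f svscanboot 2>/dev/null
        sleep 2
        pkill -9 svscan 2>/dev/null
        pkill -9 supervise 2>/dev/null
        ;;
      status)
        # rgmanager polls this; a non-zero exit marks the
        # service as failed
        pgrep svscan >/dev/null
        ;;
      *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
    esac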

> 
> 
> > so, my idea is to "clusterize" (does that word
> > exist? ;-) ) the daemontools processes and not the
> > qmail processes themselves. Do you agree?
> >
> > thanks in advance for any tip :-)
> >
> 
> 
> You could try to run a sharedroot cluster on RHEL4
> and see how it performs for your workload - there are
> some successful reports here on this list (though the
> one I remember uses a tremendous number of disk
> spindles).
> This should solve your problems with the script (just
> fence the whole node - finished).
> 
> If you don't want to go that route, I'd say forget
> about GFS and go back to NFS (with a serious NFS
> server platform like Solaris, and clients like
> Solaris or FreeBSD) - see the picture

Another requirement for the solution: the OS has to be
RHEL, with RHEL5 preferred.
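
In case it helps the discussion, this is roughly how I
picture the rgmanager side of /etc/cluster/cluster.conf
on RHEL5, tying together the script, FS, and IP
resources I mentioned earlier. Every name, address, and
device path below is a made-up placeholder:

    <rm>
      <resources>
        <ip address="192.168.10.50" monitor_link="1"/>
        <clusterfs name="qmailfs" fstype="gfs"
                   device="/dev/sanvg/qmail"
                   mountpoint="/var/qmail"
                   force_unmount="1"/>
        <script name="svscanboot"
                file="/etc/init.d/svscanboot"/>
      </resources>
      <service name="qmail" autostart="1"
               recovery="relocate">
        <ip ref="192.168.10.50"/>
        <clusterfs ref="qmailfs"/>
        <script ref="svscanboot"/>
      </service>
    </rm>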



> on Bill Shupp's homepage for a design.
> Matt Simerson's formerly FreeBSD-only (now also
> Solaris, Linux, Darwin) Mail-Toaster framework already
> contains most of the integration work necessary
> (distributing config files etc. - take a look at the
> source, it's amazing).

I will do that.

> 
> Above a certain number of users (500k, probably
> varies), shared storage may be the wrong answer
> anyway. Then a distributed setup might be better
> suited.
> How many users will you have to support?

I guess a few hundred thousand, but I hope not 500k;
maybe 200k or 300k.
I know this is an important figure to be uncertain
about, but as I said, I am still in the process of
finding potential problems :-) In the next few days or
weeks I will have a deeper understanding of the
environment.


> Rainer
thanks a lot, Rainer

cu
roger

__________________________________________
RedHat Certified ( RHCE )
Cisco Certified ( CCNA & CCDA )





