
Re: Poor Performance When Number of Files > 1M



On Aug 09, 2007  13:04 -0700, Sean McCauliff wrote:
> >When you say "having about 100M files", does that mean "need to be
> >constantly accessing 100M files" or just "need to store a total of
> >100M files in this filesystem"?
> Likely only 10M will be accessed at any time.

If you can structure it so the 10M files that will be accessed together
are stored on disk together, then your application will work better,
no matter which filesystem you use.
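One way to get that locality is to derive each file's directory from whatever key ties a working set together, since ext3 tends to allocate files in the same directory from the same block group. A minimal sketch, assuming your files have some natural grouping key (the function name, `fanout` parameter, and key are all hypothetical):

```python
import hashlib
import os

def grouped_path(root, group_key, filename, fanout=1000):
    """Place files that are accessed together under a shared subdirectory.

    Files in one directory tend to land in the same ext3 block group, so
    co-located names also cluster on disk.  'group_key' is whatever key
    ties the working set together (e.g. a job or dataset ID -- this is an
    illustrative assumption, not part of the original advice).
    """
    # Stable hash keeps the top-level directory count bounded across runs.
    bucket = int(hashlib.md5(str(group_key).encode()).hexdigest(), 16) % fanout
    d = os.path.join(root, "%03d" % bucket, str(group_key))
    os.makedirs(d, exist_ok=True)
    return os.path.join(d, filename)
```

With a layout like this, reading one working set walks a handful of adjacent directories instead of seeking across 100M scattered inodes.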

> >The former means you need to keep the whole working set in RAM for
> >maximum performance, about 100M * (128 + 32) = 19GB of RAM.  The
> >latter is no problem, we have ext3 filesystems with > 250M files
> >in them.

> The system has 16G of RAM; getting 32G in the future is a possibility. 
> Where do you get 128 + 32 from?  Is 128 the inode size?  This is 
> running a 64-bit OS.  Does that change the memory requirements?

128 = inode size, 32 = directory entry size.  There will be other overhead
as well, but this will get you into the right ballpark.
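As a back-of-the-envelope check (a sketch of the estimate above, not an exact kernel accounting), the bare product comes out just under 15 GiB; the ~19GB figure quoted earlier presumably folds in the "other overhead":

```python
# Rough RAM needed to keep every inode and directory entry cached,
# using the sizes from the reply: 128-byte inodes, 32-byte dentries.
# Real kernels add further per-object overhead on top of this.
n_files = 100_000_000
inode_size = 128     # bytes, ext3 default inode size at the time
dirent_size = 32     # bytes, rough per-entry cost
base = n_files * (inode_size + dirent_size)
print(base / 2**30)  # ~14.9 GiB before any extra overhead
```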

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

