malloc and 'Active' memory
Matthijs van der Klip
matthijs at spill.nl
Mon Aug 1 11:23:10 UTC 2005
On Fri, 29 Jul 2005, Rick Stevens wrote:
> Matthijs van der Klip wrote:
> > This all raises a new question however. In the case of fillmem the Active
> > memory is released again. Why is this not happening after a configure /
> > make / make install episode? I have a particular compile ready that leaves
> > close to 1GB of Active memory after each run. After running this a couple
> > of times, the total amount of Active memory rises to close to 6GB again.
> > Does this mean something in this compile process is leaking memory? Would
> > that explain why the Active memory is not released again as soon as the
> > compiler, linker etc. have finished their work?
> >
> > I am not the first one to experience this problem by the way:
> >
> > http://lists.debian.org/debian-kernel/2004/12/msg00410.html
>
> Well, malloc() will fail if you request a chunk of memory and there
> isn't a SINGLE chunk available of that size. So if memory gets fragged,
> there isn't a single 7GB chunk available and malloc() will fail.
> fillmem allocates in smaller chunks, then releases it all so the
> memory defragger can clean things up.
I see what you mean and this was entirely my first thought when I ran into
this problem. However, I was told (true or not) that the malloc
implementation on Fedora Core 4 could not suffer from memory fragmentation
in the way I described (i.e. the same way you describe). I've checked the
documentation and found some interesting references. From the 'malloc'
manpage:
'By default, Linux follows an optimistic memory allocation strategy. This
means that when malloc() returns non-NULL there is no guarantee that
the memory really is available. This is a really bad bug. In case it
turns out that the system is out of memory, one or more processes will be
killed by the infamous OOM killer. In case Linux is employed under
circumstances where it would be less desirable to suddenly lose some
randomly picked processes, and moreover the kernel version is sufficiently
recent, one can switch off this overcommitting behavior using a command like
# echo 2 > /proc/sys/vm/overcommit_memory
See also the kernel Documentation directory, files
vm/overcommit-accounting and sysctl/vm.txt.'
Unfortunately sysctl/vm.txt from the kernel-doc-2.6.12-1.1398_FC4 rpm
describes kernel 2.2:
'This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.2.'
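For anyone wanting to try this, the tunable the manpage mentions can be
inspected and changed as follows (a sketch; the mode descriptions are from
the kernel's overcommit-accounting documentation, and writing it requires
root):

```shell
# Inspect the current overcommit policy:
#   0 = heuristic overcommit (the default), 1 = always overcommit,
#   2 = strict accounting
cat /proc/sys/vm/overcommit_memory

# Switch to strict accounting (mode 2). In this mode the committable
# total is roughly swap + overcommit_ratio% of RAM, so malloc() fails
# up front instead of the OOM killer striking later.
echo 2 > /proc/sys/vm/overcommit_memory
```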
So while this is interesting with respect to the overcommitting behaviour
(that's probably why I can allocate two chunks of 6GB while physical
memory is limited to 8GB), it doesn't provide much of an answer to my
questions surrounding the 'Active' memory:
'Memory that has been used more recently and usually not reclaimed unless
absolutely necessary.'
Maybe I should let go of that and accept your explanation of the memory
fragmentation, but I'm still bothered by it. What is that 'Active' memory
doing there and why can't I use it? Furthermore, why does a build process
leave so much of that darned 'Active' memory behind, and how can a single
run of fillmem 'magically' free it all?
I've been experimenting with a setting of '2' for the
/proc/sys/vm/overcommit_memory tunable, and as it turns out the largest
block I can allocate directly after a clean boot is roughly 3.8GB. So it
seems FC4 is suffering badly from memory fragmentation after all. The
overcommit situation makes it a bit less visible, but it's still there.
> Ideally, that's what mysql should do. Or start off at some huge
> size and keep trying progressively smaller chunks until it gets some,
> e.g. try 8GB. If that fails, try 6GB, then 4, then 2, you get the
> idea. It could then link those together and manage them.
I will propose this on the MySQL list; I hope I can explain why they have
to do their own high-level memory management.
Best regards,
--
Matthijs van der Klip
System Administrator
Spill E-Projects
The Netherlands