Locking shared memory

Allen, Jack Jack.Allen at McKesson.com
Wed Feb 28 14:26:59 EST 2007

-----Original Message-----
From: Allen, Jack [mailto:Jack.Allen at McKesson.com] 
Sent: Wednesday, February 28, 2007 2:00 PM
To: General Red Hat Linux discussion list
Subject: RE: Locking shared memory

-----Original Message-----
From: Mike Kearey [mailto:mkearey at redhat.com] 
Sent: Wednesday, February 28, 2007 12:56 AM
To: General Red Hat Linux discussion list
Subject: Re: Locking shared memory

Allen, Jack wrote:

> I do not think the error is because of the first explanation of ENOMEM
> So my question is based on the second one. 
> Is there a kernel or vm parameter that controls how much memory a
> process can utilize? 
> I am running RH AS 4.4 32 bit with 2GB of physical memory. 
> Thanks in advance: 
> Jack Allen 

G'day Jack,

There is a memlock limit in the shell, which can be changed using the
ulimit command from bash. See ulimit -l:

$ ulimit -l

The actual hard and soft limits are controlled on a Red Hat system by
the pam_limits PAM module, using the config file
/etc/security/limits.conf.

Check the docs from RHEL4 for the pam_limits module for details:


This may well be where the ENOMEM came from, as the default limit on the
amount of memory a user may lock is 32kB.


	Thanks for the information. Since I do not want all users to
have the limit increased for them, I do not want to add an entry in
/etc/security/limits.conf. I know it can be done on a per-user basis,
but any user can start the main database program that creates the shared
memory segments. And because the main program is setuid and owned by
root, it runs as root no matter who starts it. I know there are those
who think setuid-root programs are dangerous, but we have to have it
this way to make things work the way we want. It is limited in what it
does: it does not execute or create files arbitrarily; it does only very
specific things.

	With that being said what I did was set the resource limit
within the program via setrlimit(RLIMIT_MEMLOCK,&rlimit). I set the
limit to 1GB and setrlimit returned a success value. But when I try to
lock the second 256MB shared memory segment address it still fails the
same way.

	The system has 2GB of memory and there is basically nothing else
running on the system other than all the standard/default processes for
various services.

	Do you or anyone else have any more thoughts as to what may be
keeping me from locking the second shared memory segment address?

Jack Allen

	Some additional information. As I have said, when I try to lock
the second shared memory address via mlock(addr,size) it fails. I had
been assuming, incorrectly, that the errno of ENOMEM meant I was hitting
a limit. So I decided to try some other things to make sure. Using the
same 512MB overall size for the amount of shared memory, I allocated it
in 32MB segments, which gives me 16 segments. I was able to lock all but
the last segment. This would again sound like I had hit a limit. My next
test was to use 1024MB overall and 64MB segments, which also gives 16
segments. I was able to lock all but the last segment again. So this
means I was able to lock 960MB with no problem. Therefore it does not
seem to be a limit on the maximum amount I can lock, because the last
test exceeded the first test by 704MB. So I suspect errno may be
indicating the wrong reason for the failure. I think it has something to
do with an address boundary or page alignment.

	Does anyone have any thoughts?

Jack Allen
