Message-ID: <1ae5c450-1ffe-cf15-e878-b40f30c0acc3@gblabs.co.uk>
Date: Fri, 1 Jun 2018 14:24:23 +0100
From: Alex Richman <alex.r@...abs.co.uk>
To: Michal Hocko <mhocko@...nel.org>
Cc: linux-man@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: mlock() confusing 1 half of system RAM limit
Ah, that's it. Increased the size limit on the tmpfs mount and it works fine now.
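
For the record, the change amounts to giving the tmpfs mount behind shm_open() a
bigger size limit, e.g. "mount -o remount,size=6G /dev/shm" as root (or the matching
fstab entry). The same thing can be done from C via mount(2); a minimal sketch,
assuming the default /dev/shm mount, with the 6G value picked purely for illustration:

/* Needs CAP_SYS_ADMIN.  MS_REMOUNT keeps the existing mount and only
 * reapplies options; tmpfs parses the data string and picks up the new
 * size= limit.  "/dev/shm" and "size=6G" are illustrative assumptions. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("none", "/dev/shm", "tmpfs", MS_REMOUNT, "size=6G") == -1) {
        perror("mount");
        return 1;
    }
    return 0;
}
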
Thanks!
- Alex.
On 01/06/18 14:05, Michal Hocko wrote:
> On Fri 01-06-18 13:26:59, Alex Richman wrote:
>> I am using a MAP_SHARED shm mapping, along these lines:
>>> shm_fd = shm_open(handle, (O_RDWR | O_CREAT), (S_IRWXU | S_IRWXG | S_IRWXO));
>>> ftruncate(shm_fd, channel->sled_size);
>>> channel->sled = mmap(NULL, channel->sled_size, (PROT_READ | PROT_WRITE),
>>>                      (MAP_SHARED | MAP_NORESERVE), shm_fd, 0);
>>> mlock(channel->sled, channel->sled_size);  /* Fails with ENOMEM. */
>> But shmmax is unlimited on my box:
>> # sysctl -a | grep shm
>> kernel.shm_next_id = -1
>> kernel.shm_rmid_forced = 0
>> kernel.shmall = 18446744073692774399
>> kernel.shmmax = 18446744073692774399
>> kernel.shmmni = 4096
>>
>> Any ideas?
> shm_open uses tmpfs/shmem under the covers and that has the internal
> limit as explained above.
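
For anyone hitting this later: the ENOMEM comes from tmpfs refusing to back pages
past its mount size (half of RAM by default), not from the SysV shmmax/shmall knobs,
which do not apply to shm_open(). A self-contained sketch of the failing sequence,
where the object name "/demo" and the 1 GiB size are arbitrary examples (link with
-lrt on older glibc):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/statvfs.h>

int main(void)
{
    size_t len = 1UL << 30;  /* 1 GiB, arbitrary */
    int fd = shm_open("/demo", O_RDWR | O_CREAT, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Report the size limit of the tmpfs backing the object. */
    struct statvfs vfs;
    if (fstatvfs(fd, &vfs) == 0)
        printf("tmpfs limit: %llu bytes\n",
               (unsigned long long)vfs.f_blocks * vfs.f_frsize);

    if (ftruncate(fd, len) == -1) { perror("ftruncate"); return 1; }

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_NORESERVE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* ENOMEM here if len exceeds what tmpfs can back
     * (or EPERM/ENOMEM if RLIMIT_MEMLOCK is too small). */
    if (mlock(p, len) == -1)
        perror("mlock");

    shm_unlink("/demo");
    return 0;
}
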
--
Alex Richman
alex.r@...abs.co.uk
Engineering
GB Labs
2 Orpheus House,
Calleva Park,
Reading
RG7 8TA
Tel: +44 (0)118 455 5000
www.gblabs.com