Message-ID: <4911EA50.30905@gmail.com>
Date: Wed, 05 Nov 2008 20:47:44 +0200
From: Török Edwin <edwintorok@...il.com>
To: Hugh Dickins <hugh@...itas.com>
CC: Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: /proc/pid/maps containing anonymous maps that have PROT_NONE
On 2008-11-05 19:56, Török Edwin wrote:
> On 2008-11-05 18:12, Hugh Dickins wrote:
>> mmap PROT_NONE to reserve an arena, munmap to trim off top and bottom,
>> mprotect to make areas read+writable, madvise 0x4 to say MADV_DONTNEED
>> on some parts. gcc? Or the application itself (clamd) and its libs?
>>
>> Why does it mmap too much then trim it down? Perhaps it's trying to
>> minimize pagetable usage, perhaps it's internally convenient to base
>> on rounded addresses, I don't know.
>>
>> But the mmap is there: just easily overlooked because of the way it
>> munmaps too (with strace showing hex addresses but decimal sizes).
>
> I will get some stacktraces and figure it out, now that I know which
> mmap to look for (the one with MAP_NORESERVE).
>
I found it: glibc's arena.c:669 does it, and there's a comment explaining why:
/* If consecutive mmap (0, HEAP_MAX_SIZE << 1, ...) calls return decreasing
addresses as opposed to increasing, new_heap would badly fragment the
address space. In that case remember the second HEAP_MAX_SIZE part
aligned to HEAP_MAX_SIZE from last mmap (0, HEAP_MAX_SIZE << 1, ...)
call (if it is already aligned) and try to reuse it next time. We need
no locking for it, as kernel ensures the atomicity for us - worst case
we'll call mmap (addr, HEAP_MAX_SIZE, ...) for some value of addr in
multiple threads, but only one will succeed. */
Anyway, it is MAP_NORESERVE and PROT_NONE, so it doesn't waste physical
memory or swap.
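For readers following along, here is a minimal sketch (not glibc's actual
code) of the reserve/commit pattern discussed above: reserve a large
region with PROT_NONE and MAP_NORESERVE so no memory is charged, commit a
window with mprotect, then discard it again with madvise(MADV_DONTNEED).
The helper name and sizes are illustrative only.

```c
#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

/* Hypothetical helper illustrating the pattern: reserve address space
   only, then commit and later discard a small window inside it.
   Returns 0 on success, -1 on any failure. */
static int reserve_and_commit(void)
{
    const size_t reserve = 1 << 20;   /* 1 MiB of address space */
    const size_t commit  = 4096;      /* one page actually used  */

    /* PROT_NONE + MAP_NORESERVE: the kernel hands out virtual address
       space but charges no physical memory or swap for it. */
    char *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return -1;

    /* Commit the first page by granting read+write access. */
    if (mprotect(base, commit, PROT_READ | PROT_WRITE) != 0)
        return -1;
    memset(base, 0xAB, commit);       /* now safe to touch */

    /* madvise 0x4 == MADV_DONTNEED: the pages may be dropped and will
       read back as zero-fill on next access. */
    if (madvise(base, commit, MADV_DONTNEED) != 0)
        return -1;

    return munmap(base, reserve);
}
```

The key point for the /proc/pid/maps discussion is that only the
mprotect'ed window ever consumes memory; the rest of the PROT_NONE
reservation shows up in the maps file but costs nothing.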
Best regards,
--Edwin