Message-ID: <CA+1xoqdJLpzDi5GnqQ-4SD1rFv_XzecC2k2A-XYwp_HvuG=HGg@mail.gmail.com>
Date: Mon, 5 Mar 2012 22:13:11 +0200
From: Sasha Levin <levinsasha928@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, Dave Jones <davej@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Pekka Enberg <penberg@...nel.org>
Subject: Re: OOM killer even when not overcommitting
On Mon, Mar 5, 2012 at 10:04 PM, Andrew Morton
<akpm@...ux-foundation.org> wrote:
> On Mon, 05 Mar 2012 21:58:26 +0200
> Sasha Levin <levinsasha928@...il.com> wrote:
>
>> Hi all,
>
>> I assumed that with overcommit_memory=2 and overcommit_ratio<100 the
>> OOM killer would never be invoked (since we're not overcommitting
>> memory), but it looks like I'm mistaken: apparently a simple mmap from
>> userspace will trigger the OOM killer if it requests more memory than
>> is available.
>>
>> Is this how it's supposed to work? Why does it resort to OOM killing
>> instead of just failing the allocation?
>>
>> Here is the dump I get when the OOM kicks in:
>>
>> ...
>>
>> [ 3108.730350] [<ffffffff81198e4a>] mlock_vma_pages_range+0x9a/0xa0
>> [ 3108.734486] [<ffffffff8119b75b>] mmap_region+0x28b/0x510
>> ...
>
> The vma is mlocked for some reason - presumably the app is using
> mlockall() or mlock()? So the kernel is trying to instantiate all the
> pages at mmap() time.
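
For context, I take it this is the MAP_LOCKED / mlockall(MCL_FUTURE)
path. A minimal sketch of the kind of mapping that would take it --
MAP_LOCKED and the 1 GiB size are my assumptions for illustration, not
necessarily what the app actually does:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;	/* 1 GiB, assumed to exceed free RAM */

	/*
	 * MAP_LOCKED (or a prior mlockall(MCL_FUTURE)) means every page
	 * has to be faulted in before mmap() returns, via
	 * mmap_region() -> mlock_vma_pages_range(), i.e. the path in the
	 * trace above.
	 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mapped and locked %zu bytes at %p\n", len, p);
	munmap(p, len);
	return 0;
}
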
The app may have used mlock(), but there is no swap space on the
machine (it's also a KVM guest), so that shouldn't matter, no?
Regardless, why doesn't it result in mmap() failing quietly, instead
of invoking the OOM killer to kill the entire process?
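
For reference, the behaviour I expected under strict overcommit looks
roughly like the sketch below; the sysctl values and the 4 GiB request
are purely illustrative, not what the app actually did:

/*
 * Assumed setup (illustrative only):
 *   vm.overcommit_memory = 2   (strict accounting)
 *   vm.overcommit_ratio  = 50  (no swap, so CommitLimit ~= 50% of RAM)
 * A plain, unlocked mapping beyond CommitLimit should then be refused
 * at mmap() time with ENOMEM rather than OOM-killed later.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = (size_t)4 << 30;	/* 4 GiB, assumed > CommitLimit */

	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		if (errno == ENOMEM)
			printf("mmap refused up front, as expected\n");
		else
			perror("mmap");
		return 0;
	}
	printf("mmap unexpectedly succeeded\n");
	munmap(p, len);
	return 0;
}

That is what I (perhaps wrongly) expected for the locked mapping as
well: refuse the request once it exceeds the commit limit, instead of
killing the process.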