Message-ID: <alpine.DEB.2.00.1002161850540.3106@chino.kir.corp.google.com>
Date: Tue, 16 Feb 2010 18:58:17 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
cc: Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>,
Nick Piggin <npiggin@...e.de>,
Andrea Arcangeli <aarcange@...hat.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Lubos Lunak <l.lunak@...e.cz>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch -mm 4/9 v2] oom: remove compulsory panic_on_oom mode
On Wed, 17 Feb 2010, KAMEZAWA Hiroyuki wrote:
> > We want to lock all populated zones with ZONE_OOM_LOCKED to avoid
> > needlessly killing more than one task regardless of how many memcgs are
> > oom.
> >
> The current implementation achieves what memcg wants. Why remove and destroy memcg?
>
I've updated my patch to not take ZONE_OOM_LOCKED for any zones on memcg
oom. I'm hoping that you will add sysctl_panic_on_oom == 2 for this case
later, however.
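
For reference, a minimal and untested sketch of the "lock all populated
zones" idea (not my actual patch): the helper names here are hypothetical,
but they are built on the existing for_each_populated_zone(),
zone_is_oom_locked(), zone_set_flag() and zone_clear_flag() interfaces.
The zone_scan_lock serialization and the rollback of partially-set flags
that real code in mm/oom_kill.c needs are omitted for brevity.

	/*
	 * Sketch only: try to take ZONE_OOM_LOCKED on every populated zone
	 * so that concurrent ooms do not each pick a victim.  Real code
	 * must hold zone_scan_lock across the scan and clear any flags it
	 * already set if another zone turns out to be locked.
	 */
	static int try_lock_all_zones_oom(void)
	{
		struct zone *zone;

		for_each_populated_zone(zone) {
			if (zone_is_oom_locked(zone))
				return 0;	/* another oom is already in progress */
			zone_set_flag(zone, ZONE_OOM_LOCKED);
		}
		return 1;
	}

	static void unlock_all_zones_oom(void)
	{
		struct zone *zone;

		for_each_populated_zone(zone)
			zone_clear_flag(zone, ZONE_OOM_LOCKED);
	}
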
> What I mean is
> - What VM_FAULT_OOM means is not "memory is exhausted" but "something is exhausted".
>
> For example, when hugepages are all used, it may return VM_FAULT_OOM.
> Especially when nr_overcommit_hugepage == usage_of_hugepage, it returns VM_FAULT_OOM.
>
The hugetlb case seems to be the only misuse of VM_FAULT_OOM where it
doesn't mean we simply don't have the memory to handle the page fault,
i.e. your earlier "memory is exhausted" definition. That was handled well
before calling out_of_memory(), by simply killing current, since we know
it is faulting hugetlb pages and that resource is limited.
We could pass the vma to pagefault_out_of_memory() and simply kill current
if it's killable and is_vm_hugetlb_page(vma).
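
A rough, untested sketch of that suggestion follows. The vma argument to
pagefault_out_of_memory() is hypothetical (today it takes no arguments),
and "killable" is only approximated by skipping tasks that are already
exiting; is_vm_hugetlb_page() and force_sig() are existing helpers.

	void pagefault_out_of_memory(struct vm_area_struct *vma)
	{
		/*
		 * Sketch: if the fault was for a hugetlb vma, the hugetlb
		 * pool is exhausted rather than system memory, so killing
		 * current is sufficient -- no need to invoke the full oom
		 * killer and pick a different victim.
		 */
		if (vma && is_vm_hugetlb_page(vma) &&
		    !(current->flags & PF_EXITING)) {
			force_sig(SIGKILL, current);
			return;
		}

		/* ... otherwise fall back to the normal oom killer path ... */
	}
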