Message-ID: <alpine.DEB.2.20.1705181154450.27641@east.gentwo.org>
Date: Thu, 18 May 2017 11:57:55 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Michal Hocko <mhocko@...nel.org>
cc: Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
Li Zefan <lizefan@...wei.com>,
Mel Gorman <mgorman@...hsingularity.net>,
David Rientjes <rientjes@...gle.com>,
Hugh Dickins <hughd@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
linux-api@...r.kernel.org
Subject: Re: [RFC 1/6] mm, page_alloc: fix more premature OOM due to race
with cpuset update
On Thu, 18 May 2017, Michal Hocko wrote:
> > Nope. The OOM in a cpuset gets the process doing the alloc killed. Or has
> > that changed?
!!!!!
> >
> > At this point you have messed up royally and nothing is going to rescue
> > you anyways. OOM or not does not matter anymore. The app will fail.
>
> Not really. If you can trick the system to _think_ that the intersection
> between mempolicy and the cpuset is empty then the OOM killer might
> trigger an innocent task rather than the one which tricked it into that
> situation.
See above. An OOM kill in a cpuset does not kill an innocent task but a task
that does an allocation in that specific context, meaning a task in that
cpuset that also has a memory policy.
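
To make the failure mode concrete, here is a minimal userspace sketch (not
kernel code; the nodemask type and helper names below are made up for
illustration only) of why an empty intersection between a task's mempolicy
nodes and its cpuset's allowed mems leaves the allocator with nowhere to go,
so the resulting OOM targets the allocating task in that cpuset:

/* Illustrative userspace sketch only -- not the kernel's nodemask API.
 * Nodemasks are modelled as plain bitmasks to show when the set of nodes
 * an allocation may use becomes empty.
 */
#include <stdio.h>

typedef unsigned long nodemask; /* bit i set => node i allowed (hypothetical) */

/* Nodes an allocation can actually use: cpuset mems AND mempolicy nodes. */
static nodemask allowed_nodes(nodemask cpuset_mems, nodemask policy_nodes)
{
	return cpuset_mems & policy_nodes;
}

int main(void)
{
	nodemask cpuset_mems  = 0x2; /* cpuset now allows node 1 only    */
	nodemask policy_nodes = 0x1; /* MPOL_BIND-style policy on node 0 */

	if (allowed_nodes(cpuset_mems, policy_nodes) == 0)
		printf("empty intersection: the allocation cannot succeed,\n"
		       "so the OOM path hits the allocating task\n");
	return 0;
}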
Regardless of that, the point earlier was that the moving logic can avoid
creating temporary situations of empty node sets by analysing the memory
policies etc. and only performing moves when doing so is safe; a rough sketch
of that check follows.
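
As a rough illustration of that idea (purely a sketch under my own
assumptions; none of these structures or helpers are the actual cpuset
implementation), the update path could refuse, or defer, a mems change that
would leave any attached task's bound policy with no usable node:

/* Hypothetical sketch of checking mempolicies before applying a cpuset
 * mems update, so no task is ever left with an empty node set.
 * The types and function names are invented for this example.
 */
#include <stdbool.h>
#include <stddef.h>

typedef unsigned long nodemask;

struct task_policy {
	nodemask bound_nodes; /* nodes the task's MPOL_BIND-style policy allows */
};

/* Safe only if every task keeps at least one usable node under new_mems. */
static bool mems_update_is_safe(const struct task_policy *tasks, size_t n,
				nodemask new_mems)
{
	for (size_t i = 0; i < n; i++)
		if ((tasks[i].bound_nodes & new_mems) == 0)
			return false;
	return true;
}

/* A caller would apply new_mems only when mems_update_is_safe() returns
 * true, or fix up / migrate the offending policies first. */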