Message-ID: <20170517140501.GM18247@dhcp22.suse.cz>
Date: Wed, 17 May 2017 16:05:01 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Christoph Lameter <cl@...ux.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
Li Zefan <lizefan@...wei.com>,
Mel Gorman <mgorman@...hsingularity.net>,
David Rientjes <rientjes@...gle.com>,
Hugh Dickins <hughd@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
linux-api@...r.kernel.org
Subject: Re: [RFC 1/6] mm, page_alloc: fix more premature OOM due to race
with cpuset update
On Wed 17-05-17 08:56:34, Christoph Lameter wrote:
> On Wed, 17 May 2017, Michal Hocko wrote:
>
> > > We certainly can do that. The failure of the page faults is due to the
> > > admin trying to move an application that is not aware of this and is using
> > > mempols. That could be an error. Trying to move an application that
> > > contains both absolute and relative node numbers is definitely something
> > > that is potentially so screwed up that the kernel should not muck around
> > > with such an app.
> > >
> > > Also user space can determine if the application is using memory policies
> > > and can then take appropriate measures (e.g. a message to the sysadmin to
> > > evaluate the situation) or mess around with the process's memory policies
> > > on its own.
> > >
> > > So this is certainly a way out of this mess.
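A minimal sketch of one way userspace could approximate the "determine if
the application is using memory policies" check above: scan
/proc/<pid>/numa_maps, which reports the effective memory policy per
mapping. The helper name below is made up, and any non-"default" policy is
simply treated as "the task uses mempolicies":

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Sketch only: returns true if any mapping of @pid reports a policy
 * other than "default" in /proc/<pid>/numa_maps. */
static bool task_uses_mempolicy(int pid)
{
	char path[64], line[1024];
	bool found = false;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/numa_maps", pid);
	f = fopen(path, "r");
	if (!f)
		return false;	/* no such task, or no NUMA support */

	while (fgets(line, sizeof(line), f)) {
		/* line format: "<vaddr> <policy> <details...>" */
		char *policy = strchr(line, ' ');

		if (policy && strncmp(policy + 1, "default", 7)) {
			found = true;
			break;
		}
	}
	fclose(f);
	return found;
}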
> >
> > So how are you going to distinguish a genuine VM_FAULT_OOM from the empty
> > mempolicy case in a raceless way?
>
> You don't have to do that if you do not create an empty mempolicy in the
> first place. The current kernel code avoids that by first allowing access
> to the new set of nodes and only removing the old ones from the set when done.
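A conceptual sketch of that two-step update (not the actual cpuset/mempolicy
rebind code; the helper name is made up and all locking is omitted): the
allowed mask is first widened to old|new, so a racing allocation never sees
an empty mask, and only then narrowed down to the new set.

#include <linux/nodemask.h>

/* Sketch: switch @allowed over to @newmask without it ever being
 * empty at an intermediate point. */
static void sketch_rebind_nodemask(nodemask_t *allowed,
				   const nodemask_t *newmask)
{
	nodemask_t both;

	/* step 1: allow both the old and the new nodes */
	nodes_or(both, *allowed, *newmask);
	*allowed = both;

	/* step 2: drop the old nodes, keeping only the new ones */
	*allowed = *newmask;
}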
That two-step update is racy, as Vlastimil pointed out. If we simply fail
such an allocation, the failure will go up the call chain until we hit the
OOM killer via VM_FAULT_OOM. How would you want to handle that?
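For reference, a heavily condensed sketch of that call chain, with the real
fault plumbing collapsed into two made-up helpers: a failed allocation in
the anonymous fault path is reported as VM_FAULT_OOM, and the arch fault
handler reacts by calling pagefault_out_of_memory(), which can end up
firing the OOM killer.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/oom.h>

/* Sketch only, not the real mm/ code. */
static int sketch_anonymous_fault(struct vm_fault *vmf)
{
	struct page *page = alloc_page_vma(GFP_HIGHUSER_MOVABLE,
					   vmf->vma, vmf->address);

	if (!page)
		return VM_FAULT_OOM;	/* bubbles up through handle_mm_fault() */

	/* ... map the page as usual ... */
	return 0;
}

/* Arch fault handler side, equally condensed. */
static void sketch_handle_fault_result(int fault)
{
	if (fault & VM_FAULT_OOM)
		pagefault_out_of_memory();	/* may invoke the OOM killer */
}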
--
Michal Hocko
SUSE Labs