Message-ID: <alpine.DEB.2.20.1705170855430.7925@east.gentwo.org>
Date:   Wed, 17 May 2017 08:56:34 -0500 (CDT)
From:   Christoph Lameter <cl@...ux.com>
To:     Michal Hocko <mhocko@...nel.org>
cc:     Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        Li Zefan <lizefan@...wei.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        David Rientjes <rientjes@...gle.com>,
        Hugh Dickins <hughd@...gle.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        linux-api@...r.kernel.org
Subject: Re: [RFC 1/6] mm, page_alloc: fix more premature OOM due to race
 with cpuset update

On Wed, 17 May 2017, Michal Hocko wrote:

> > We certainly can do that. The failure of the page faults are due to the
> > admin trying to move an application that is not aware of this and is using
> > mempols. That could be an error. Trying to move an application that
> > contains both absolute and relative node numbers is definitely something
> > that is potentially so screwed up that the kernel should not muck around
> > with such an app.
> >
> > Also user space can determine if the application is using memory policies
> > and can then take appropriate measures (a message to the sysadmin to
> > evaluate the situation, for example) or mess around with the process's
> > memory policies on its own.
> >
> > So this is certainly a way out of this mess.
>
> So how are you going to distinguish VM_FAULT_OOM from an empty mempolicy
> case in a raceless way?

You don't have to do that if you do not create an empty mempolicy in the
first place. The current kernel code avoids that by first allowing access
to the new set of nodes and only removing the old ones from the set when
done.
