Message-ID: <20210908015014.GA28091@shbuild999.sh.intel.com>
Date:   Wed, 8 Sep 2021 09:50:14 +0800
From:   Feng Tang <feng.tang@...el.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        David Rientjes <rientjes@...gle.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/page_alloc: detect allocation forbidden by cpuset and
 bail out early

On Tue, Sep 07, 2021 at 10:44:32AM +0200, Michal Hocko wrote:
> On Tue 07-09-21 16:25:50, Feng Tang wrote:
> > There was a report that starting Ubuntu in docker while using cpuset
> > to bind it to movlabe nodes (a node that only has a movable zone, like a node
> 
> s@...labe@...able@

will change.

> > for hotplug, or a Persistent Memory node in normal usage) will fail
> > due to memory allocation failure, and then the OOM killer is invoked and
> > many other innocent processes get killed. It can be reproduced with the command:
> > $docker run -it --rm  --cpuset-mems 4 ubuntu:latest bash -c
> > "grep Mems_allowed /proc/self/status" (node 4 is a movable node)
> > 
> > The reason is that in this case the target cpuset nodes only have a
> > movable zone, while creating an OS in docker sometimes needs to allocate
> > memory in non-movable zones (dma/dma32/normal), e.g. GFP_HIGHUSER
> > allocations, and the cpuset limit forbids them, so out-of-memory killing
> > is triggered even though both the normal nodes and the movable nodes
> > have plenty of free memory.
> 
> It would be great to add an oom report here as an example.
 
Ok, will add
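
Btw, to spell out the GFP_HIGHUSER point above: only an allocation whose
gfp mask carries __GFP_MOVABLE may fall back into ZONE_MOVABLE, which is
the only zone a movable-only node has; GFP_HIGHUSER lacks that flag, so
gfp_zone() caps it at ZONE_NORMAL (or ZONE_HIGHMEM on 32-bit). A rough
illustration only, not code from the patch:

    #include <linux/gfp.h>

    /* True only if the request is allowed to use ZONE_MOVABLE. */
    static bool may_use_movable_zone(gfp_t gfp_mask)
    {
            return gfp_zone(gfp_mask) == ZONE_MOVABLE;
    }

    /*
     * may_use_movable_zone(GFP_HIGHUSER)         -> false
     * may_use_movable_zone(GFP_HIGHUSER_MOVABLE) -> true
     */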

> > The failure is reasonable, but there is still one problem: when the
> > allocation cannot possibly succeed due to the cpuset limit, it should
> > just not trigger reclaim/compaction, and more importantly, it should
> > not get any innocent process oom-killed.
> 
> I would reformulate to something like:
> "
> The OOM killer cannot help to resolve the situation as there is no
> usable memory for the request in the cpuset scope. The only reasonable
> measure to take is to fail the allocation right away and have the caller
> deal with it.
> "

thanks! will use this.

> > So add detection for cases like this in the allocation slowpath, and
> > bail out early, returning NULL for the allocation.
> > 
> > We've run some cases of malloc/mmap/page_fault/lru-shm/swap from
> > will-it-scale and vm-scalability, and didn't see any obvious performance
> > change (all within +/- 1%); the test boxes are 2-socket Cascade Lake and
> > Ice Lake servers.
> > 
> > [thanks to Michal Hocko and David Rientjes for suggesting not to
> >  handle it inside the OOM code]
> 
> While this is a good fix from the functionality POV, I believe you can go
> a step further. Please add detection to the cpuset code and complain
> to the kernel log if somebody tries to configure a movable-only cpuset.
> Once you have that in place you can easily create a static branch for
> cpuset_insane_setup() and have zero overhead for all reasonable
> configurations. There shouldn't be any reason to pay a single cpu cycle
> to check for something that almost nobody does.
> 
> What do you think?
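
For the bail-out itself, what I have in mind in the slowpath is roughly
the following (a sketch only, not the final patch; movable_only_nodes()
is a helper that would still need to be added):

    /*
     * In __alloc_pages_slowpath(): if every node the cpuset allows only
     * has ZONE_MOVABLE and the request may not use ZONE_MOVABLE, it can
     * never succeed, so fail it right away instead of doing
     * reclaim/compaction or invoking the OOM killer.
     */
    if (cpusets_enabled() && (gfp_mask & __GFP_HARDWALL) &&
        !(gfp_mask & __GFP_MOVABLE) &&
        movable_only_nodes(&cpuset_current_mems_allowed))
            goto nopage;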

I thought about the implementation. IIUC, the static_branch_enable() side
is easy: it can be done when cpuset.mems is set to movable-only nodes. But
disable() is much more complex, as we may need a global reference counter
to track the set/unset, and the unset point could be when the cpuset data
structure is freed; also, a cpuset.mems can be changed at runtime, and the
system can have multiple cpuset dirs (user-space usage can be creative or
crazy :)).
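
If we go that way, the enable side could look roughly like this (again
just a sketch; one option is to make the key one-way, i.e. never disable
it, which would avoid the reference-counting problem above):

    /* One-way static key, flipped the first time a cpuset is configured
     * with movable-only nodes; it is never cleared, so no refcounting. */
    DEFINE_STATIC_KEY_FALSE(cpusets_insane_config_key);

    /* Hypothetical helper, called when cpuset.mems is being updated. */
    static void check_insane_mems_config(nodemask_t *nodes)
    {
            if (movable_only_nodes(nodes) &&
                !static_branch_unlikely(&cpusets_insane_config_key)) {
                    static_branch_enable(&cpusets_insane_config_key);
                    pr_info("Unsupported (movable nodes only) cpuset configuration detected!\n");
            }
    }

The slowpath check above would then be guarded by
static_branch_unlikely(&cpusets_insane_config_key), so reasonable
configurations pay no cost, as you suggested.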

While checking the cpuset code, I thought more about configuring a cpuset
with movable-only nodes, and we may still have legitimate usage there:
mallocing a big chunk of memory and doing some scientific calculation, or
AI training. That works with the current code.

Using docker to start a full OS is a much more complex case; some of its
memory allocations, like GFP_HIGHUSER from pipe_write() or copy_process(),
are restricted by the cpuset limit.

Thanks,
Feng

> -- 
> Michal Hocko
> SUSE Labs
