Message-ID: <CALvZod4kRWDQuZZQ5F+z6WMcUWLwgYd-Kb0mY8UAEK4MbSOZaA@mail.gmail.com>
Date:   Wed, 21 Apr 2021 06:57:43 -0700
From:   Shakeel Butt <shakeelb@...gle.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     Johannes Weiner <hannes@...xchg.org>, Roman Gushchin <guro@...com>,
        Linux MM <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Cgroups <cgroups@...r.kernel.org>,
        David Rientjes <rientjes@...gle.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Greg Thelen <gthelen@...gle.com>,
        Dragos Sbirlea <dragoss@...gle.com>,
        Priya Duraisamy <padmapriyad@...gle.com>
Subject: Re: [RFC] memory reserve for userspace oom-killer

On Wed, Apr 21, 2021 at 12:16 AM Michal Hocko <mhocko@...e.com> wrote:
>
[...]
> > To decide when to kill, the oom-killer has to read a lot of metrics.
> > It has to open a lot of files to read them and there will definitely
> > be new allocations involved in those operations. For example, reading
> > memory.stat does a page-size allocation. Similarly, to take action
> > the oom-killer may have to read the cgroup.procs file, which again
> > allocates internally.
>
> True, but many of those can be avoided by opening the file early. At
> least seq_file-based ones will not allocate later if the output size
> doesn't increase, which should be the case for many. I think it is a
> general improvement to push those that allocate during read to
> allocate at open time.
>

I agree that this would be a general improvement, but it is not always
possible (see below).
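
For the files that can be handled this way, the userspace side is
straightforward; a minimal sketch (the cgroup path is just an example
and error handling is elided):

#include <fcntl.h>
#include <unistd.h>

static int stat_fd = -1;
static char stat_buf[4096];

int oom_stats_init(void)
{
        /* Open once at startup, before memory pressure hits. */
        stat_fd = open("/sys/fs/cgroup/workload/memory.stat", O_RDONLY);
        return stat_fd < 0 ? -1 : 0;
}

ssize_t oom_stats_read(void)
{
        /* Rereading from offset 0 regenerates the contents without a
         * new open(), so no path lookup or file table allocation
         * happens on the kill-decision path, and the kernel's
         * seq_file buffer is reused as long as the output does not
         * grow. */
        return pread(stat_fd, stat_buf, sizeof(stat_buf) - 1, 0);
}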

> > Regarding sophisticated oom policy, I can give one example of our
> > cluster-level policy. For robustness, many user-facing jobs run a lot
> > of instances in a cluster to handle failures. Such jobs are tolerant
> > to some amount of failure but still require that the number of
> > running instances not fall below some threshold. Normally killing
> > such jobs is fine but we do want to make sure that we do not violate
> > their cluster-level agreement. So, the userspace oom-killer may need
> > to dynamically confirm whether such a job can be killed.
>
> What kind of data do you need to examine to make those decisions?
>

Most of the time the cluster-level scheduler pushes the information to
the node controller, which passes it on to the oom-killer. However,
depending on how fresh that information is, the oom-killer might have
to pull the latest state itself (via IPC and RPC).

[...]
> >
> > I was thinking of simply prctl(SET_MEMPOOL, bytes) to assign a
> > mempool to a thread (not shared between threads) and
> > prctl(RESET_MEMPOOL) to free the mempool.
>
> I am not a great fan of prctl. It has become a dumping ground for all
> kinds of unrelated functionality. But let's say this is a minor detail
> at this stage.

I agree this does not have to be prctl().
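
To make the proposal concrete, usage could look something like the
sketch below (PR_SET_MEMPOOL and PR_RESET_MEMPOOL are made-up names
for the hypothetical interface, not an existing API):

#include <sys/prctl.h>

/* Hypothetical option values; nothing like this exists upstream. */
#define PR_SET_MEMPOOL   0x6d700001
#define PR_RESET_MEMPOOL 0x6d700002

int main(void)
{
        /* Ask the kernel to preallocate a 2 MiB reserve for this
         * thread, to be consumed only when a normal allocation cannot
         * make forward progress. */
        if (prctl(PR_SET_MEMPOOL, 2UL << 20, 0, 0, 0))
                return 1;

        /* ... oom-killer main loop ... */

        /* Free the reserve. */
        prctl(PR_RESET_MEMPOOL, 0, 0, 0, 0);
        return 0;
}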

> So you are proposing to have a per-mm mem pool that would be

I was thinking of per-task_struct instead of per-mm_struct, just for simplicity.

> used as a fallback for an allocation which cannot make a forward
> progress, right?

Correct

> Would that pool be preallocated and sitting idle?

Correct

> What kind of allocations would be allowed to use the pool?

I was thinking of any type of allocation from the oom-killer (or its
specific threads). Please note that the mempool is a backup and is
only used in the slowpath.

> What if the pool is depleted?

This would mean that either the estimate of the mempool size was bad
or the oom-killer is buggy and leaking memory.
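
As a strawman, the pool itself could be a preallocated region consumed
bump-allocator style; a hypothetical sketch (none of these types or
helpers exist, and real kernel code would need locking and accounting):

struct task_mempool {
        void    *base;  /* preallocated at setup time, sitting idle */
        size_t  size;
        size_t  used;
};

/* Called only from the allocation slowpath, after the normal paths
 * have failed to make forward progress for this task. */
static void *task_mempool_alloc(struct task_mempool *pool, size_t size)
{
        void *p;

        if (!pool || size > pool->size - pool->used)
                return NULL;    /* depleted */

        p = pool->base + pool->used;
        pool->used += size;
        return p;
}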

I am open to any design direction for the mempool, or to some other
way to provide a notion of a memory guarantee to the oom-killer.

thanks,
Shakeel
