Date:   Sun, 4 Jun 2017 15:50:37 -0700 (PDT)
From:   David Rientjes <>
To:     Roman Gushchin <>
cc:     Tejun Heo <>,
        Johannes Weiner <>,
        Li Zefan <>,
        Michal Hocko <>,
        Vladimir Davydov <>,
        Tetsuo Handa <>
Subject: Re: [RFC PATCH v2 1/7] mm, oom: refactor select_bad_process() to
 take memcg as an argument

We use a heavily modified system and memcg oom killer and I'm wondering
if there is some opportunity for collaboration because we may have some
shared goals.

I can summarize how we currently use the oom killer at a high level,
without overwhelming implementation detail, and give some rationale for
why we have converged on this strategy over a period of a few years.

For victim selection, we have strict priority based oom killing both at
the memcg level and the process level.

Each process has its own "badness" value that is independent of
oom_score_adj, although some conversion is done if a third-party thread
chooses to disable itself from oom killing for backwards compatibility.
Lower values denote more preferable victims, but that detail should not
matter significantly.  If two processes share the same badness value,
ties are broken by selecting the process with the largest rss.

Each memcg in a hierarchy also has its own badness value which
semantically means the same as the per-process value, although it
considers the entire memcg as a unit, similar to your approach, when
iterating the hierarchy to choose a process.  The benefit of the
per-memcg and per-process approach is that you can kill the lowest
priority process from the lowest priority memcg.

The above scoring is enabled with a VM sysctl for the system and is used
for both system (global) and memcg oom kills.  For system overcommit,
this means we can kill the lowest priority job on the system; for memcg,
we can allow users to define their oom kill priorities at each level of
their own hierarchy.

When the system or root of an oom memcg hierarchy encounters its limit,
we iterate each level of the memcg hierarchy to find the lowest priority
job.  This is done by comparing the badness of the sibling memcgs at
each level, finding the lowest, and iterating that subtree.  If there are
lower priority processes per the per-process badness value compared to
all sibling memcgs, that process is killed.

We also have complete userspace oom handling support.  This complements
the existing memory.oom_control notification when a memcg is oom with a
separate notifier that notifies when the kernel has oom killed a process.
It is possible to delay the oom killer from killing a process for memcg
oom kills with a configurable, per-memcg, oom delay.  If set, the kernel
will wait for userspace to respond to its oom notification and effect its
own policy decisions until memory is uncharged to that memcg hierarchy,
or the oom delay expires.  If the oom delay expires, the kernel oom
killer kills a process based on badness.

The oom kill notification file we use to obtain an fd for registering
with cgroup.event_control also provides, on read(), oom kill statistics
broken down into system, memcg, local, hierarchical, and user-induced
oom kills.

We also have a convenient "kill all" knob that userspace can write when
handling oom conditions to iterate all threads attached to a particular
memcg and kill them.  This exists merely to close a race: userspace
doing the oom killing itself is not problematic, but additional tasks
can continue to be attached to an oom memcg while it does so.

A caveat here is that we also support fully inclusive kmem accounting to
memcg hierarchies, so we call the oom killer as part of the memcg charge
path rather than only upon returning from fault with VM_FAULT_OOM.  We
have our own oom killer livelock detection that isn't necessarily
important in this thread, but we haven't encountered a situation where we
livelock by calling the oom killer during charge, and this is a
requirement for memcg charging as part of slab allocation.

I could post many patches to implement all of this functionality that we
have used for a few years, but I first wanted to send this email to see
if there is any common ground or to better understand your methodology
for using the kernel oom killer for both system and memcg oom kills.

Otherwise, very interesting stuff!
