Date:   Mon, 16 May 2022 18:44:58 +0200
From:   Michal Hocko <mhocko@...e.com>
To:     Gang Li <ligang.bdlg@...edance.com>
Cc:     akpm@...ux-foundation.org, songmuchun@...edance.com,
        hca@...ux.ibm.com, gor@...ux.ibm.com, agordeev@...ux.ibm.com,
        borntraeger@...ux.ibm.com, svens@...ux.ibm.com,
        ebiederm@...ssion.com, keescook@...omium.org,
        viro@...iv.linux.org.uk, rostedt@...dmis.org, mingo@...hat.com,
        peterz@...radead.org, acme@...nel.org, mark.rutland@....com,
        alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
        namhyung@...nel.org, david@...hat.com, imbrenda@...ux.ibm.com,
        apopple@...dia.com, adobriyan@...il.com,
        stephen.s.brennan@...cle.com, ohoono.kwon@...sung.com,
        haolee.swjtu@...il.com, kaleshsingh@...gle.com,
        zhengqi.arch@...edance.com, peterx@...hat.com, shy828301@...il.com,
        surenb@...gle.com, ccross@...gle.com, vincent.whitchurch@...s.com,
        tglx@...utronix.de, bigeasy@...utronix.de, fenghua.yu@...el.com,
        linux-s390@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
        linux-perf-users@...r.kernel.org
Subject: Re: [PATCH 0/5 v1] mm, oom: Introduce per numa node oom for
 CONSTRAINT_MEMORY_POLICY

On Thu 12-05-22 12:46:29, Gang Li wrote:
> TLDR:
> If a mempolicy is in effect (oc->constraint == CONSTRAINT_MEMORY_POLICY), out_of_memory() will
> select a victim on the specific node to kill, so that the kernel can avoid accidental kills on NUMA systems.
> 
> Problem:
> Before this patch series, the oom killer only kills the process with the highest memory usage,
> by selecting the process with the highest oom_badness on the entire system.
> 
> This works fine on UMA systems, but may cause accidental kills on NUMA systems.
> 
> As shown below, if process c.out is bound to Node1 and keeps allocating pages from Node1,
> a.out will be killed first. But killing a.out didn't free any memory on Node1, so c.out
> will be killed next.
> 
> A lot of our AMD machines have 8 NUMA nodes. On these systems, there is a greater chance
> of triggering this problem.

Sorry, I have only now found this email thread. The limitation of the
NUMA constrained oom is well known and long standing. Basically the
whole thing is a best effort as we are lacking per numa node memory
stats. I can see that you are trying to fill up that gap but this is
not really free. Have you measured the runtime overhead? Accounting is
done in very performance sensitive paths and it would be rather
unfortunate to make everybody pay the overhead while binding to a
specific node or sets of nodes is not the most common use case.
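
To make the cost concrete: any per-node accounting boils down to something
like the sketch below running for every page that is mapped or unmapped.
The names and layout here are made up for illustration - I have not checked
how the series actually wires this up:

#include <linux/atomic.h>
#include <linux/mm_types.h>
#include <linux/numa.h>

/* rough sketch only, not the code from this series */
struct mm_node_rss {
	/* one counter per node plus a catch-all slot for NUMA_NO_NODE */
	atomic_long_t rss[MAX_NUMNODES + 1];
};

static inline void add_mm_node_counter(struct mm_struct *mm,
				       int nid, long value)
{
	int slot = (nid == NUMA_NO_NODE) ? MAX_NUMNODES : nid;

	/* one extra atomic RMW per page on the fault/fork/unmap paths */
	atomic_long_add(value, &mm->node_rss.rss[slot]); /* hypothetical field */
}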

Also have you tried to have a look at cpusets? Those should make a proper
selection easier as it should be possible to iterate over tasks
belonging to a specific cpuset much more easily - essentially something
similar to the memcg oom killer. We do not do that right now and from a very
brief look at CONSTRAINT_CPUSET it seems that this code is not
really doing much these days. Maybe that would be a more appropriate way
to deal with more precise node aware oom killing?
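
Something like the sketch below is what I have in mind - mirroring what
mem_cgroup_scan_tasks() does for the memcg oom killer, but walking only the
tasks attached to the cpuset in question. cpuset_scan_tasks() does not exist
today, this is just an illustration:

#include <linux/cgroup.h>
#include <linux/sched.h>

/*
 * Hypothetical helper: evaluate only the tasks attached to the given
 * cpuset css rather than every task in the system, stopping once the
 * callback says so (the same contract mem_cgroup_scan_tasks() has).
 */
static void cpuset_scan_tasks(struct cgroup_subsys_state *css,
			      int (*fn)(struct task_struct *, void *),
			      void *arg)
{
	struct css_task_iter it;
	struct task_struct *task;

	css_task_iter_start(css, CSS_TASK_ITER_PROCS, &it);
	while ((task = css_task_iter_next(&it)))
		if (fn(task, arg))
			break;
	css_task_iter_end(&it);
}

select_bad_process() could then invoke this with oom_evaluate_task() for
CONSTRAINT_CPUSET, much like the memcg oom path does, without any new
per-node counters in the hot paths.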

[...]
>  21 files changed, 317 insertions(+), 111 deletions(-)

The code footprint is not free either. And more importantly, does this
even work much more reliably? I can see quite some NUMA_NO_NODE
accounting (e.g. copy_pte_range!). Is this somehow fixable?

Also, how do those numbers add up? Let's say you increase the counter as
NUMA_NO_NODE but later on, during the clean up, you decrease it based on
the page node.
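
I.e. with a per-node helper like the one sketched above, the pattern would
be something like this (illustration only, not quoted from the series):

	/* charge side, e.g. a copy_pte_range-like path, node unknown: */
	add_mm_node_counter(mm, NUMA_NO_NODE, 1);

	/* uncharge side, zap/unmap path, by the page's actual node: */
	add_mm_node_counter(mm, page_to_nid(page), -1);

The NUMA_NO_NODE bucket only ever grows while the real node's bucket drifts
negative, so the per-node totals would stop adding up.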

Last but not least, I am really not following the MM_NO_TYPE concept. I can
only see add_mm_counter users without any decrements. What is going on
there?
-- 
Michal Hocko
SUSE Labs
