Date:   Tue, 12 Jul 2022 19:12:18 +0800
From:   Abel Wu <wuyun.abel@...edance.com>
To:     Michal Hocko <mhocko@...e.com>, Gang Li <ligang.bdlg@...edance.com>
Cc:     akpm@...ux-foundation.org, surenb@...gle.com, hca@...ux.ibm.com,
        gor@...ux.ibm.com, agordeev@...ux.ibm.com,
        borntraeger@...ux.ibm.com, svens@...ux.ibm.com,
        viro@...iv.linux.org.uk, ebiederm@...ssion.com,
        keescook@...omium.org, rostedt@...dmis.org, mingo@...hat.com,
        peterz@...radead.org, acme@...nel.org, mark.rutland@....com,
        alexander.shishkin@...ux.intel.com, jolsa@...nel.org,
        namhyung@...nel.org, david@...hat.com, imbrenda@...ux.ibm.com,
        adobriyan@...il.com, yang.yang29@....com.cn, brauner@...nel.org,
        stephen.s.brennan@...cle.com, zhengqi.arch@...edance.com,
        haolee.swjtu@...il.com, xu.xin16@....com.cn,
        Liam.Howlett@...cle.com, ohoono.kwon@...sung.com,
        peterx@...hat.com, arnd@...db.de, shy828301@...il.com,
        alex.sierra@....com, xianting.tian@...ux.alibaba.com,
        willy@...radead.org, ccross@...gle.com, vbabka@...e.cz,
        sujiaxun@...ontech.com, sfr@...b.auug.org.au,
        vasily.averin@...ux.dev, mgorman@...e.de, vvghjk1234@...il.com,
        tglx@...utronix.de, luto@...nel.org, bigeasy@...utronix.de,
        fenghua.yu@...el.com, linux-s390@...r.kernel.org,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        linux-mm@...ck.org, linux-perf-users@...r.kernel.org,
        hezhongkun.hzk@...edance.com
Subject: Re: [PATCH v2 0/5] mm, oom: Introduce per numa node oom for
 CONSTRAINT_{MEMORY_POLICY,CPUSET}

Hi Michal,

On 7/8/22 4:54 PM, Michal Hocko wrote:
> On Fri 08-07-22 16:21:24, Gang Li wrote:
>> TLDR
>> ----
>> If a mempolicy or cpuset is in effect, out_of_memory() will select a
>> victim on the specific node to kill, so that the kernel can avoid
>> accidentally killing processes on other nodes of a NUMA system.
> 
> We have discussed this in your previous posting, and an alternative
> proposal was to use cpusets to partition NUMA-aware workloads and
> enhance the oom killer to be cpuset-aware instead, which should be a
> much easier solution.
> 
>> Problem
>> -------
>> Before this patch series, oom would only kill the process with the
>> highest memory usage, by selecting the process with the highest
>> oom_badness on the entire system.
>>
>> This works fine on UMA systems, but may cause accidental killings on
>> NUMA systems.
>>
>> As shown below, if process c.out is bound to Node1 and keeps
>> allocating pages from Node1, a.out will be killed first. But killing
>> a.out didn't free any memory on Node1, so c.out will be killed next.
>>
>> A lot of AMD machines have 8 NUMA nodes. On these systems, there is a
>> greater chance of triggering this problem.
> 
> Please be more specific about existing usecases which suffer from the
> current OOM handling limitations.

I was just going through the mailing list and happened to see this.
There is another use case of ours concerning per-NUMA-node memory
usage.

Say we have several important latency-critical (LC) services, each
pinned to a different NUMA node with no overlap. The memory demand of
these LC services varies, so the amount of free memory on each node
also differs. Then we launch several background containers without
cpuset constraints to consume the leftover resources. The problem is
that there doesn't seem to be a proper memory policy available to
balance the usage between the nodes, so memory-heavy LC services can
end up under high memory pressure and fail to meet their SLOs.
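
To make the setup concrete, here is a rough sketch of the partitioning
(purely illustrative: the cgroup paths, node numbers and cpu ranges
below are made up, not our real configuration). Each LC service gets a
cgroup-v2 cpuset pinned to a single node, while the background cgroup
is left without a cpuset constraint; this assumes the cpuset controller
is enabled in cgroup.subtree_control.

/* Sketch: partition LC services per NUMA node via cgroup-v2 cpuset. */
#include <stdio.h>

static int write_file(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f) {
                perror(path);
                return -1;
        }
        fprintf(f, "%s\n", val);
        return fclose(f);
}

int main(void)
{
        /* LC service A: cpus and memory restricted to node 0 */
        write_file("/sys/fs/cgroup/lc-a/cpuset.cpus", "0-15");
        write_file("/sys/fs/cgroup/lc-a/cpuset.mems", "0");

        /* LC service B: cpus and memory restricted to node 1 */
        write_file("/sys/fs/cgroup/lc-b/cpuset.cpus", "16-31");
        write_file("/sys/fs/cgroup/lc-b/cpuset.mems", "1");

        /*
         * Background containers live in /sys/fs/cgroup/be with
         * cpuset.mems left unset, so their allocations land on
         * whichever node happens to have free memory.
         */
        return 0;
}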

It would be much appreciated if you could shed some light on this!

Thanks & BR,
Abel
