Message-ID: <6319a3df-b19c-4e35-a46c-c4a5ea003a6b@huaweicloud.com>
Date: Mon, 23 Dec 2024 10:23:29 +0800
From: Chen Ridong <chenridong@...weicloud.com>
To: Michal Hocko <mhocko@...e.com>, Andrew Morton <akpm@...ux-foundation.org>
Cc: hannes@...xchg.org, yosryahmed@...gle.com, roman.gushchin@...ux.dev,
shakeel.butt@...ux.dev, muchun.song@...ux.dev, davidf@...eo.com,
vbabka@...e.cz, handai.szj@...bao.com, rientjes@...gle.com,
kamezawa.hiroyu@...fujitsu.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
chenridong@...wei.com, wangweiyang2@...wei.com
Subject: Re: [PATCH v2] memcg: fix soft lockup in the OOM process
On 2024/12/21 15:28, Michal Hocko wrote:
> On Fri 20-12-24 14:47:34, Andrew Morton wrote:
>> On Fri, 20 Dec 2024 10:31:23 +0000 Chen Ridong <chenridong@...weicloud.com> wrote:
>>
>>> From: Chen Ridong <chenridong@...wei.com>
>>>
>>> A soft lockup was observed in a production system in which about 56,000
>>> tasks were in the OOM cgroup; the lockup triggered while the OOM path
>>> was traversing them.
>>>
>>> ...
>>>
>>> This is because thousands of processes are in the OOM cgroup and it
>>> takes a long time to traverse them all; as a result, the OOM process
>>> triggers a soft lockup.
>>>
>>> To fix this issue, call 'cond_resched' in the 'mem_cgroup_scan_tasks'
>>> function every 1000 iterations. For the global OOM path, call
>>> 'touch_softlockup_watchdog' every 1000 iterations instead.
>>>
>>> ...
>>>
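For context on the global-OOM half that is elided above: the task-dump
loop there walks every process under rcu_read_lock(), where sleeping is
not allowed, which is why that path has to use touch_softlockup_watchdog()
rather than cond_resched(). Roughly the following shape, e.g. in the
dump_tasks() loop (a sketch per my reading, not the exact hunk):

	rcu_read_lock();
	for_each_process(p) {
		/* cannot sleep inside an RCU read-side section */
		if ((++i % SOFTLOCKUP_PREVENTION_LIMIT) == 0)
			touch_softlockup_watchdog();
		dump_task(p, oc);
	}
	rcu_read_unlock();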
>>> --- a/include/linux/oom.h
>>> +++ b/include/linux/oom.h
>>> @@ -14,6 +14,13 @@ struct notifier_block;
>>> struct mem_cgroup;
>>> struct task_struct;
>>>
>>> +/* To prevent soft lockups during long traversals, call
>>> + * cond_resched/touch_softlockup_watchdog every 1000 iterations.
>>> + * The value 1000 is not special; it is only meant to bound the
>>> + * overhead of calling cond_resched/touch_softlockup_watchdog.
>>> + */
>>> +#define SOFTLOCKUP_PREVENTION_LIMIT 1000
>>
>> If this is to have potentially kernel-wide scope, its name should
>> identify which subsystem it belongs to. Maybe OOM_KILL_RESCHED or
>> something.
>>
>> But I'm not sure that this really needs to exist. Are the two usage
>> sites particularly related?
>
> Yes, I do not think this needs to pretend to be a more generic mechanism
> to prevent soft lockups. The number of iterations highly depends on the
> operation itself.
>
Thanks, I will update.
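
Concretely, I would drop the define from include/linux/oom.h and keep a
file-local constant next to each user instead, e.g. (a sketch only,
reusing the OOM_KILL_RESCHED name suggested above; final name and value
still open):

	/* mm/memcontrol.c */

	/*
	 * Iterations between cond_resched() calls while scanning tasks
	 * during a memcg OOM. The value is arbitrary; it only bounds how
	 * long we run without yielding the CPU.
	 */
	#define OOM_KILL_RESCHED 1000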
>>
>>>  enum oom_constraint {
>>>  	CONSTRAINT_NONE,
>>>  	CONSTRAINT_CPUSET,
>>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>>> index 5c373d275e7a..f4c12d6e7b37 100644
>>> --- a/mm/memcontrol.c
>>> +++ b/mm/memcontrol.c
>>> @@ -1161,6 +1161,7 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
>>>  {
>>>  	struct mem_cgroup *iter;
>>>  	int ret = 0;
>>> +	int i = 0;
>>>
>>>  	BUG_ON(mem_cgroup_is_root(memcg));
>>>
>>> @@ -1169,8 +1170,11 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
>>>  		struct task_struct *task;
>>>
>>>  		css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
>>> -		while (!ret && (task = css_task_iter_next(&it)))
>>> +		while (!ret && (task = css_task_iter_next(&it))) {
>>>  			ret = fn(task, arg);
>>> +			if ((++i % SOFTLOCKUP_PREVENTION_LIMIT) == 0)
>>> +				cond_resched();
>>> +		}
>>
>> And a modulus operation is somewhat expensive.
>
> This is a cold path used during OOM. While we could make it more optimal, I
> doubt it matters in practice, so we should aim for readability. I do not
> mind either way, I just wanted to note that this is not performance
> sensitive.
>
I think '(++i & 1023) == 0' is much better; 1024 is a power of two, so
the test compiles to a single AND instead of a division, and the exact
period does not matter here. I will update.
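
For the memcg path, the loop would then look roughly like this (a sketch
against the hunk quoted above, not the final patch):

	css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
	while (!ret && (task = css_task_iter_next(&it))) {
		ret = fn(task, arg);
		/* Avoid potential softlockup warning */
		if ((++i & 1023) == 0)
			cond_resched();
	}
	css_task_iter_end(&it);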
Thank you all.
Best regards,
Ridong
>>
>> Perhaps a simple
>>
>> /* Avoid potential softlockup warning */
>> if ((++i & 1023) == 0)
>>
>> at both sites will suffice. Opinions might vary...
>>
>