Message-id: <9B718E2A-FE3B-453E-9426-1E1880351765@apple.com>
Date: Sat, 03 Aug 2019 11:24:37 -0700
From: Masoud Sharbiani <msharbiani@...le.com>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: Michal Hocko <mhocko@...nel.org>,
Greg KH <gregkh@...uxfoundation.org>, hannes@...xchg.org,
vdavydov.dev@...il.com, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: Possible mem cgroup bug in kernels between 4.18.0 and 5.3-rc1.
> On Aug 3, 2019, at 10:41 AM, Masoud Sharbiani <msharbiani@...le.com> wrote:
>
>
>
>> On Aug 3, 2019, at 8:51 AM, Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp> wrote:
>>
>> Masoud, will you try this patch?
>
> Gladly.
> It looks like it is working (and OOMing properly).
>
>
>>
>> By the way, is it expected that /sys/fs/cgroup/memory/leaker/memory.usage_in_bytes
>> remains non-zero even though /sys/fs/cgroup/memory/leaker/tasks became empty after
>> the memcg OOM kill? Deleting big-data-file.bin afterwards reduces it somewhat, but
>> it still remains non-zero.
>
> Yes. I had not noticed that:
>
> [ 1114.190477] oom_reaper: reaped process 1942 (leaker), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
> ./test-script.sh: line 16: 1942 Killed ./leaker -p 10240 -c 100000
>
> [root@...alhost laleaker]# cat /sys/fs/cgroup/memory/leaker/memory.usage_in_bytes
> 3194880
> [root@...alhost laleaker]# cat /sys/fs/cgroup/memory/leaker/memory.limit_in_bytes
> 536870912
> [root@...alhost laleaker]# rm -f big-data-file.bin
> [root@...alhost laleaker]# cat /sys/fs/cgroup/memory/leaker/memory.usage_in_bytes
> 2838528
>
> Thanks!
> Masoud
>
> PS: I tried hand-backporting it to 4.19-y and it didn’t work. I think there are other patches between 4.19.0 and 5.3 that may be necessary…
>
Please ignore this last part. It works on the 4.19-y branch as well.
Masoud
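
A note on the leftover usage_in_bytes seen above: in cgroup v1, page cache (and, with kmem accounting enabled, kernel memory) charged while the task was running stays charged to the group after the task is killed; the charge follows the pages, not the task. One way to check what remains, assuming the same cgroup v1 paths as in the session above (a sketch, not from the original thread):

# see how much of the remaining charge is page cache
grep -w cache /sys/fs/cgroup/memory/leaker/memory.stat
# with no tasks left in the group, v1 can be asked to reclaim what it can
echo 0 > /sys/fs/cgroup/memory/leaker/memory.force_empty
cat /sys/fs/cgroup/memory/leaker/memory.usage_in_bytes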
>
>>
>> ----------------------------------------
>> From 2f92c70f390f42185c6e2abb8dda98b1b7d02fa9 Mon Sep 17 00:00:00 2001
>> From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
>> Date: Sun, 4 Aug 2019 00:41:30 +0900
>> Subject: [PATCH] memcg, oom: don't require __GFP_FS when invoking memcg OOM killer
>>
>> Masoud Sharbiani noticed that commit 29ef680ae7c21110 ("memcg, oom: move
>> out_of_memory back to the charge path") broke memcg OOM invoked from the
>> __xfs_filemap_fault() path. It turned out that try_charge() is retrying
>> forever without making forward progress because mem_cgroup_oom(GFP_NOFS)
>> cannot invoke the OOM killer due to commit 3da88fb3bacfaa33 ("mm, oom:
>> move GFP_NOFS check to out_of_memory"). Regarding memcg OOM, we need to
>> bypass the GFP_NOFS check in order to guarantee forward progress.
>>
>> Signed-off-by: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
>> Reported-by: Masoud Sharbiani <msharbiani@...le.com>
>> Bisected-by: Masoud Sharbiani <msharbiani@...le.com>
>> Fixes: 29ef680ae7c21110 ("memcg, oom: move out_of_memory back to the charge path")
>> ---
>> mm/oom_kill.c | 5 +++--
>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>> index eda2e2a..26804ab 100644
>> --- a/mm/oom_kill.c
>> +++ b/mm/oom_kill.c
>> @@ -1068,9 +1068,10 @@ bool out_of_memory(struct oom_control *oc)
>> * The OOM killer does not compensate for IO-less reclaim.
>> * pagefault_out_of_memory lost its gfp context so we have to
>> * make sure exclude 0 mask - all other users should have at least
>> - * ___GFP_DIRECT_RECLAIM to get here.
>> + * ___GFP_DIRECT_RECLAIM to get here. But mem_cgroup_oom() has to
>> + * invoke the OOM killer even if it is a GFP_NOFS allocation.
>> */
>> - if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS))
>> + if (oc->gfp_mask && !(oc->gfp_mask & __GFP_FS) && !is_memcg_oom(oc))
>> return true;
>>
>> /*
>> --
>> 1.8.3.1
>>
>>
>
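
For anyone trying to reproduce this, the cgroup setup implied by the session above is roughly the following. The leaker program and test-script.sh referenced in the log come from Masoud's original report; this is only a sketch of the v1 setup, and the leaker flags are copied from the log without interpretation:

mkdir /sys/fs/cgroup/memory/leaker
echo 536870912 > /sys/fs/cgroup/memory/leaker/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/leaker/tasks
# leaker faults in pages of a large file on XFS (the __xfs_filemap_fault()
# path named in the patch) until the 512 MB limit is hit
./leaker -p 10240 -c 100000
# on kernels with commit 29ef680 (v4.19+) but without the patch above, the
# process gets stuck retrying the charge; with the patch, the memcg OOM
# killer kills it, as in the oom_reaper log above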