Message-ID: <ZWBuAvcrb20MmX7m@tiehlicka>
Date: Fri, 24 Nov 2023 10:33:54 +0100
From: Michal Hocko <mhocko@...e.com>
To: gaoxu <gaoxu2@...onor.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Suren Baghdasaryan <surenb@...gle.com>,
yipengxiang <yipengxiang@...onor.com>
Subject: Re: Re: [PATCH] mm,oom_reaper: avoid run queue_oom_reaper if task is not oom
On Fri 24-11-23 02:52:34, gaoxu wrote:
> On Wed, 22 Nov 2023 21:47:44 +0000 Andrew Morton wrote:
> > On Wed, 22 Nov 2023 12:46:44 +0000 gaoxu <gaoxu2@...onor.com> wrote:
>
> >> The function queue_oom_reaper() tests and sets a bit in
> >> tsk->signal->oom_mm->flags. However, it is necessary to check that
> >> 'tsk' is an OOM victim before calling queue_oom_reaper(), because
> >> tsk->signal->oom_mm may still be NULL.
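> >>
> >> For illustration, a minimal sketch of the kind of check being
> >> proposed, based on the mainline queue_oom_reaper() at the time of
> >> this thread (the exact function body differs between kernel
> >> versions, so treat this as a sketch rather than the literal hunk):
> >>
> >> static void queue_oom_reaper(struct task_struct *tsk)
> >> {
> >> 	/*
> >> 	 * Proposed guard (sketch): tsk->signal->oom_mm is only set
> >> 	 * once mark_oom_victim() has run, so bail out while it is
> >> 	 * still NULL instead of dereferencing it below.
> >> 	 */
> >> 	if (!tsk_is_oom_victim(tsk))
> >> 		return;
> >>
> >> 	/* mm is already queued? */
> >> 	if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
> >> 		return;
> >>
> >> 	get_task_struct(tsk);
> >> 	timer_setup(&tsk->oom_reaper_timer, wake_oom_reaper, 0);
> >> 	tsk->oom_reaper_timer.expires = jiffies + OOM_REAPER_DELAY;
> >> 	add_timer(&tsk->oom_reaper_timer);
> >> }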
> >>
> >> We encountered such an issue, and the log is as follows:
> >> [3701:11_see]Out of memory: Killed process 3154 (system_server)
> >> total-vm:23662044kB, anon-rss:0kB, file-rss:0kB, shmem-rss:0kB,
> >> UID:1000 pgtables:4056kB oom_score_adj:-900
> >> [3701:11_see][RB/E]rb_sreason_str_set: sreason_str set null_pointer
> >> [3701:11_see][RB/E]rb_sreason_str_set: sreason_str set unknown_addr
> >> [3701:11_see]Unable to handle kernel NULL pointer dereference at
> >> virtual address 0000000000000328
>
> > Well that isn't good. How frequently does this happen and can you suggest why some quite old code is suddenly causing problems?
> > What is your workload doing that others do not do?
> This is a low-probability issue: we ran monkey testing for a month
> and the problem occurred only once.
> The OOM itself was triggered by a dma-buf memory leak in the
> surfaceflinger process.
>
> I have not found the root cause of this problem yet.
> The physical memory of the OOM-killed process had already been
> released (anon-rss, file-rss and shmem-rss are all 0kB in the log
> below), which suggests a race between process termination and the
> OOM kill.
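>
> For reference, tsk_is_oom_victim() (include/linux/oom.h in mainline)
> simply tests whether the OOM killer ever recorded an mm for the task,
> which is why it can serve as the NULL check here:
>
> static inline bool tsk_is_oom_victim(struct task_struct *tsk)
> {
> 	/* non-NULL only after mark_oom_victim() has run for this task */
> 	return tsk->signal->oom_mm;
> }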
> oom kill log:
> Out of memory: Killed process 3154 (system_server) total-vm:23662044kB, anon-rss:0kB, file-rss:0kB, shmem-rss:0kB,
> UID:1000 pgtables:4056kB oom_score_adj:-900
>
> >> [3701:11_see]user pgtable: 4k pages, 39-bit VAs, pgdp=00000000821de000
> >> [3701:11_see][0000000000000328] pgd=0000000000000000,
> >> p4d=0000000000000000,pud=0000000000000000
> >> [3701:11_see]tracing off
> >> [3701:11_see]Internal error: Oops: 96000005 [#1] PREEMPT SMP
> >> [3701:11_see]Call trace:
> >> [3701:11_see] queue_oom_reaper+0x30/0x170
> >> [3701:11_see] __oom_kill_process+0x590/0x860
> >> [3701:11_see] oom_kill_process+0x140/0x274
> >> [3701:11_see] out_of_memory+0x2f4/0x54c
> >> [3701:11_see] __alloc_pages_slowpath+0x5d8/0xaac
> >> [3701:11_see] __alloc_pages+0x774/0x800
> >> [3701:11_see] wp_page_copy+0xc4/0x116c
> >> [3701:11_see] do_wp_page+0x4bc/0x6fc
> >> [3701:11_see] handle_pte_fault+0x98/0x2a8
> >> [3701:11_see] __handle_mm_fault+0x368/0x700
> >> [3701:11_see] do_handle_mm_fault+0x160/0x2cc
> >> [3701:11_see] do_page_fault+0x3e0/0x818
> >> [3701:11_see] do_mem_abort+0x68/0x17c
> >> [3701:11_see] el0_da+0x3c/0xa0
> >> [3701:11_see] el0t_64_sync_handler+0xc4/0xec
> >> [3701:11_see] el0t_64_sync+0x1b4/0x1b8
> >> [3701:11_see]tracing off
> >>
> >> Signed-off-by: Gao Xu <gaoxu2@...onor.com>
>
> > I'll queue this for -stable backporting, assuming review is agreeable.
> > Can we please identify a suitable Fixes: target to tell -stable maintainers which kernels need the fix? It looks like this goes back a long way.
> The problem occurred on Linux 5.15.78. There is no difference in
> __oom_kill_process() between 5.15.78 and the latest kernel, so the
> problem is likely common to both versions.
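>
> For context, an abridged sketch of the relevant flow in mainline
> __oom_kill_process() (details elided; mark_oom_victim() is what
> populates victim->signal->oom_mm):
>
> static void __oom_kill_process(struct task_struct *victim, const char *message)
> {
> 	...
> 	/* sets victim->signal->oom_mm (unless it returns early) */
> 	mark_oom_victim(victim);
> 	...
> 	if (can_oom_reap)
> 		queue_oom_reaper(victim);  /* oopses here if oom_mm is NULL */
> 	...
> }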
__oom_kill_process() is not the only part involved; the exit path
plays a really huge role there as well. I do understand that this was
a one-off and likely hard to reproduce, but without knowing that the
current Linus tree can trigger this, we cannot really do much, I am
afraid.
--
Michal Hocko
SUSE Labs