Message-ID: <20250825141224.2108-1-zhongjinji@honor.com>
Date: Mon, 25 Aug 2025 22:12:24 +0800
From: zhongjinji <zhongjinji@...or.com>
To: <lorenzo.stoakes@...cle.com>
CC: <akpm@...ux-foundation.org>, <andrealmeid@...lia.com>,
<dave@...olabs.net>, <dvhart@...radead.org>, <feng.han@...or.com>,
<liam.howlett@...cle.com>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <liulu.liu@...or.com>, <mhocko@...e.com>,
<mingo@...hat.com>, <npache@...hat.com>, <peterz@...radead.org>,
<rientjes@...gle.com>, <shakeel.butt@...ux.dev>, <tglx@...utronix.de>,
<zhongjinji@...or.com>
Subject: Re: [PATCH v4 3/3] mm/oom_kill: Have the OOM reaper and exit_mmap() traverse the maple tree in opposite orders
> >
> > |--99.74%-- oom_reaper
> > | |--76.67%-- unmap_page_range
> > | | |--33.70%-- __pte_offset_map_lock
> > | | | |--98.46%-- _raw_spin_lock
> > | | |--27.61%-- free_swap_and_cache_nr
> > | | |--16.40%-- folio_remove_rmap_ptes
> > | | |--12.25%-- tlb_flush_mmu
> > | |--12.61%-- tlb_finish_mmu
> >
> >
> > |--98.84%-- oom_reaper
> > | |--53.45%-- unmap_page_range
> > | | |--24.29%-- [hit in function]
> > | | |--48.06%-- folio_remove_rmap_ptes
> > | | |--17.99%-- tlb_flush_mmu
> > | | |--1.72%-- __pte_offset_map_lock
> > | |
> > | |--30.43%-- tlb_finish_mmu
>
> Right yes thanks for providing this.
>
> I'm still not convinced by this approach however, it feels like you're papering
> over a crack for a problematic hack that needs to be solved at a different
> level.
>
> It feels like the whole waiting around thing is a hack to paper over something
> and then we're introducing another hack to make that work in a specific
> scenario.
>
> I also am not clear (perhaps you answered it elsewhere) how you're encountering
> this at a scale for it to be a meaningful issue?
On low-memory Android devices, high memory pressure often forces us to
kill processes to free memory, which is generally accepted on Android.
When a process is killed there, an asynchronous reap mechanism also runs,
implemented through process_mrelease(), similar to the OOM reaper. OOM
events are not rare either, so it makes sense to reduce the load on the
reaper.
> Also not sure we should be changing core mm to support perf issues with using an
> effectively-deprecated interface (cgroup v1)?
Yeah, it is not that appealing.