Message-ID: <nwh7gegmvoisbxlsfwslobpbqku376uxdj2z32owkbftvozt3x@4dfet73fh2yy>
Date: Tue, 26 Aug 2025 09:37:22 -0400
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Cc: zhongjinji <zhongjinji@...or.com>, mhocko@...e.com, rientjes@...gle.com,
shakeel.butt@...ux.dev, akpm@...ux-foundation.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, tglx@...utronix.de, liulu.liu@...or.com,
feng.han@...or.com
Subject: Re: [PATCH v5 2/2] mm/oom_kill: Have the OOM reaper and exit_mmap()
traverse the maple tree in opposite order
* Lorenzo Stoakes <lorenzo.stoakes@...cle.com> [250826 08:53]:
> On Mon, Aug 25, 2025 at 09:38:55PM +0800, zhongjinji wrote:
> > When a process is OOM killed without a reaper delay, the oom reaper and
> > the exit_mmap() thread are likely to run simultaneously. They traverse
> > the vma's maple tree along the same path and can easily try to unmap the
> > same vma, making them contend for the pte spinlock.
> >
> > When a process exits, exit_mmap() traverses the vma's maple tree from low
> > to high addresses. To reduce the chance of both unmapping the same vma at
> > the same time, the OOM reaper should traverse the vma tree from high to
> > low addresses.
> >
> > Signed-off-by: zhongjinji <zhongjinji@...or.com>
>
> I will leave it to Liam to confirm the maple tree bit is ok, but I guess
> I'm softening to the idea of doing this - because it should have no impact
> on most users, so even if it's some rare edge case that triggers the
> situation, then it's worth doing it in reverse just to help you guys out :)
>
I really don't think this is worth doing. We're trying to avoid a race
between the oom reaper and a task unmapping in exit_mmap() - the MMF bits
should be used to avoid this race, or at least to mitigate it.

They are probably both under the read lock, but considering how rare the
race would be, would a racy flag check be enough? It is hardly critical to
get right, and either approach would reduce the probability.
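
Roughly the sort of thing I have in mind - completely untested, and
MMF_EXIT_UNMAP is just a placeholder name for illustration, not an
existing bit:

	/* exit_mmap(): advertise that the exit path has started tearing down the mm. */
	set_bit(MMF_EXIT_UNMAP, &mm->flags);

	/* __oom_reap_task_mm(): racy check; worst case we just race as we do today. */
	if (test_bit(MMF_EXIT_UNMAP, &mm->flags))
		return true;
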
> Liam - please confirm this is good from your side, and then I can add a tag!
>
> Cheers, Lorenzo
>
> > ---
> > mm/oom_kill.c | 9 +++++++--
> > 1 file changed, 7 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > index 4b4d73b1e00d..a0650da9ec9c 100644
> > --- a/mm/oom_kill.c
> > +++ b/mm/oom_kill.c
> > @@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> > {
> > struct vm_area_struct *vma;
> > bool ret = true;
> > - VMA_ITERATOR(vmi, mm, 0);
> > + MA_STATE(mas, &mm->mm_mt, ULONG_MAX, 0);
^^^^^^^^^ ^^
You have set the index larger than the last. It (probably?) works, but
isn't correct and may stop working, so let's fix it.
MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
> >
> > /*
> > * Tell all users of get_user/copy_from_user etc... that the content
> > @@ -526,7 +526,12 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> > */
> > set_bit(MMF_UNSTABLE, &mm->flags);
> >
> > - for_each_vma(vmi, vma) {
> > + /*
> > + * When two tasks unmap the same vma at the same time, they may contend for the
> > + * pte spinlock. To reduce the probability of them unmapping the same vma, the
> > + * oom reaper traverses the vma maple tree in reverse order.
> > + */
> > + while ((vma = mas_find_rev(&mas, 0)) != NULL) {
>
> It's a pity there isn't a nicer formulation of this but this is probably
> the least worst way of doing it.
>
mas_for_each_rev() exists for this use case.
You will find that the implementation is very close to what you see
here. :)
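
i.e. with the index/last fix above, the loop could look something like
this (untested):

	struct vm_area_struct *vma;
	MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);

	mas_for_each_rev(&mas, vma, 0) {
		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
			continue;
		...
	}
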
> > if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
> > continue;
> >
> > --
> > 2.17.1
> >