Message-Id: <20250814160914.7a4622ae1370092dde11c5f2@linux-foundation.org>
Date: Thu, 14 Aug 2025 16:09:14 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: <zhongjinji@...or.com>
Cc: <linux-mm@...ck.org>, <mhocko@...e.com>, <rientjes@...gle.com>,
<shakeel.butt@...ux.dev>, <npache@...hat.com>,
<linux-kernel@...r.kernel.org>, <tglx@...utronix.de>, <mingo@...hat.com>,
<peterz@...radead.org>, <dvhart@...radead.org>, <dave@...olabs.net>,
<andrealmeid@...lia.com>, <liam.howlett@...cle.com>, <liulu.liu@...or.com>,
<feng.han@...or.com>
Subject: Re: [PATCH v4 3/3] mm/oom_kill: Have the OOM reaper and exit_mmap()
traverse the maple tree in opposite orders
On Thu, 14 Aug 2025 21:55:55 +0800 <zhongjinji@...or.com> wrote:
> When a process is OOM killed and the OOM reaper runs concurrently with
> the thread executing exit_mmap(), both traverse the vma maple tree
> along the same path. They can easily try to unmap the same vma and end
> up contending on the pte spinlock. This adds unnecessary load and
> lengthens the runtime of both the OOM reaper and the thread running
> exit_mmap().
Please tell me what I'm missing here.
OOM kills are a rare event. And this race sounds like it will rarely
occur even if an oom-killing is happening. And the delay will be
relatively short.
If I'm correct then we're addressing rare*rare*small, so why bother?
> When a process exits, exit_mmap() traverses the vma maple tree from
> low to high address. Having the OOM reaper traverse the tree from high
> to low address reduces the chance that the two unmap the same vma at
> the same time, and so reduces lock contention.
Sharing some before-and-after runtime measurements would be useful. Or
at least, detailed anecdotes.