Message-ID: <20250331145957.GA2110528@cmpxchg.org>
Date: Mon, 31 Mar 2025 10:59:57 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
bpf <bpf@...r.kernel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Martin KaFai Lau <martin.lau@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Vlastimil Babka <vbabka@...e.cz>,
Sebastian Sewior <bigeasy@...utronix.de>,
Steven Rostedt <rostedt@...dmis.org>,
Michal Hocko <mhocko@...e.com>,
Shakeel Butt <shakeel.butt@...ux.dev>,
linux-mm <linux-mm@...ck.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [GIT PULL] Introduce try_alloc_pages for 6.15
On Sun, Mar 30, 2025 at 02:30:15PM -0700, Alexei Starovoitov wrote:
> On Sun, Mar 30, 2025 at 1:42 PM Linus Torvalds
> <torvalds@...ux-foundation.org> wrote:
> >
> > On Thu, 27 Mar 2025 at 07:52, Alexei Starovoitov
> > <alexei.starovoitov@...il.com> wrote:
> > >
> > > The pull includes work from Sebastian, Vlastimil and myself
> > > with a lot of help from Michal and Shakeel.
> > > This is a first step towards making kmalloc reentrant to get rid
> > > of slab wrappers: bpf_mem_alloc, kretprobe's objpool, etc.
> > > These patches make the page allocator safe from any context.
> >
> > So I've pulled this too, since it looked generally fine.
>
> Thanks!
>
> > The one reaction I had is that when you basically change
> >
> >         spin_lock_irqsave(&zone->lock, flags);
> >
> > into
> >
> >         if (!spin_trylock_irqsave(&zone->lock, flags)) {
> >                 if (unlikely(alloc_flags & ALLOC_TRYLOCK))
> >                         return NULL;
> >                 spin_lock_irqsave(&zone->lock, flags);
> >         }
> >
> > we've seen bad cache behavior for this kind of pattern in other
> > situations: if the "try" fails, the subsequent "do the lock for real"
> > case now does the wrong thing, in that it will immediately try again
> > even if it's almost certainly just going to fail - causing extra write
> > cache accesses.
> >
> > So typically, in places that can see contention, it's better to either do
> >
> > (a) trylock followed by a slowpath that takes the fact that it was
> > locked into account and does a read-only loop until it sees otherwise
> >
> > This is, for example, what the mutex code does with that
> > __mutex_trylock() -> mutex_optimistic_spin() pattern, but our
> > spinlocks end up doing similar things (i.e. "trylock" followed by
> > the "release irqs and do the 'relax loop'" thing).
>
> Right, the
> __mutex_trylock(lock) -> mutex_optimistic_spin() pattern is
> equivalent to the 'pending' bit spinning in qspinlock.
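For illustration, pattern (a) in its simplest form looks something
like the sketch below. This is a toy test-and-set lock, not the
kernel's actual qspinlock, and toy_lock() is a made-up name; the
point is just that after a failed atomic the waiter spins read-only,
so the cacheline can stay in the shared state until the lock is
observed free:

	#include <linux/atomic.h>
	#include <linux/processor.h>	/* cpu_relax() */

	/* toy sketch of pattern (a); not real kernel code */
	static inline void toy_lock(atomic_t *lock)
	{
		/* fast path: uncontended acquire */
		if (atomic_cmpxchg_acquire(lock, 0, 1) == 0)
			return;

		for (;;) {
			/*
			 * Read-only wait: atomic_read() does not need
			 * exclusive ownership of the cacheline, so the
			 * waiters don't bounce it among themselves.
			 */
			while (atomic_read(lock))
				cpu_relax();
			/* observed free; now retry the write for real */
			if (atomic_cmpxchg_acquire(lock, 0, 1) == 0)
				return;
		}
	}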
>
> > or
> >
> > (b) do the trylock and lock separately, ie
> >
> >         if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
> >                 if (!spin_trylock_irqsave(&zone->lock, flags))
> >                         return NULL;
> >         } else
> >                 spin_lock_irqsave(&zone->lock, flags);
> >
> > so that you don't end up doing two cache accesses for ownership that
> > can cause extra bouncing.
>
> Ok, I will switch to above.
>
> > I'm not sure this matters at all in the allocation path - contention
> > may simply not be enough of an issue, and the trylock is purely about
> > "unlikely NMI worries", but I do worry that you might have made the
> > normal case slower.
>
> We actually did see zone->lock being contended in production.
> Last time, the culprit was inadequate per-cpu caching, and
> this series in 6.11 fixed it:
> https://lwn.net/Articles/947900/
> I don't think we've seen it contended in newer kernels.
>
> Johannes, pls correct me if I'm wrong.
Contention should indeed be rare in practice. This has become a very
coarse lock, with hundreds of HW threads nowadays still hitting only
one or two zones. A lot rides on the fastpath per-cpu caches, and it
becomes noticeable very quickly if those are sized inappropriately.
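For context, heavily simplified, the fastpath has roughly the shape
sketched below; the toy_* helpers are hypothetical stand-ins, the
real code being rmqueue() and friends in mm/page_alloc.c:

	#include <linux/mmzone.h>	/* struct zone, zone->lock */
	#include <linux/spinlock.h>

	/* hypothetical helpers standing in for the pcp and buddy paths */
	extern struct page *toy_pcp_alloc(struct zone *zone, unsigned int order);
	extern struct page *toy_buddy_alloc(struct zone *zone, unsigned int order);

	static struct page *toy_rmqueue(struct zone *zone, unsigned int order)
	{
		struct page *page;
		unsigned long flags;

		/* fast path: per-CPU free lists, zone->lock not taken */
		page = toy_pcp_alloc(zone, order);
		if (page)
			return page;

		/* slow path: buddy freelists under the coarse zone->lock */
		spin_lock_irqsave(&zone->lock, flags);
		page = toy_buddy_alloc(zone, order);
		spin_unlock_irqrestore(&zone->lock, flags);
		return page;
	}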
> But to avoid being finger-pointed at, I'll switch to checking alloc_flags
> first. It does seem a better trade-off to avoid cache bouncing from the
> 2nd cmpxchg, though when I wrote it this way I convinced myself and
> others that it's faster to do trylock first to avoid branch misprediction.
If you haven't yet, it could be interesting to check if/where branches
are generated at all, given the proximity and the heavy inlining
between where you pass the flag and where it's tested.
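To make that concrete with a made-up example (the flag value and
helpers below are illustrative, not the real mm code): when
alloc_flags is a compile-time constant at an inlined call site, the
ALLOC_TRYLOCK test is resolved by the compiler and no branch is
emitted at all.

	#define TOY_ALLOC_TRYLOCK	0x1	/* illustrative, not the real flag value */

	extern int toy_trylock_path(void);
	extern int toy_lock_path(void);

	static inline int toy_alloc(unsigned int alloc_flags)
	{
		/* folds to a constant when alloc_flags is known at compile time */
		if (alloc_flags & TOY_ALLOC_TRYLOCK)
			return toy_trylock_path();
		return toy_lock_path();
	}

	/* after inlining, each caller is a direct call with no test: */
	int toy_alloc_normal(void)  { return toy_alloc(0); }
	int toy_alloc_trylock(void) { return toy_alloc(TOY_ALLOC_TRYLOCK); }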