Message-ID: <CAHk-=wgRbk2ezu1TNewZQSrT1MCzP-xAXrcHXULMeW=RRSak5A@mail.gmail.com>
Date: Sun, 30 Mar 2025 13:42:33 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: bpf@...r.kernel.org, daniel@...earbox.net, andrii@...nel.org, 
	martin.lau@...nel.org, akpm@...ux-foundation.org, peterz@...radead.org, 
	vbabka@...e.cz, bigeasy@...utronix.de, rostedt@...dmis.org, mhocko@...e.com, 
	shakeel.butt@...ux.dev, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [GIT PULL] Introduce try_alloc_pages for 6.15

On Thu, 27 Mar 2025 at 07:52, Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> The pull includes work from Sebastian, Vlastimil and myself
> with a lot of help from Michal and Shakeel.
> This is a first step towards making kmalloc reentrant to get rid
> of slab wrappers: bpf_mem_alloc, kretprobe's objpool, etc.
> These patches make page allocator safe from any context.

So I've pulled this too, since it looked generally fine.

The one reaction I had is that when you basically change

        spin_lock_irqsave(&zone->lock, flags);

into

        if (!spin_trylock_irqsave(&zone->lock, flags)) {
                if (unlikely(alloc_flags & ALLOC_TRYLOCK))
                        return NULL;
                spin_lock_irqsave(&zone->lock, flags);
        }

we've seen bad cache behavior for this kind of pattern in other
situations: if the "try" fails, the subsequent "do the lock for real"
case now does the wrong thing, in that it will immediately try again
even if it's almost certainly just going to fail - causing extra write
cache accesses.

So typically, in places that can see contention, it's better to either do

 (a) trylock followed by a slowpath that takes into account the fact
that the lock was already held, and does a read-only loop until it
sees the lock released

     This is, for example, what the mutex code does with that
__mutex_trylock() -> mutex_optimistic_spin() pattern, but our
spinlocks end up doing similar things (ie "trylock" followed by
"release irq and do the 'relax loop' thing"). See the rough sketch
after (b) below.

or

 (b) do the trylock and lock separately, ie

        if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
                if (!spin_trylock_irqsave(&zone->lock, flags))
                        return NULL;
        } else {
                spin_lock_irqsave(&zone->lock, flags);
        }

so that you don't end up doing two cache accesses for ownership, which
can cause extra bouncing.
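
For the zone lock case, a rough sketch of what (a) could look like
here (purely illustrative - spin_is_locked() and cpu_relax() are just
stand-ins for whatever the real slowpath would do):

        if (!spin_trylock_irqsave(&zone->lock, flags)) {
                if (unlikely(alloc_flags & ALLOC_TRYLOCK))
                        return NULL;
                /*
                 * Contended: wait with plain reads so the lock cacheline
                 * can stay shared, and only go for the write side of the
                 * acquisition once the lock actually looks free.
                 */
                while (spin_is_locked(&zone->lock))
                        cpu_relax();
                spin_lock_irqsave(&zone->lock, flags);
        }

The point being that the waiting is done with reads rather than with
repeated failed write attempts on the lock word.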

I'm not sure this matters at all in the allocation path - contention
may simply not be enough of an issue, and the trylock is purely about
"unlikely NMI worries", but I do worry that you might have made the
normal case slower.

It's easily fixable later if it ends up being the case, so I don't
worry too much about it, but I did want to mention it since going
through the code made me react to it.

                Linus
