Message-ID: <CACRpkdbNxKh7ySjffhzCncgBroOOeOQP689k7dgBKgV9annLpg@mail.gmail.com>
Date: Tue, 18 Nov 2025 22:15:04 +0100
From: Linus Walleij <linus.walleij@...aro.org>
To: Mateusz Guzik <mjguzik@...il.com>
Cc: akpm@...ux-foundation.org, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, pasha.tatashin@...een.com, 
	Liam.Howlett@...cle.com, lorenzo.stoakes@...cle.com
Subject: Re: [PATCH] fork: stop ignoring NUMA while handling cached thread stacks

Hi Mateusz,

excellent initiative!

I had this on some TODO list; really nice to see that you
picked it up.

The patch looks solid, just some questions:

On Mon, Nov 17, 2025 at 3:08 PM Mateusz Guzik <mjguzik@...il.com> wrote:

> Note the current caching is already bad as the cache keeps overflowing
> and a different solution is needed for the long run, to be worked
> out(tm).

That isn't very strange, since we only have 2 stacks per CPU in the
cache.

The best I can think of is to scale the number of cached stacks as a
function of free physical memory and the process fork rate: if we
have plenty of memory (for some definition of "plenty") and we are
forking a lot, we should keep some more stacks around; if the fork
rate goes down or we are low on memory relative to the stack size,
we should dynamically scale down the stack cache. (OTOMH)
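
Off the top of my head, a rough sketch of what I mean (all of the
helper names and thresholds below are made up, this is just to
illustrate the shape of the heuristic):

/*
 * Illustrative only: derive a per-cpu stack cache size from free
 * memory and a (hypothetical) decaying fork rate estimate.
 */
static unsigned int thread_stack_cache_target(void)
{
        unsigned long free_pages = global_zone_page_state(NR_FREE_PAGES);
        /* fork_rate_estimate would be maintained from the fork path */
        unsigned long rate = this_cpu_read(fork_rate_estimate);

        /* Forking a lot with plenty of free memory: grow the cache. */
        if (rate > FORK_RATE_HIGH && free_pages > STACK_CACHE_HIGH_WATER)
                return NR_CACHED_STACKS_MAX;
        /* Fork rate dropped or memory is tight: shrink back down. */
        if (rate < FORK_RATE_LOW || free_pages < STACK_CACHE_LOW_WATER)
                return 1;
        return NR_CACHED_STACKS;
}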

> +static struct vm_struct *alloc_thread_stack_node_from_cache(struct task_struct *tsk, int node)
> +{
> +       struct vm_struct *vm_area;
> +       unsigned int i;
> +
> +       /*
> +        * If the node has memory, we are guaranteed the stacks are backed by local pages.
> +        * Otherwise the pages are arbitrary.
> +        *
> +        * Note that depending on cpuset it is possible we will get migrated to a different
> +        * node immediately after allocating here, so this does *not* guarantee locality for
> +        * arbitrary callers.
> +        */
> +       scoped_guard(preempt) {
> +               if (node != NUMA_NO_NODE && numa_node_id() != node)
> +                       return NULL;
> +
> +               for (i = 0; i < NR_CACHED_STACKS; i++) {
> +                       vm_area = this_cpu_xchg(cached_stacks[i], NULL);
> +                       if (vm_area)
> +                               return vm_area;

So we check each stack slot in order to find one which isn't NULL,
and we can use this_cpu_xchg() because nothing can contest the slot
here as we are under the preempt guard; so if we get a !NULL vm_area
we know we are good, right?
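
IOW, assuming cached_stacks is never touched from interrupt context
(or by another CPU), under the guard the xchg is morally equivalent
to:

        vm_area = this_cpu_read(cached_stacks[i]);
        if (vm_area) {
                this_cpu_write(cached_stacks[i], NULL);
                return vm_area;
        }

just collapsed into a single per-cpu op.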

>  static bool try_release_thread_stack_to_cache(struct vm_struct *vm_area)
>  {
>         unsigned int i;
> +       int nid;
> +
> +       scoped_guard(preempt) {
> +               nid = numa_node_id();
> +               if (node_state(nid, N_MEMORY)) {
> +                       for (i = 0; i < vm_area->nr_pages; i++) {
> +                               struct page *page = vm_area->pages[i];
> +                               if (page_to_nid(page) != nid)
> +                                       return false;
> +                       }
> +               }

I would maybe add a comment saying:

"if we have node-local memory, don't even bother to cache a stack
if any page of it isn't on the same node, we only want clean local
node stacks"

(I guess that is the semantic you wanted.)
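
I.e. something like (wording just a suggestion):

        nid = numa_node_id();
        /*
         * If this node has memory, don't even bother to cache the
         * stack if any of its pages isn't on this node; we only
         * want clean node-local stacks in the cache.
         */
        if (node_state(nid, N_MEMORY)) {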

>
> -       for (i = 0; i < NR_CACHED_STACKS; i++) {
> -               struct vm_struct *tmp = NULL;
> +               for (i = 0; i < NR_CACHED_STACKS; i++) {
> +                       struct vm_struct *tmp = NULL;
>
> -               if (this_cpu_try_cmpxchg(cached_stacks[i], &tmp, vm_area))
> -                       return true;
> +                       if (this_cpu_try_cmpxchg(cached_stacks[i], &tmp, vm_area))
> +                               return true;

So since this now is under the preemption guard, nothing can contest
the slot, and the cmpxchg can only fail if the slot is already
occupied, right? I understand that using this_cpu_try_cmpxchg() is
the idiom, but I'm just asking so I don't miss something else
possibly contesting the stacks here.
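
For reference, my understanding is that the try_cmpxchg() form is
roughly equivalent to (modulo the percpu plumbing):

        struct vm_struct *old;

        old = this_cpu_cmpxchg(cached_stacks[i], tmp, vm_area);
        if (old == tmp)
                return true;    /* slot was NULL, vm_area installed */
        tmp = old;              /* slot was occupied, tmp updated */

so with tmp == NULL it can only fail on an occupied slot.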

If this code should have the same style as alloc_thread_stack_node_from_cache(),
I suppose it would be:

for (i = 0; i < NR_CACHED_STACKS; i++) {
        if (!this_cpu_cmpxchg(cached_stacks[i], NULL, vm_area))
                return true;
}

since this_cpu_cmpxchg() returns the old value: if it managed to
exchange the old NULL for vm_area, it returns NULL on success (note
this_cpu_cmpxchg() takes the old value directly rather than a
pointer to it, so the tmp variable goes away).

If I understood correctly +/- the above code style change:
Reviewed-by: Linus Walleij <linus.walleij@...aro.org>

Yours,
Linus Walleij
