Message-ID: <CAJuCfpFe3SeY1EX=X4+wAm33Z3a0d_SoynK-86s5JWjsK80t_A@mail.gmail.com>
Date: Tue, 6 May 2025 10:32:43 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: "Christoph Lameter (Ampere)" <cl@...two.org>, "Liam R. Howlett" <Liam.Howlett@...cle.com>, 
	David Rientjes <rientjes@...gle.com>, Roman Gushchin <roman.gushchin@...ux.dev>, 
	Harry Yoo <harry.yoo@...cle.com>, Uladzislau Rezki <urezki@...il.com>, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, rcu@...r.kernel.org, 
	maple-tree@...ts.infradead.org
Subject: Re: [PATCH v4 1/9] slab: add opt-in caching layer of percpu sheaves

On Mon, Apr 28, 2025 at 12:01 AM Vlastimil Babka <vbabka@...e.cz> wrote:
>
> On 4/25/25 19:31, Christoph Lameter (Ampere) wrote:
> > On Fri, 25 Apr 2025, Vlastimil Babka wrote:
> >
> >> @@ -4195,7 +4793,11 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
> >>      if (unlikely(object))
> >>              goto out;
> >>
> >> -    object = __slab_alloc_node(s, gfpflags, node, addr, orig_size);
> >> +    if (s->cpu_sheaves && node == NUMA_NO_NODE)
> >> +            object = alloc_from_pcs(s, gfpflags);
> >
> > The node to use is determined in __slab_alloc_node() only based on the
> > memory policy etc. NUMA_NO_NODE allocations can be redirected by memory
> > policies and this check disables it.
>
> To handle that, alloc_from_pcs() contains this:
>
> #ifdef CONFIG_NUMA
>         if (static_branch_unlikely(&strict_numa)) {
>                 if (current->mempolicy)
>                         return NULL;
>         }
> #endif
>
> And so there will be a fallback. It doesn't (currently) try to evaluate
> whether the local node is compatible, as this check happens before taking
> the local lock (which prevents migration).
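For concreteness, the fallback behavior described above can be modeled in userspace C. This is a hypothetical sketch, not the kernel code: the names mirror the patch, but `strict_numa` and the mempolicy check are plain booleans standing in for the static branch and `current->mempolicy`.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NUMA_NO_NODE (-1)

static bool strict_numa;            /* models the strict_numa static branch */
static bool current_has_mempolicy;  /* models current->mempolicy != NULL */

/* alloc_from_pcs() declines (returns NULL) under strict NUMA with a
 * mempolicy set, so the caller falls back to the regular slow path. */
static const char *alloc_from_pcs(void)
{
	if (strict_numa && current_has_mempolicy)
		return NULL;		/* force the fallback */
	return "sheaf";			/* pretend a percpu-sheaf object was found */
}

static const char *__slab_alloc_node(void)
{
	return "slow";			/* stands in for the regular slow path */
}

/* Models the hunk above: try the sheaves only for NUMA_NO_NODE
 * allocations, and fall back whenever they declined. */
static const char *slab_alloc_node(bool has_cpu_sheaves, int node)
{
	const char *object = NULL;

	if (has_cpu_sheaves && node == NUMA_NO_NODE)
		object = alloc_from_pcs();
	if (!object)
		object = __slab_alloc_node();
	return object;
}
```

So a task with a mempolicy under strict_numa never sees a sheaf object; everything routes through the slow path, which applies the policy as before.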
>
>
> >> @@ -4653,7 +5483,10 @@ void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
> >>      memcg_slab_free_hook(s, slab, &object, 1);
> >>      alloc_tagging_slab_free_hook(s, slab, &object, 1);
> >>
> >> -    if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
> >> +    if (unlikely(!slab_free_hook(s, object, slab_want_init_on_free(s), false)))
> >> +            return;
> >> +
> >> +    if (!s->cpu_sheaves || !free_to_pcs(s, object))
> >>              do_slab_free(s, slab, object, object, 1, addr);
> >>  }
> >
> > We free to pcs even if the object is remote?

Overall the patch LGTM, but I would like to hear the answer to this
question too, please.
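To make the control flow of that free-path hunk explicit, here is a hypothetical userspace model (names mirror the patch; the bodies are illustrative only). It captures only the fallback structure visible in the diff and deliberately does not answer the remote-object question above:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

static bool pcs_has_room;	/* models whether free_to_pcs() can take the object */
static const char *freed_via;	/* records which path handled the free */

/* free_to_pcs() returns false when it cannot take the object, telling
 * the caller to fall back. */
static bool free_to_pcs(void)
{
	if (!pcs_has_room)
		return false;
	freed_via = "sheaf";
	return true;
}

static void do_slab_free(void)
{
	freed_via = "slow";	/* the regular free path */
}

/* Models the hunk above: fall back to do_slab_free() when the cache
 * has no sheaves or when free_to_pcs() declines the object. */
static void slab_free(bool has_cpu_sheaves)
{
	if (!has_cpu_sheaves || !free_to_pcs())
		do_slab_free();
}
```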

> >
>
