Message-ID: <e0647e65-d348-35a0-bc5a-66d1ad02ea53@suse.cz>
Date:   Wed, 29 Nov 2023 14:25:32 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc:     "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Matthew Wilcox <willy@...radead.org>,
        Suren Baghdasaryan <surenb@...gle.com>,
        Christoph Lameter <cl@...ux.com>,
        David Rientjes <rientjes@...gle.com>,
        Pekka Enberg <penberg@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, patches@...ts.linux.dev
Subject: Re: [RFC v2 2/7] mm, slub: add opt-in slub_percpu_array

On 11/29/23 01:46, Hyeonggon Yoo wrote:
> On Wed, Nov 29, 2023 at 2:37 AM Vlastimil Babka <vbabka@...e.cz> wrote:
> 
>> >> @@ -4060,6 +4201,45 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
>> >>  }
>> >>  EXPORT_SYMBOL(kmem_cache_alloc_bulk);
>> >>
>> >> +int kmem_cache_prefill_percpu_array(struct kmem_cache *s, unsigned int count,
>> >> +               gfp_t gfp)
>> >> +{
>> >> +       struct slub_percpu_array *pca;
>> >> +       void *objects[32];
>> >> +       unsigned int used;
>> >> +       unsigned int allocated;
>> >> +
>> >> +       if (!s->cpu_array)
>> >> +               return -EINVAL;
>> >> +
>> >> +       /* racy but we don't care */
>> >> +       pca = raw_cpu_ptr(s->cpu_array);
>> >> +
>> >> +       used = READ_ONCE(pca->used);
>> >
>> > Hmm, for the prefill to be meaningful,
>> > remote allocation should be possible, right?
>>
>> Remote in what sense?
> 
> TL;DR) What I wanted to ask was:
> "How does pre-filling a number of objects work when the pre-filled
> objects are not shared between CPUs?"
> 
> IIUC the prefill opportunistically fills the array, so the caller can
> (hopefully) expect that some objects are filled in it.

Yes.

> Let's say CPU X calls kmem_cache_prefill_percpu_array(32) and all 32
> objects are filled into CPU X's array. But if CPU Y can't allocate from
> CPU X's array (which I referred to as "remote allocation"), don't the
> semantics differ from the maple tree's perspective, because the
> preallocated objects were shared between CPUs before, but now they are
> not?

The assumption is that the operation will prefill on CPU X and then consume
the objects also on X, because shortly after the prefill it will enter some
restricted context (e.g. spin_lock_irqsave() or whatnot) that prevents it
from migrating. That's not guaranteed of course, but a migration at a bad
moment and a subsequently depleted array should be rare enough that we'll
just handle it in the slow paths, and if that results in dipping into
reserves, it won't be too disruptive.
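
To illustrate, the intended usage pattern would look roughly like the
sketch below. The function, cache and lock names and the prefill count
are made up for the example, and I'm assuming prefill returns a negative
errno on failure:

#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical caller; my_cache and my_lock are made-up names. */
static int my_insert(struct kmem_cache *my_cache, spinlock_t *my_lock)
{
	unsigned long flags;
	void *obj;
	int err;

	/* Prefill on this CPU, outside any restricted context. */
	err = kmem_cache_prefill_percpu_array(my_cache, 4, GFP_KERNEL);
	if (err < 0)
		return err;

	/*
	 * Enter a context that prevents migrating to another CPU, so
	 * the allocation below is normally served from the array we
	 * just prefilled. If we migrated at a bad moment and this
	 * CPU's array is depleted, the allocation falls back to the
	 * slow path, possibly dipping into reserves.
	 */
	spin_lock_irqsave(my_lock, flags);

	obj = kmem_cache_alloc(my_cache, GFP_ATOMIC);
	/* ... use obj ... */

	spin_unlock_irqrestore(my_lock, flags);

	return obj ? 0 : -ENOMEM;
}
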

> Thanks!
> 
> --
> Hyeonggon
