Message-ID: <7cd33b23-64f5-a736-ea69-b29e40d42e78@suse.cz>
Date: Mon, 4 Dec 2023 17:58:08 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Chengming Zhou <chengming.zhou@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>
Cc: cl@...ux.com, penberg@...nel.org, rientjes@...gle.com,
iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
roman.gushchin@...ux.dev, linux-mm@...ck.org,
linux-kernel@...r.kernel.org,
Chengming Zhou <zhouchengming@...edance.com>
Subject: Re: [PATCH v5 6/9] slub: Delay freezing of partial slabs
On 12/3/23 11:15, Chengming Zhou wrote:
> On 2023/12/3 14:53, Hyeonggon Yoo wrote:
>> On Thu, Nov 2, 2023 at 12:25 PM <chengming.zhou@...ux.dev> wrote:
>>>
>>> From: Chengming Zhou <zhouchengming@...edance.com>
>>>
>>> Currently we freeze slabs when moving them from the node partial list
>>> to the cpu partial list; this method needs two cmpxchg_double operations:
>>>
>>> 1. freeze the slab (acquire_slab()) under the node list_lock
>>> 2. get_freelist() when the slab is picked for use in ___slab_alloc()
>>>
>>> Actually we don't need to freeze when moving slabs out of the node
>>> partial list; we can delay freezing until the slab freelist is used
>>> in ___slab_alloc(), so we can save one cmpxchg_double().
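>>>
>>> For illustration, the two paths compare roughly like this (a simplified
>>> sketch, not the literal diff; locking details, error handling and stats
>>> are elided):
>>>
>>>     /* before: two cmpxchg_double operations per partial slab */
>>>     spin_lock_irqsave(&n->list_lock, flags);
>>>     freelist = acquire_slab(s, n, slab, mode);  /* cmpxchg #1 */
>>>     spin_unlock_irqrestore(&n->list_lock, flags);
>>>     ...
>>>     freelist = get_freelist(s, slab);           /* cmpxchg #2 */
>>>
>>>     /* after: plain list manipulation; freeze only when used */
>>>     spin_lock_irqsave(&n->list_lock, flags);
>>>     remove_partial(n, slab);                    /* no cmpxchg */
>>>     spin_unlock_irqrestore(&n->list_lock, flags);
>>>     ...
>>>     freelist = freeze_slab(s, slab);            /* single cmpxchg */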
>>>
>>> And there are other good points:
>>> - The movement of slabs between the node partial list and the cpu
>>>   partial list becomes simpler, since we don't need to freeze or
>>>   unfreeze at all.
>>>
>>> - There is less contention on the node list_lock, since we no longer
>>>   freeze any slab under the node list_lock.
>>>
>>> We can achieve this because no concurrent path manipulates the partial
>>> slab list except the __slab_free() path, which is now serialized by
>>> slab_test_node_partial() under the list_lock.
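>>>
>>> Roughly, the __slab_free() side now does something like this under the
>>> list_lock (a sketch of the idea, not the exact code; `prior` stands for
>>> the slab's old freelist as observed by the cmpxchg retry loop):
>>>
>>>     spin_lock_irqsave(&n->list_lock, flags);
>>>     on_node_partial = slab_test_node_partial(slab);
>>>     ...
>>>     /*
>>>      * Partially empty but not on the node partial list: someone else
>>>      * already took it off the list (and may freeze it later), so we
>>>      * must not touch its list linkage here.
>>>      */
>>>     if (prior && !on_node_partial) {
>>>             spin_unlock_irqrestore(&n->list_lock, flags);
>>>             return;
>>>     }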
>>>
>>> Since the slab returned by the get_partial() interfaces is not frozen
>>> anymore and no freelist is returned in the partial_context, we need to
>>> use the introduced freeze_slab() to freeze it and get its freelist.
>>>
>>> Similarly, the slabs on the CPU partial list are not frozen anymore,
>>> so we need to call freeze_slab() on them before use.
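>>>
>>> For reference, freeze_slab() amounts to roughly the following (a sketch
>>> of what the newly introduced helper does; details such as the VM_BUG_ON
>>> and the per-config locking inside slab_update_freelist() may differ):
>>>
>>>     static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
>>>     {
>>>             struct slab new;
>>>             unsigned long counters;
>>>             void *freelist;
>>>
>>>             do {
>>>                     freelist = slab->freelist;
>>>                     counters = slab->counters;
>>>
>>>                     /* take the whole freelist and mark the slab frozen */
>>>                     new.counters = counters;
>>>                     new.inuse = slab->objects;
>>>                     new.frozen = 1;
>>>             } while (!slab_update_freelist(s, slab,
>>>                             freelist, counters,
>>>                             NULL, new.counters,
>>>                             "freeze_slab"));
>>>
>>>             return freelist;
>>>     }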
>>>
>>> We can now delete acquire_slab(), as it has become unused.
>>>
>>> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
>>> Reviewed-by: Vlastimil Babka <vbabka@...e.cz>
>>> Tested-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
>>> ---
>>> mm/slub.c | 113 +++++++++++-------------------------------------------
>>> 1 file changed, 23 insertions(+), 90 deletions(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index edf567971679..bcb5b2c4e213 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -2234,51 +2234,6 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s,
>>> return object;
>>> }
>>>
>>> -/*
>>> - * Remove slab from the partial list, freeze it and
>>> - * return the pointer to the freelist.
>>> - *
>>> - * Returns a list of objects or NULL if it fails.
>>> - */
>>> -static inline void *acquire_slab(struct kmem_cache *s,
>>> - struct kmem_cache_node *n, struct slab *slab,
>>> - int mode)
>>
>> Nit: alloc_single_from_partial()'s comment still refers to acquire_slab().
>>
>
> Ah, right! It should be changed to remove_partial().
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 437485a2408d..623c17a4cdd6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2463,7 +2463,7 @@ static inline void remove_partial(struct kmem_cache_node *n,
> }
>
> /*
> - * Called only for kmem_cache_debug() caches instead of acquire_slab(), with a
> + * Called only for kmem_cache_debug() caches instead of remove_partial(), with a
> * slab from the n->partial list. Remove only a single object from the slab, do
> * the alloc_debug_processing() checks and leave the slab on the list, or move
> * it to full list if it was the last free object.
>
> Hi Vlastimil, could you please fold this in?
Done, thanks.