Message-ID: <9578474c-2e46-4d3e-9a2f-1eaeb9bfabbc@oracle.com>
Date: Wed, 13 Mar 2024 17:38:21 -0700
From: Jianfeng Wang <jianfeng.w.wang@...cle.com>
To: "Christoph Lameter (Ampere)" <cl@...ux.com>,
Vlastimil Babka <vbabka@...e.cz>
Cc: Chengming Zhou <chengming.zhou@...ux.dev>,
David Rientjes <rientjes@...gle.com>, penberg@...nel.org,
iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
roman.gushchin@...ux.dev, 42.hyeyoo@...il.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] slub: avoid scanning all partial slabs in get_slabinfo()
On 2/28/24 1:51 AM, Chengming Zhou wrote:
> On 2024/2/28 06:55, Christoph Lameter (Ampere) wrote:
>> On Tue, 27 Feb 2024, Chengming Zhou wrote:
>>
>>>> We could mark the state change (list ownership) in the slab metadata and then abort the scan if the state mismatches the list.
>>>
>>> It seems feasible, maybe something like below?
>>>
>>> But this way requires all kmem_caches to have SLAB_TYPESAFE_BY_RCU, right?
>>
>> No.
>>
>> If a slab is freed to the page allocator and the fields are reused in a different way then we would have to wait till the end of the RCU period. This could be done with a deferred free. Otherwise we have the type checking to ensure that nothing untoward happens in the RCU period.
>>
>> The usual shuffle of pages between freelists/cpulists/cpuslab and fully used slabs would not require that.
>
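For reference, SLUB already defers the page free in this way for SLAB_TYPESAFE_BY_RCU
caches, via call_rcu() and the slab's rcu_head. Below is only a hypothetical sketch of
such a deferred free, not the actual SLUB code; free_slab_pages() and defer_free_slab()
are made-up names:

```c
/*
 * Hypothetical sketch (not actual SLUB code): the slab page goes back
 * to the buddy allocator only after an RCU grace period, so a scanner
 * inside rcu_read_lock() never sees its fields reused for something else.
 */
static void slab_free_rcu(struct rcu_head *head)
{
	struct slab *slab = container_of(head, struct slab, rcu_head);

	free_slab_pages(slab);	/* made-up helper: return pages to buddy */
}

static void defer_free_slab(struct slab *slab)
{
	/* The callback runs after all pre-existing RCU readers finish. */
	call_rcu(&slab->rcu_head, slab_free_rcu);
}
```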
> IIUC, your method doesn't need the slab struct (page) to be delay-freed by RCU.
>
> So the page struct may be reused for anything by the buddy allocator, maybe even
> freed, right?
>
> I'm not sure the RCU read lock alone is enough protection here; do we need to hold
> another lock, like the memory hotplug lock?
>
>>
>>> Not sure if this is acceptable, since it may randomly delay the freeing of memory.
>>>
>>> ```
>>> retry:
>>> rcu_read_lock();
>>>
>>> h = rcu_dereference(list_next_rcu(&n->partial));
>>>
>>> while (h != &n->partial) {
>>
>> Hmm... a linked list that forms a circle? Linked lists usually terminate in a NULL pointer.
>
> I think the node partial list has to be a doubly-linked list, since we need to be
> able to add a slab at either its head or its tail.
>
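For context, the node partial list is the kernel's circular struct list_head: the list
head acts as a sentinel, so iteration stops when the cursor wraps back around to
&n->partial rather than hitting NULL. A minimal userspace analogue of that termination
condition (illustration only):

```c
#include <stdio.h>

/* Minimal stand-in for the kernel's circular struct list_head. */
struct list_head {
	struct list_head *next, *prev;
};

struct slab {
	struct list_head slab_list;	/* must stay the first member for the cast below */
	int nr_free;
};

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

int main(void)
{
	struct list_head partial = { &partial, &partial };	/* empty list: head points at itself */
	struct slab slabs[3] = { { .nr_free = 1 }, { .nr_free = 2 }, { .nr_free = 3 } };
	struct list_head *h;
	int total = 0;

	for (int i = 0; i < 3; i++)
		list_add_tail(&slabs[i].slab_list, &partial);

	/* Same shape as the quoted loop above: stop when we wrap back to the head. */
	for (h = partial.next; h != &partial; h = h->next)
		total += ((struct slab *)h)->nr_free;

	printf("total free objects: %d\n", total);	/* prints 6 */
	return 0;
}
```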
>>
>> So this would be
>>
>>
>> redo:
>>
>> 	<zap counters>
>> 	rcu_read_lock();
>> 	h = <first>;
>>
>> 	while (h && h->type == <our_type>) {
>> 		<count h somethings>
>>
>> 		/* Maybe check h->type again */
>> 		if (h->type != <our_type>)
>> 			break;
>
> Another problem with this lockless recheck is that we may still get a badly wrong
> value: say a slab is removed from the node list and then re-added to our list at a
> different position, so it still passes the recheck here. Couldn't that make the
> count very inaccurate?
>
> Thanks!
>
>>
>> 		h = <next>;
>> 	}
>>
>> 	rcu_read_unlock();
>>
>>
>> 	if (!h) /* Type of list changed under us */
>> 		goto redo;
>>
>>
>> The check for type == <our_type> is racy. Maybe we can ignore that or we could do something additional.
>>
>> Using RCU does not make sense if you add locking in the inner loop. Then it gets too complicated and causes delay. This must be a simple fast lockless loop in order to do what we need.
>>
>> Presumably the type and list pointers are in the same cacheline and thus could be made to update coherently, if properly sequenced with fences etc.
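Concretely, I read the proposed loop as something like the sketch below. This is only
my reading, not actual SLUB code: the `owner` field is a hypothetical ownership marker
standing in for whatever ends up in the slab metadata, and it assumes slab pages are
not handed back to the buddy allocator before a grace period (like the deferred free
above).

```c
/* Hypothetical assembly of the proposed lockless scan. */
static unsigned long count_partial_lockless(struct kmem_cache_node *n)
{
	struct slab *slab;
	unsigned long x;

redo:
	x = 0;
	rcu_read_lock();
	list_for_each_entry_rcu(slab, &n->partial, slab_list) {
		/* Hypothetical marker: restart if the slab left our list. */
		if (READ_ONCE(slab->owner) != n) {
			rcu_read_unlock();
			goto redo;
		}
		x += slab->objects - slab->inuse;	/* free objects on this slab */
	}
	rcu_read_unlock();
	return x;
}
```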
Even with something like the above, I am not sure the RCU change would solve the
lockup problem. The root cause is that iterating a very long partial list is a problem
in itself: on a non-preemptible kernel, count_partial() can be stuck in the loop long
enough to trigger soft lockups.

Also, even if we check list ownership for each slab, we may still spend too much time
in a single pass when no updater shows up, or abort and redo the loop many times when
updates are frequent; the latter would make the lockup issue worse. So, in the end,
reading /proc/slabinfo can still take a very long time, all for a counter that may be
changing constantly anyway.

Thus, I prefer the "guesstimate" approach, even if the number is inaccurate or biased.
Let me know if this makes sense.
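To make "guesstimate" concrete: the idea in my patch is to count free objects on at
most a fixed number of partial slabs under n->list_lock and extrapolate from
n->nr_partial. Roughly like the sketch below; the cap value, helper name, and the
exact extrapolation are illustrative, not final:

```c
/* Illustrative sketch of the capped scan plus extrapolation. */
#define MAX_PARTIAL_TO_SCAN	10000

static unsigned long count_partial_free_approx(struct kmem_cache_node *n)
{
	unsigned long flags, counted = 0, scanned = 0;
	struct slab *slab;

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(slab, &n->partial, slab_list) {
		counted += slab->objects - slab->inuse;
		if (++scanned >= MAX_PARTIAL_TO_SCAN)
			break;	/* bound the time spent under the lock */
	}
	spin_unlock_irqrestore(&n->list_lock, flags);

	/* If we stopped early, scale the count by the full list length. */
	if (scanned < n->nr_partial)
		counted = counted * n->nr_partial / scanned;

	return counted;
}
```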