Message-ID: <a5141311-38bf-421e-9058-6c278840fb97@linux.dev>
Date: Mon, 25 Mar 2024 16:49:32 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Vlastimil Babka <vbabka@...e.cz>, linke li <lilinke99@...com>
Cc: xujianhao01@...il.com, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm/slub: mark racy accesses on slab->slabs
On 2024/3/25 16:48, Vlastimil Babka wrote:
> On 3/21/24 4:48 AM, linke li wrote:
>> The reads of slab->slabs are racy because it may be changed by
>> put_cpu_partial() concurrently. In slabs_cpu_partial_show() and
>> show_slab_objects(), slab->slabs is only used for showing information.
>>
>> Data-racy reads from shared variables that are used only for diagnostic
>> purposes should typically use data_race(), since it is normally not a
>> problem if the values are off by a little.
>>
>> This patch is aimed at reducing the number of benign races reported by
>> KCSAN in order to focus future debugging effort on harmful races.
>>
>> Signed-off-by: linke li <lilinke99@...com>
>> Reviewed-by: Chengming Zhou <chengming.zhou@...ux.dev>
>
> Chengming provided feedback to v1 but didn't offer a Reviewed-by, AFAICS? Or
> maybe he will offer it now? :)
Ah, right.
Reviewed-by: Chengming Zhou <chengming.zhou@...ux.dev>
Thanks.
>
> Vlastimil
>
>> ---
>> mm/slub.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 2ef88bbf56a3..0d700f6ca547 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -6052,7 +6052,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
>>  				else if (flags & SO_OBJECTS)
>>  					WARN_ON_ONCE(1);
>>  				else
>> -					x = slab->slabs;
>> +					x = data_race(slab->slabs);
>>  				total += x;
>>  				nodes[node] += x;
>>  			}
>> @@ -6257,7 +6257,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>>  		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>>
>>  		if (slab)
>> -			slabs += slab->slabs;
>> +			slabs += data_race(slab->slabs);
>>  	}
>>  #endif
>>
>> @@ -6271,7 +6271,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
>>
>>  		slab = slub_percpu_partial(per_cpu_ptr(s->cpu_slab, cpu));
>>  		if (slab) {
>> -			slabs = READ_ONCE(slab->slabs);
>> +			slabs = data_race(slab->slabs);
>>  			objects = (slabs * oo_objects(s->oo)) / 2;
>>  			len += sysfs_emit_at(buf, len, " C%d=%d(%d)",
>>  					     cpu, objects, slabs);
>