Message-ID: <cb9c5a05-d97d-49ff-8a69-aed0f5e73f1e@huawei.com>
Date: Thu, 25 Aug 2022 17:48:46 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>, <linux-mm@...ck.org>
CC: <muchun.song@...ux.dev>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/2] mm: silence possible data races about pgdat->kswapd
On 2022/8/25 16:22, David Hildenbrand wrote:
> On 25.08.22 04:34, Kefeng Wang wrote:
>> On 2022/8/24 16:24, David Hildenbrand wrote:
>>> On 24.08.22 09:19, Kefeng Wang wrote:
>>>> The pgdat->kswapd field can be accessed concurrently by kswapd_run() and
>>>> kcompactd(). It is not protected by any lock, which could lead to
>>>> data races; add READ/WRITE_ONCE() to silence them.
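
For reference, a minimal sketch of what the READ/WRITE_ONCE() annotation
described above could look like on the reader side (illustrative only, not
the actual patch hunks; function and field names follow the diff further
down in this mail):

static bool kswapd_is_running(pg_data_t *pgdat)
{
	/*
	 * Snapshot the pointer once so a concurrent update from
	 * kswapd_run()/kswapd_stop() cannot be torn or re-read between
	 * the NULL check and the task_is_running() check.
	 */
	struct task_struct *kswapd = READ_ONCE(pgdat->kswapd);

	return kswapd && task_is_running(kswapd);
}

The writer side in kswapd_run()/kswapd_stop() would pair with this by
publishing the pointer via WRITE_ONCE(pgdat->kswapd, ...).
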
>>> Okay, I think this patch here makes it clearer that we really just want
>>> proper synchronization instead of hacking around it.
>>>
>>> What speaks against protecting pgdat->kswapd using some proper
>>> locking primitive?
>> As the comment about kswapd in struct pglist_data says, pgdat->kswapd should be
>> protected by mem_hotplug_begin/done(). How about this way?
>>
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 640fa76228dd..62018f35242a 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1983,7 +1983,13 @@ static inline bool is_via_compact_memory(int order)
>>
>> static bool kswapd_is_running(pg_data_t *pgdat)
>> {
>> - return pgdat->kswapd && task_is_running(pgdat->kswapd);
>> + bool running;
>> +
>> + mem_hotplug_begin();
>> + running = pgdat->kswapd && task_is_running(pgdat->kswapd);
>> + mem_hotplug_end();
>> +
>> + return running;
>> }
> I'd much rather just use a dedicated lock that does not involve memory
> hotplug.
The issue only occurs due to memory hotplug; without mem-hotplug,
kswapd won't stop or re-run, so the above issue does not exist. A new
lock would duplicate that protection, though with a smaller scope. I could
repost with a new lock if there are no more comments.
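
A minimal sketch of that dedicated-lock variant, assuming a new per-node
mutex (the kswapd_lock field name and its placement are assumptions for
illustration, not something settled in this thread):

/* In struct pglist_data (include/linux/mmzone.h), illustrative only: */
	struct mutex kswapd_lock;	/* assumed name: protects pgdat->kswapd */

/* mm/compaction.c */
static bool kswapd_is_running(pg_data_t *pgdat)
{
	bool running;

	/* Serialize against kswapd_run()/kswapd_stop() updating pgdat->kswapd. */
	mutex_lock(&pgdat->kswapd_lock);
	running = pgdat->kswapd && task_is_running(pgdat->kswapd);
	mutex_unlock(&pgdat->kswapd_lock);

	return running;
}

kswapd_run() and kswapd_stop() would take the same mutex around the
assignment and kthread_stop() of pgdat->kswapd, so the lock only covers
the pointer itself rather than the whole hotplug path.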