Message-ID: <97551857-b2fe-eb26-88a0-780951b873d7@linux.alibaba.com>
Date: Fri, 7 Apr 2023 10:20:48 +0800
From: Rongwei Wang <rongwei.wang@...ux.alibaba.com>
To: Aaron Lu <aaron.lu@...el.com>
Cc: akpm@...ux-foundation.org, bagasdotme@...il.com,
willy@...radead.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, stable@...r.kernel.org
Subject: Re: [PATCH v2] mm/swap: fix swap_info_struct race between swapoff and
get_swap_pages()
On 2023/4/6 22:57, Aaron Lu wrote:
> On Thu, Apr 06, 2023 at 10:04:16PM +0800, Aaron Lu wrote:
>> On Tue, Apr 04, 2023 at 11:47:16PM +0800, Rongwei Wang wrote:
>>> The si->lock must be held when deleting the si from
>>> the available list. Otherwise, another thread can
>>> re-add the si to the available list, which can lead
>>> to memory corruption. The only place we have found
>>> where this happens is in the swapoff path. This case
>>> can be described as below:
>>>
>>> core 0                     core 1
>>> swapoff
>>>
>>> del_from_avail_list(si)    waiting
>>>
>>> try lock si->lock          acquire swap_avail_lock
>>>                            and re-add si into
>>>                            swap_avail_head
>> confused here.
>>
>> If del_from_avail_list(si) finished in the swapoff path, then this si
>> should not exist in any of the per-node avail lists and core 1 should
>> not be able to re-add it.
> I think a possible sequence could be like this:
>
> cpuX                        cpuY
> swapoff                     put_swap_folio()
>
> del_from_avail_list(si)
>                             taken si->lock
>                               spin_lock(&si->lock);
>
>                             swap_range_free()
>                               was_full && SWP_WRITEOK -> re-add!
>                             drop si->lock
>
> taken si->lock
> proceed removing si
>
> End result: si left on avail_list after being swapped off.
>
> The problem is, in add_to_avail_list(), it has no idea this si is being
> swapped off and taking si->lock then del_from_avail_list() could avoid
> this problem, so I think this patch did the right thing but the
> changelog about how this happened needs updating and after that:
Hi Aaron

That's my fault. Actually, I didn't refer specifically to the
swap_range_free() path in the commit message, mainly because cpuY can
stand for any thread that is waiting on swap_avail_lock and then calls
add_to_avail_list(), not only swap_range_free(); e.g. it could also be
swapon().
>
> Reviewed-by: Aaron Lu <aaron.lu@...el.com>
Thanks for your time.
-wrw
>
> Thanks,
> Aaron
>