lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <081a9c21-a1d1-4ca2-a58f-e6748ccb429c@bytedance.com>
Date: Fri, 26 Jan 2024 08:10:13 +0800
From: Chengming Zhou <zhouchengming@...edance.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Johannes Weiner <hannes@...xchg.org>,
 Andrew Morton <akpm@...ux-foundation.org>, Nhat Pham <nphamcs@...il.com>,
 Chris Li <chrisl@...nel.org>, Huang Ying <ying.huang@...el.com>,
 linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in
 zswap_swapoff()

On 2024/1/26 08:05, Yosry Ahmed wrote:
> On Thu, Jan 25, 2024 at 4:03 PM Chengming Zhou
> <zhouchengming@...edance.com> wrote:
>>
>> On 2024/1/25 15:53, Yosry Ahmed wrote:
>>>> Hello,
>>>>
>>>> I also thought about this problem for some time, maybe something like below
>>>> can be changed to fix it? It's likely I missed something, just some thoughts.
>>>>
>>>> IMHO, the problem is caused by the different way in which we use zswap entry
>>>> in the writeback, that should be much like zswap_load().
>>>>
>>>> The zswap_load() comes in with the folio locked in swap cache, so it has
>>>> stable zswap tree to search and lock... But in writeback case, we don't,
>>>> shrink_memcg_cb() comes in with only a zswap entry with lru list lock held,
>>>> then release lru lock to get tree lock, which maybe freed already.
>>>>
>>>> So we should change here, we read swpentry from entry with lru list lock held,
>>>> then release lru lock, to try to lock corresponding folio in swap cache,
>>>> if we success, the following things is much the same like zswap_load().
>>>> We can get tree lock, to recheck the invalidate race, if no race happened,
>>>> we can make sure the entry is still right and get refcount of it, then
>>>> release the tree lock.
>>>
>>> Hmm I think you may be onto something here. Moving the swap cache
>>> allocation ahead before referencing the tree should give us the same
>>> guarantees as zswap_load() indeed. We can also consolidate the
>>> invalidate race checks (right now we have one in shrink_memcg_cb() and
>>> another one inside zswap_writeback_entry()).
>>>
>>> We will have to be careful about the error handling path to make sure
>>> we delete the folio from the swap cache only after we know the tree
>>> won't be referenced anymore. Anyway, I think this can work.
>>>
>>> On a separate note, I think there is a bug in zswap_writeback_entry()
>>> when we delete a folio from the swap cache. I think we are missing a
>>> folio_unlock() there.
>>>
>>
>> Hi, want to know if you are preparing the fix patch, I would just wait to
>> review if you are. Or I can work on it if you are busy with other thing.
> 
> If you're talking about implementing your solution, I was assuming you
> were going to send a patch out (and hoping others would chime in in
> case I missed something).

Ok, I will prepare a patch to send out for further discussion.

> 
> I can take a stab at implementing it if you prefer that, just let me know.

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ