Message-ID: <CAJD7tkYuYEsKFvjKKRxOx3fCekA03jPpOpmV7T20q=9K=Jb2bA@mail.gmail.com>
Date: Fri, 22 Mar 2024 12:37:27 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: chengming.zhou@...ux.dev
Cc: hannes@...xchg.org, nphamcs@...il.com, akpm@...ux-foundation.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Zhongkun He <hezhongkun.hzk@...edance.com>
Subject: Re: [RFC PATCH] mm: add folio in swapcache if swapin from zswap

On Fri, Mar 22, 2024 at 9:40 AM <chengming.zhou@...ux.dev> wrote:
>
> From: Chengming Zhou <chengming.zhou@...ux.dev>
>
> There is a report of data corruption caused by double swapin, which is
> only possible in the skip swapcache path on SWP_SYNCHRONOUS_IO backends.
>
> The root cause is that zswap, unlike other "normal" swap backends, does
> not keep a copy of the data after the first swapin. So if the folio read
> in on the first swapin can't be installed in the page table successfully,
> it is simply freed. On the second swapin nothing can be found in zswap,
> stale data is read from the swapfile instead, and the data corruption
> results.
>
> We can fix it by always adding the folio to the swapcache if we know the
> pinned swap entry can be found in zswap, so it won't get freed even if it
> can't be installed successfully on the first swapin.
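
A minimal userspace model of the race described above may make it
clearer; all names here (zswap_copy, swapcache, first_swapin_fails, ...)
are purely illustrative and are not the real kernel interfaces:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy model: the zswap copy is dropped after the first (exclusive) load,
 * while the on-disk swap slot still holds stale data. */
static char zswap_copy[16] = "new data";
static bool zswap_valid = true;
static char swapfile[16] = "stale data";
static char swapcache[16];           /* proposed fix: keep the folio findable */
static bool in_swapcache = false;

/* First swapin: loads from zswap, then fails to install the folio. */
static void first_swapin_fails(bool use_swapcache)
{
    char folio[16];

    memcpy(folio, zswap_copy, sizeof(folio));
    zswap_valid = false;             /* exclusive load: zswap copy is gone */

    if (use_swapcache) {
        /* Proposed fix: add the folio to the swapcache before it can be freed. */
        memcpy(swapcache, folio, sizeof(swapcache));
        in_swapcache = true;
    }
    /* pte changed under us -> install fails, the folio is freed. */
}

/* Second swapin of the same entry. */
static const char *second_swapin(void)
{
    if (in_swapcache)
        return swapcache;            /* fixed path: correct data  */
    if (zswap_valid)
        return zswap_copy;           /* never true after step one */
    return swapfile;                 /* buggy path: stale data    */
}

int main(void)
{
    first_swapin_fails(false);
    printf("without swapcache: %s\n", second_swapin());  /* "stale data" */

    zswap_valid = true;
    in_swapcache = false;
    first_swapin_fails(true);
    printf("with swapcache:    %s\n", second_swapin());  /* "new data"   */
    return 0;
}
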
A concurrent faulting thread could have already checked the swapcache
before we add the folio to it, right? In this case, that thread will
go ahead and call swap_read_folio() anyway.

Also, I suspect the zswap lookup might hurt performance. Would it be
better to add the folio back to zswap upon failure? This should be
detectable by checking if the folio is dirty as I mentioned in the bug
report thread.
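
A rough sketch of that alternative, again with purely illustrative names
rather than real kernel APIs: if the install fails and the folio is
dirty, it is the only remaining copy, so re-store it into zswap before
freeing it instead of adding the folio to the swapcache up front:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static char zswap_copy[16] = "new data";
static bool zswap_valid = true;
static char swapfile[16] = "stale data";   /* never rewritten, stays stale */

/* Exclusive load: the zswap copy is invalidated once it is read. */
static void exclusive_load(char *folio)
{
    memcpy(folio, zswap_copy, sizeof(zswap_copy));
    zswap_valid = false;
}

/* Alternative fix: on install failure, write a dirty folio back to zswap. */
static void install_failed(const char *folio, bool dirty)
{
    if (dirty) {
        memcpy(zswap_copy, folio, sizeof(zswap_copy));
        zswap_valid = true;
    }
}

int main(void)
{
    char folio[16];

    exclusive_load(folio);
    install_failed(folio, true);             /* dirty folio is the only copy */
    printf("second swapin sees: %s\n",
           zswap_valid ? zswap_copy : swapfile);  /* prints "new data" */
    return 0;
}
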