Message-ID: <CAJD7tkZEJ1hzSoQ-8yG5-TYJBKUXCBNAW8Kdh=FSG3W0f_Dc4w@mail.gmail.com>
Date: Thu, 25 Jan 2024 00:03:52 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Andrew Morton <akpm@...ux-foundation.org>,
Nhat Pham <nphamcs@...il.com>, Chris Li <chrisl@...nel.org>, Huang Ying <ying.huang@...el.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()
On Wed, Jan 24, 2024 at 11:53 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> > Hello,
> >
> > I have also been thinking about this problem for some time. Maybe something
> > like the below can fix it? It's likely I missed something; just some thoughts.
> >
> > IMHO, the problem is caused by the writeback path using the zswap entry in a
> > different way than zswap_load(); it should work much like zswap_load() does.
> >
> > zswap_load() comes in with the folio locked in the swap cache, so it has a
> > stable zswap tree to search and lock... But in the writeback case we don't:
> > shrink_memcg_cb() comes in with only a zswap entry and the lru list lock held,
> > then releases the lru lock to take the tree lock, and by then the tree may
> > already have been freed (by swapoff).
> >
> > So we should change this: read the swpentry from the entry with the lru list
> > lock held, then release the lru lock and try to lock the corresponding folio
> > in the swap cache. If we succeed, what follows is much the same as in
> > zswap_load(): we can take the tree lock to recheck for the invalidate race,
> > and if no race happened, we know the entry is still valid, take a refcount
> > on it, and then release the tree lock.
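> >
> > A rough, untested sketch of that flow (pseudo-code; the names are from my
> > reading of the current mm/zswap.c, and the details may be wrong):
> >
> > static enum lru_status shrink_memcg_cb(struct list_head *item, ...)
> > {
> >         struct zswap_entry *entry = container_of(item, struct zswap_entry, lru);
> >         bool folio_was_allocated;
> >         struct zswap_tree *tree;
> >         struct folio *folio;
> >         swp_entry_t swpentry;
> >
> >         /* Only deref the entry while the lru lock still pins it. */
> >         swpentry = entry->swpentry;
> >         list_lru_isolate(l, item);
> >         spin_unlock(lock);
> >
> >         /* Allocate and lock the folio in the swap cache first, like
> >          * swapin does; the locked swap cache folio keeps the swap
> >          * entry (and so the tree) stable before we touch tree->lock. */
> >         folio = __read_swap_cache_async(swpentry, GFP_KERNEL, ...,
> >                                         &folio_was_allocated);
> >         if (!folio || !folio_was_allocated)
> >                 goto fail;
> >
> >         /* Same guarantees as zswap_load() from here on. */
> >         tree = zswap_trees[swp_type(swpentry)];
> >         spin_lock(&tree->lock);
> >         if (zswap_rb_search(&tree->rbroot, swp_offset(swpentry)) != entry)
> >                 goto race;      /* invalidated behind our back */
> >         zswap_entry_get(entry);
> >         spin_unlock(&tree->lock);
> >         ...
> > }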
>
> Hmm, I think you may be onto something here. Moving the swap cache
> allocation ahead of referencing the tree should indeed give us the same
> guarantees as zswap_load(). We can also consolidate the
> invalidate race checks (right now we have one in shrink_memcg_cb() and
> another one inside zswap_writeback_entry()).
>
> We will have to be careful about the error handling path to make sure
> we delete the folio from the swap cache only after we know the tree
> won't be referenced anymore. Anyway, I think this can work.
>
> On a separate note, I think there is a bug in zswap_writeback_entry()
> when we delete a folio from the swap cache. I think we are missing a
> folio_unlock() there.
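>
> Concretely, something like this in the invalidate race path (a sketch
> from my reading of the current code; the surrounding details may differ):
>
>         /* Check for invalidate() race */
>         spin_lock(&tree->lock);
>         if (zswap_rb_search(&tree->rbroot, swp_offset(swpentry)) != entry) {
>                 spin_unlock(&tree->lock);
>                 delete_from_swap_cache(folio);
>                 folio_unlock(folio);    /* <-- this seems to be missing today */
>                 folio_put(folio);       /* and the swap cache ref, I believe */
>                 return -ENOMEM;
>         }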
>
> >
> > The main differences between this writeback path and zswap_load() are the
> > handling of the lru entry and the tree lifetime. The whole of zswap_load()
> > runs with a stable reference on the zswap tree, but that doesn't hold for
> > the latter part of shrink_memcg_cb() after __swap_writepage(), since we
> > unlock the folio there. So we can't reference the tree after that point.
> >
> > That problem is easy to fix: we can call zswap_invalidate_entry(tree, entry)
> > early, while still holding the tree lock, since the writeback can't fail
> > after that point. BTW, I think we should also call zswap_invalidate_entry()
> > early in zswap_load() and only support the zswap_exclusive_loads_enabled
> > mode, but that's another topic.
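> >
> > I.e., something like this in zswap_writeback_entry(), once decompression
> > has succeeded and before __swap_writepage() (untested):
> >
> >         /*
> >          * Writeback can't fail from here on, and we must not touch
> >          * the tree once __swap_writepage() unlocks the folio, so
> >          * drop the tree's reference to the entry now, while the
> >          * tree is still known to be alive.
> >          */
> >         spin_lock(&tree->lock);
> >         zswap_invalidate_entry(tree, entry);
> >         spin_unlock(&tree->lock);
> >
> >         /* ... then __swap_writepage() and the folio cleanup as before */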
>
> zswap_invalidate_entry() actually doesn't seem to be using the tree at all.
Never mind, I was looking at zswap_entry_put().