Message-ID: <CAF8kJuOvjJ1ARzAGMVheDgq6tpUM76BZ9GggWj7CB=J3XgU6mw@mail.gmail.com>
Date: Thu, 25 Jan 2024 11:03:47 -0800
From: Chris Li <chrisl@...nel.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Chengming Zhou <zhouchengming@...edance.com>, Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>, Nhat Pham <nphamcs@...il.com>,
Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()
On Thu, Jan 25, 2024 at 12:02 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> > > // lru list lock held
> > > shrink_memcg_cb()
> > >         swpentry = entry->swpentry
> > >         // don't isolate entry from lru list here,
> > >         // just use list_lru_putback()
> > >         spin_unlock(lru list lock)
> > >
> > >         folio = __read_swap_cache_async(swpentry)
> > >         if (!folio)
> > >                 return
> > >
> > >         if (!folio_was_allocated)
> > >                 folio_put(folio)
> > >                 return
> > >
> > >         // folio is locked, swapcache is secured against swapoff
> > >         tree = get tree from swpentry
> > >         spin_lock(&tree->lock)
> >
> > That will not work well with the zswap-to-xarray change. We want to
> > remove the tree lock and use only the xarray lock. The lookup should
> > just hold the xarray RCU read lock and return the entry with its
> > refcount increased.
>
> In this path, we also invalidate the zswap entry, which would require
> holding the xarray lock anyway.
It will drop the RCU read lock after finding the entry and re-acquire
the xarray spin lock for the invalidation. In between there is a brief
window where no lock is held.
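
Something like this is what I have in mind. Rough sketch only: the
struct layout and the zswap_* helper names below are made up for
illustration and are not from any posted patch; only the xarray, RCU
and refcount primitives are the real kernel APIs.

#include <linux/xarray.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>

/* Minimal stand-in for the real struct zswap_entry. */
struct zswap_entry {
	refcount_t refcount;
	struct rcu_head rcu;
	/* ... compressed object state ... */
};

static void zswap_entry_put(struct zswap_entry *entry)
{
	if (refcount_dec_and_test(&entry->refcount))
		kfree_rcu(entry, rcu);	/* free after a grace period */
}

/* Lookup: RCU read lock only, return the entry with a ref held. */
static struct zswap_entry *zswap_xa_find_get(struct xarray *xa,
					     pgoff_t offset)
{
	struct zswap_entry *entry;

	rcu_read_lock();
	entry = xa_load(xa, offset);
	/* Fails if the entry is already on its way to being freed. */
	if (entry && !refcount_inc_not_zero(&entry->refcount))
		entry = NULL;
	rcu_read_unlock();

	return entry;
}

/* Invalidation: re-take the xarray spin lock just for the erase. */
static void zswap_xa_invalidate(struct xarray *xa, pgoff_t offset,
				struct zswap_entry *entry)
{
	/*
	 * Between the rcu_read_unlock() in the lookup and the
	 * xa_lock() here, no lock is held; the reference we took
	 * keeps the entry alive across that window.
	 */
	xa_lock(xa);
	/* Only erase if the slot still holds this entry. */
	if (__xa_cmpxchg(xa, offset, entry, NULL, 0) == entry)
		zswap_entry_put(entry);	/* the tree's reference */
	xa_unlock(xa);

	zswap_entry_put(entry);		/* our lookup reference */
}

xa_load() under rcu_read_lock() plus __xa_cmpxchg() under xa_lock() is
the standard xarray pattern; the cmpxchg also covers the case where
someone else invalidated the entry during the unlocked window.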
Chris