Message-ID: <CAJD7tkY=qy+dfKacFOBx4uW6hDJwf20ODBgVWRP919hEY5URnQ@mail.gmail.com>
Date: Thu, 25 Jan 2024 00:01:30 -0800
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Chris Li <chrisl@...nel.org>
Cc: Chengming Zhou <zhouchengming@...edance.com>, Johannes Weiner <hannes@...xchg.org>,
Andrew Morton <akpm@...ux-foundation.org>, Nhat Pham <nphamcs@...il.com>,
Huang Ying <ying.huang@...el.com>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()

> > // lru list lock held
> > shrink_memcg_cb()
> >         swpentry = entry->swpentry
> >         // Don't isolate entry from lru list here, just use list_lru_putback()
> >         spin_unlock(lru list lock)
> >
> >         folio = __read_swap_cache_async(swpentry)
> >         if (!folio)
> >                 return
> >
> >         if (!folio_was_allocated)
> >                 folio_put(folio)
> >                 return
> >
> >         // folio is locked, swapcache is secured against swapoff
> >         tree = get tree from swpentry
> >         spin_lock(&tree->lock)
>
> That will not work well with the zswap to xarray change. We want to remove
> the tree lock and only use the xarray lock.
> The lookup should just hold the xarray RCU read lock and return the entry
> with its refcount increased.

In this path, we also invalidate the zswap entry, which would require
holding the xarray lock anyway.
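
For illustration, here is a rough sketch of what I mean, assuming an
xarray-based tree with refcounted entries freed via RCU. The struct layout
and the helpers zswap_xa_lookup_get(), zswap_xa_invalidate(), and
zswap_entry_put() below are hypothetical, not taken from the actual patches:

#include <linux/xarray.h>
#include <linux/refcount.h>
#include <linux/rcupdate.h>

/* Hypothetical layout: one xarray per zswap tree, refcounted entries. */
struct zswap_tree {
	struct xarray xa;
};

struct zswap_entry {
	refcount_t refcount;
	/* pool, handle, length, ... */
};

/* Hypothetical: drops a reference, frees the entry via RCU when it hits 0. */
void zswap_entry_put(struct zswap_entry *entry);

/*
 * Lookup under the RCU read lock only, no tree lock: the entry is pinned by
 * taking a reference, and refcount_inc_not_zero() fails if the entry is
 * already on its way to being freed.
 */
static struct zswap_entry *zswap_xa_lookup_get(struct zswap_tree *tree,
					       pgoff_t offset)
{
	struct zswap_entry *entry;

	rcu_read_lock();
	entry = xa_load(&tree->xa, offset);
	if (entry && !refcount_inc_not_zero(&entry->refcount))
		entry = NULL;
	rcu_read_unlock();

	return entry;
}

/*
 * Invalidation still has to take the xarray lock to erase the slot, which is
 * the point above: this writeback path cannot avoid the xarray lock anyway.
 */
static void zswap_xa_invalidate(struct zswap_tree *tree, pgoff_t offset,
				struct zswap_entry *entry)
{
	xa_lock(&tree->xa);
	if (__xa_erase(&tree->xa, offset) == entry)
		zswap_entry_put(entry);	/* drop the tree's reference */
	xa_unlock(&tree->xa);
}

So even with the per-tree spinlock gone, this path still serializes on the
xa_lock for the erase.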