Message-ID: <20241018223855.GC81612@cmpxchg.org>
Date: Fri, 18 Oct 2024 18:38:55 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Kairui Song <ryncsn@...il.com>
Cc: Matthew Wilcox <willy@...radead.org>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosryahmed@...gle.com>, Nhat Pham <nphamcs@...il.com>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Chris Li <chrisl@...nel.org>, Barry Song <v-songbaohua@...o.com>,
"Huang, Ying" <ying.huang@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm, zswap: don't touch the XArray lock if there is no
entry to free
On Sat, Oct 19, 2024 at 04:01:18AM +0800, Kairui Song wrote:
> On Sat, Oct 19, 2024 at 3:46 AM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Sat, Oct 19, 2024 at 03:25:25AM +0800, Kairui Song wrote:
> > > 	if (xa_empty(tree))
> > > 		return;
> > >
> > > -	entry = xa_erase(tree, offset);
> > > -	if (entry)
> > > +	rcu_read_lock();
> > > +	entry = xas_load(&xas);
> > > +	if (entry) {
> >
> > You should call xas_reset() here. And I'm not sure it's a great idea to
> > spin waiting for the xa lock while holding the RCU read lock? Probably
> > not awful but I could easily be wrong.
Spinlocks already implicitly acquire an RCU read-side lock before
beginning to spin, so we shouldn't be worse for wear by doing this.
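For reference, the pattern in question is roughly the following (a
sketch only, assuming the XA_STATE(xas, tree, offset) setup from the
patch):

	rcu_read_lock();
	entry = xas_load(&xas);		/* lockless lookup */
	if (entry) {
		xas_lock(&xas);		/* may spin on the xa spinlock while
					 * still inside the RCU read section */
		/* ... erase the entry ... */
		xas_unlock(&xas);
	}
	rcu_read_unlock();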
> Thanks for the review. I thought about it, that could cancel this optimization.
>
> Oh, and there is a thing I forgot to mention (maybe I should add some
> comments about it?). If xas_load found an entry, that entry must be
> pinned by HAS_CACHE or swap slot count right now, and one entry can
> only be freed once.
> So it should be safe here?
>
> This might be a little fragile though, maybe this optimization can
> better be done after some zswap invalidation path cleanup.
This seems fine too, exclusivity during invalidation is a fundamental
property of swap. If a load were possible, we'd be freeing an entry
with ptes still pointing to it (or readahead could hit a slot whose
backing space has been discarded). If a store were possible, we could
write new data into a dead slot and lose it. Even the swapcache bypass
path in do_swap_page() must at least acquire HAS_CACHE due to this.
So from a swap POV, if we find an entry here, it's guaranteed to remain
in the tree by the calling context. The xa lock is for protecting the
tree structure against concurrent changes (e.g. from adjacent entries).
With that said, is there still a way for the tree to change internally
before we acquire the lock? Such that tree + index might end up
pointing to the same contents in a different memory location?
AFAIK there are two possible ways:
- xas_split() - this shouldn't be possible because we don't do large
entries inside the zswap trees.
- xas_shrink() - this could move the entry from a node to xa->head,
iff it's the last entry in the tree and its index is 0. Swap offset
0 is never a valid swap entry (swap header), but unfortunately we
have split trees so it could happen to any offset that is a multiple
of SWAP_ADDRESS_SPACE_PAGES. AFAICS xas_store() doesn't detect such
a transition. And making it do that honestly sounds a bit hairy...
So this doesn't look safe to me without a reload :(
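A rough sketch (untested) of what a version with the reload could look
like, just to show where the reload would go:

void zswap_invalidate(swp_entry_t swp)
{
	pgoff_t offset = swp_offset(swp);
	struct xarray *tree = swap_zswap_tree(swp);
	struct zswap_entry *entry;
	XA_STATE(xas, tree, offset);

	if (xa_empty(tree))
		return;

	rcu_read_lock();
	entry = xas_load(&xas);
	if (entry) {
		xas_lock(&xas);
		/*
		 * The node may have changed between the lockless walk and
		 * taking the lock (e.g. xas_shrink() moving the last entry
		 * to xa->head), so restart the walk under the lock. The
		 * entry itself is pinned by the caller, so we expect to
		 * find the same one again.
		 */
		xas_reset(&xas);
		WARN_ON_ONCE(xas_load(&xas) != entry);
		xas_store(&xas, NULL);
		xas_unlock(&xas);
	}
	rcu_read_unlock();

	if (entry)
		zswap_entry_free(entry);
}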