Message-ID: <20241011182831.GC351101@cmpxchg.org>
Date: Fri, 11 Oct 2024 14:28:31 -0400
From: Johannes Weiner <hannes@...xchg.org>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Kairui Song <kasong@...cent.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Nhat Pham <nphamcs@...il.com>,
Chengming Zhou <chengming.zhou@...ux.dev>,
Chris Li <chrisl@...nel.org>, Barry Song <v-songbaohua@...o.com>,
"Huang, Ying" <ying.huang@...el.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm/zswap: avoid touching XArray for unnecessary
invalidation
On Fri, Oct 11, 2024 at 10:53:31AM -0700, Yosry Ahmed wrote:
> On Fri, Oct 11, 2024 at 10:20 AM Kairui Song <ryncsn@...il.com> wrote:
> >
> > From: Kairui Song <kasong@...cent.com>
> >
> > zswap_invalidate() simply calls xa_erase(), which acquires the XArray
> > lock first and then does the lookup. This adds overhead even when
> > zswap is not in use or the tree is empty.
> >
> > So instead, do a very lightweight xa_empty() check first; if there is
> > nothing to erase, don't touch the lock or the tree.
Great idea!
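For reference, the check described above amounts to roughly this (a
sketch against the current shape of zswap_invalidate(); helper names as
in mm/zswap.c):

	void zswap_invalidate(swp_entry_t swp)
	{
		pgoff_t offset = swp_offset(swp);
		struct xarray *tree = swap_zswap_tree(swp);
		struct zswap_entry *entry;

		/* Lightweight: skip the lock if the tree has no entries at all */
		if (xa_empty(tree))
			return;

		entry = xa_erase(tree, offset);
		if (entry)
			zswap_entry_free(entry);
	}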
> XA_STATE(xas, ..);
>
> rcu_read_lock();
> entry = xas_load(&xas);
> if (entry) {
>         xas_lock(&xas);
>         WARN_ON_ONCE(xas_reload(&xas) != entry);
>         xas_store(&xas, NULL);
>         xas_unlock(&xas);
> }
> rcu_read_unlock();
This does the optimization more reliably, and I think we should go
with this version.
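Folded into zswap_invalidate(), that would look roughly like this (a
sketch, untested; helper names again taken from mm/zswap.c):

	void zswap_invalidate(swp_entry_t swp)
	{
		pgoff_t offset = swp_offset(swp);
		struct xarray *tree = swap_zswap_tree(swp);
		struct zswap_entry *entry;
		XA_STATE(xas, tree, offset);

		rcu_read_lock();
		/* Peek locklessly; most entries have no zswap copy */
		entry = xas_load(&xas);
		if (entry) {
			/* Take the lock only when there is something to erase */
			xas_lock(&xas);
			WARN_ON_ONCE(xas_reload(&xas) != entry);
			xas_store(&xas, NULL);
			xas_unlock(&xas);
		}
		rcu_read_unlock();

		if (entry)
			zswap_entry_free(entry);
	}

This takes the lock per entry only when that entry actually exists in
the tree, instead of gating all invalidations on one tree-wide check.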
First, swapcache is size-targeted to 50% of total swap capacity (see
vm_swap_full()), and swap is rarely full. Second, entries in swapcache
don't hold on to zswap copies. In combination, this means that after
pressure spikes we routinely end up with many swapcache entries and
only a few zswap entries. Those few entries keep the tree non-empty, so
a tree-wide xa_empty() check would be defeated while invalidating the
many swapcache entries that were never written to zswap.
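(The 50% figure comes from vm_swap_full(); from memory, the check in
include/linux/swap.h is roughly:

	/* Swap 50% full? Release swapcache more aggressively.. */
	#define vm_swap_full() (get_nr_swap_pages() * 2 < total_swap_pages)

i.e. swapcache is only trimmed aggressively once more than half of swap
is in use.)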
So checking on a per-entry basis makes a lot of sense.