Message-ID: <CAMgjq7D9LJgsg4RT640=3E7KMDURbzjt=+RhX_5YX7a2Nk6W+Q@mail.gmail.com>
Date: Sat, 12 Oct 2024 12:48:19 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chengming Zhou <chengming.zhou@...ux.dev>
Cc: Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosryahmed@...gle.com>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>, Nhat Pham <nphamcs@...il.com>,
Chris Li <chrisl@...nel.org>, Barry Song <v-songbaohua@...o.com>,
"Huang, Ying" <ying.huang@...el.com>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] mm/zswap: avoid touching XArray for unnecessary invalidation
On Sat, Oct 12, 2024 at 11:33 AM Chengming Zhou
<chengming.zhou@...ux.dev> wrote:
>
> On 2024/10/12 11:04, Kairui Song wrote:
> > Johannes Weiner <hannes@...xchg.org> 于 2024年10月12日周六 02:28写道:
> >>
> >> On Fri, Oct 11, 2024 at 10:53:31AM -0700, Yosry Ahmed wrote:
> >>> On Fri, Oct 11, 2024 at 10:20 AM Kairui Song <ryncsn@...il.com> wrote:
> >>>>
> >>>> From: Kairui Song <kasong@...cent.com>
> >>>>
> >>>> zswap_invalidate() simply calls xa_erase(), which acquires the
> >>>> XArray lock first and then does a lookup. This has higher overhead
> >>>> even if zswap is not used or the tree is empty.
> >>>>
> >>>> So instead, do a very lightweight xa_empty() check first: if there
> >>>> is nothing to erase, don't touch the lock or the tree.
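
For context, the change boils down to an early xa_empty() bail-out before
the locked erase. A rough sketch using the helper names from current
mm/zswap.c (from memory, illustrative only, not the exact diff):

	void zswap_invalidate(swp_entry_t swp)
	{
		pgoff_t offset = swp_offset(swp);
		struct xarray *tree = swap_zswap_tree(swp);
		struct zswap_entry *entry;

		/*
		 * Lock-free emptiness check: if nothing is stored,
		 * skip the xa_lock and the lookup entirely.
		 */
		if (xa_empty(tree))
			return;

		entry = xa_erase(tree, offset);
		if (entry)
			zswap_entry_free(entry);
	}
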
> >>
> >> Great idea!
> >>
> >>> XA_STATE(xas, ..);
> >>>
> >>> rcu_read_lock();
> >>> entry = xas_load(&xas);
> >>> if (entry) {
> >>>         xas_lock(&xas);
> >>>         WARN_ON_ONCE(xas_reload(&xas) != entry);
> >>>         xas_store(&xas, NULL);
> >>>         xas_unlock(&xas);
> >>> }
> >>> rcu_read_unlock();
> >>
> >> This does the optimization more reliably, and I think we should go
> >> with this version.
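
For completeness, here is roughly how that suggestion could be folded
into zswap_invalidate(); again just a sketch, assuming the tree/offset
lookup helpers and zswap_entry_free() as they exist in mm/zswap.c:

	void zswap_invalidate(swp_entry_t swp)
	{
		pgoff_t offset = swp_offset(swp);
		struct xarray *tree = swap_zswap_tree(swp);
		XA_STATE(xas, tree, offset);
		struct zswap_entry *entry;

		/* Peek under RCU; only take the xa_lock if an entry exists. */
		rcu_read_lock();
		entry = xas_load(&xas);
		if (entry) {
			xas_lock(&xas);
			/* Nobody else should invalidate this offset concurrently. */
			WARN_ON_ONCE(xas_reload(&xas) != entry);
			xas_store(&xas, NULL);
			xas_unlock(&xas);
		}
		rcu_read_unlock();

		/* The entry was removed under the lock, so it is ours to free. */
		if (entry)
			zswap_entry_free(entry);
	}
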
> >
> > Hi Yosry and Johannes,
> >
> > This is a good idea. But xa_empty() is much lighter weight: it's just
> > an inlined (== NULL) check, so it's unsurprising that it performs
> > better than xas_load().
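
For reference, xa_empty() in include/linux/xarray.h is (if I recall the
definition correctly) nothing more than a static inline pointer check,
with no locking and no node walk:

	static inline bool xa_empty(const struct xarray *xa)
	{
		/* An XArray with no entries has a NULL head pointer. */
		return xa->xa_head == NULL;
	}
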
> >
> > And surprisingly it's faster than zswap_never_enabled. So I think it
>
> Do you have CONFIG_ZSWAP_DEFAULT_ON enabled? In your case, the CPU will
> go to the unlikely branch of the static_key every time, which may be
> the cause.
No, it's off by default. Maybe it's just noise; the performance
difference is very tiny.
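
FWIW, the static_key point is about which side becomes the straight-line
code: zswap_never_enabled() is built on static_branch_maybe(), which
compiles to static_branch_likely() when CONFIG_ZSWAP_DEFAULT_ON=y and
static_branch_unlikely() otherwise. A rough sketch of the pattern (key
and helper names reproduced from memory, so treat it as illustrative):

	/* Sketch of the pattern in mm/zswap.c, not a verbatim copy. */
	static DEFINE_STATIC_KEY_MAYBE(CONFIG_ZSWAP_DEFAULT_ON, zswap_ever_enabled);

	bool zswap_never_enabled(void)
	{
		/*
		 * The config option decides which side is out of line,
		 * independent of whether zswap was ever enabled at runtime,
		 * so a config/runtime mismatch means taking the jump on
		 * every call.
		 */
		return !static_branch_maybe(CONFIG_ZSWAP_DEFAULT_ON,
					    &zswap_ever_enabled);
	}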