Message-ID: <CAKEwX=NX5T1AL6jXuW0oonW_GtPOos+oXdWGAE3hxdWQyavBPA@mail.gmail.com>
Date: Thu, 7 Dec 2023 10:15:32 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Chengming Zhou <zhouchengming@...edance.com>
Cc: Vitaly Wool <vitaly.wool@...sulko.com>,
Johannes Weiner <hannes@...xchg.org>,
Michal Hocko <mhocko@...nel.org>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Yosry Ahmed <yosryahmed@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] mm/zswap: optimize the scalability of zswap rb-tree
On Thu, Dec 7, 2023 at 7:18 AM Chengming Zhou
<zhouchengming@...edance.com> wrote:
>
> On 2023/12/7 11:13, Chengming Zhou wrote:
> > On 2023/12/7 04:08, Nhat Pham wrote:
> >> On Wed, Dec 6, 2023 at 1:46 AM Chengming Zhou
> >> <zhouchengming@...edance.com> wrote:
> >>> When testing zswap performance with a kernel build (-j32) in a tmpfs
> >>> directory, I found that the zswap rb-tree does not scale well: it is
> >>> protected by a single spinlock, which causes heavy lock contention when
> >>> multiple tasks call zswap_store()/zswap_load() concurrently.
> >>>
> >>> So a simple solution is to split the single zswap rb-tree into multiple
> >>> rb-trees, each corresponding to one SWAP_ADDRESS_SPACE_PAGES (64MB)
> >>> range. This idea comes from commit 4b3ef9daa4fc ("mm/swap: split swap
> >>> cache into 64MB trunks").
> >>>
> >>> Although this method can't solve the spinlock contention completely, it
> >>> can mitigate much of that contention.
> >>
> >> By how much? Do you have any stats to estimate the amount of
> >> contention and the reduction from this patchset?
> >
> > Actually, I ran some tests yesterday using linux-next 20231205.
> >
> > Testcase: memory.max = 2G, zswap enabled, make -j32 in tmpfs.
> >
> >                         20231205   +patchset
> > 1. !shrinker_enabled:       156s        126s
> > 2. shrinker_enabled:         79s         70s
> >
> > I think your zswap shrinker fix patch can solve the !shrinker_enabled case.
> >
> > So I will test again today using the new mm-unstable branch.
> >
>
> Updated test data based on today's mm-unstable branch:
>
>                         mm-unstable   +patchset
> 1. !shrinker_enabled:           86s         74s
> 2. shrinker_enabled:            63s         61s
>
> This shows much less improvement for the shrinker_enabled case, but still
> a big improvement for the !shrinker_enabled case.
>
> Thanks!
I'm gonna assume this is build time since it makes the zswap shrinker
look pretty good :)
I think this just means that some of the gains from this patchset and
the zswap shrinker overlap. But on a positive note:
a) The two are complementary, i.e. enabling both (bottom right corner)
gives us the best result.
b) Each individual change improves the runtime. If you disable the
shrinker, then this patch helps tremendously, so we're onto something.
c) The !shrinker_enabled case is no longer *too* bad - once again, thanks
for noticing the regression and helping me fix it! In fact, every cell
improves compared to the last run. Woohoo!
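
For anyone skimming: below is a minimal userspace sketch of the per-range
split described in the quoted cover letter, just to illustrate the idea.
The names here (zswap_tree_shard, zswap_trees, zswap_tree_for_offset) are
made up for illustration and are not from the actual patches; only the
SWAP_ADDRESS_SPACE_SHIFT/PAGES granularity (2^14 pages = 64MB with 4K
pages) mirrors the kernel's.

/*
 * Sketch only -- not the kernel patch itself.  One tree + one lock per
 * SWAP_ADDRESS_SPACE_PAGES worth of swap slots, selected by shifting
 * the swap offset.  Build with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define SWAP_ADDRESS_SPACE_SHIFT 14
#define SWAP_ADDRESS_SPACE_PAGES (1UL << SWAP_ADDRESS_SPACE_SHIFT)

struct zswap_tree_shard {               /* hypothetical name */
        void *rbroot;                   /* stand-in for struct rb_root */
        pthread_spinlock_t lock;        /* one lock per 64MB range */
};

struct zswap_trees {
        struct zswap_tree_shard *shards;
        unsigned long nr_shards;
};

/* One shard per 64MB chunk of the swapfile, rounded up. */
static int zswap_trees_init(struct zswap_trees *t, unsigned long nr_pages)
{
        unsigned long i, nr = (nr_pages + SWAP_ADDRESS_SPACE_PAGES - 1)
                                        >> SWAP_ADDRESS_SPACE_SHIFT;

        t->shards = calloc(nr, sizeof(*t->shards));
        if (!t->shards)
                return -1;
        t->nr_shards = nr;
        for (i = 0; i < nr; i++)
                pthread_spin_init(&t->shards[i].lock, PTHREAD_PROCESS_PRIVATE);
        return 0;
}

/*
 * Stores/loads that hit different 64MB ranges take different locks, so
 * concurrent callers only contend when they touch the same range.
 */
static struct zswap_tree_shard *
zswap_tree_for_offset(struct zswap_trees *t, unsigned long swp_offset)
{
        return &t->shards[swp_offset >> SWAP_ADDRESS_SPACE_SHIFT];
}

int main(void)
{
        struct zswap_trees t;

        /* e.g. an 8GB swapfile of 4K pages -> 128 shards */
        if (zswap_trees_init(&t, 8UL << 18))
                return 1;
        printf("%lu shards; offset 100 -> shard %td, offset 20000 -> shard %td\n",
               t.nr_shards,
               zswap_tree_for_offset(&t, 100) - t.shards,
               zswap_tree_for_offset(&t, 20000) - t.shards);
        return 0;
}

The point is simply that zswap_store()/zswap_load() callers working on
different 64MB ranges of the swapfile end up taking different spinlocks,
which is where the contention reduction comes from.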