Message-ID: <Ze_UUeajWWkKpZJ0@google.com>
Date: Tue, 12 Mar 2024 04:04:33 +0000
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Nhat Pham <nphamcs@...il.com>,
Chengming Zhou <zhouchengming@...edance.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: zswap: optimize zswap pool size tracking
On Mon, Mar 11, 2024 at 10:34:11PM -0400, Johannes Weiner wrote:
> On Mon, Mar 11, 2024 at 10:09:35PM +0000, Yosry Ahmed wrote:
> > On Mon, Mar 11, 2024 at 12:12:13PM -0400, Johannes Weiner wrote:
> > > Profiling the munmap() of a zswapped memory region shows 50%(!) of the
> > > total cycles currently going into updating the zswap_pool_total_size.
> >
> > Yikes. I have always hated that size update scheme FWIW.
> >
> > I have also wondered whether it makes sense to just maintain the number
> > of pages in zswap as an atomic, like zswap_stored_pages. I guess your
> > proposed scheme is even cheaper for the load/invalidate paths because we
> > do nothing at all. It could be an option if the aggregation in other
> > paths ever becomes a problem, but we would need to make sure it
> > doesn't regress the load/invalidate paths. Just sharing some thoughts.
>
> Agree with you there. I actually tried doing it that way at first, but
> noticed zram uses zs_get_total_pages() and actually wants a per-pool
> count. I didn't want the backend to have to update two atomics, so I
> settled for this version.
Could be useful to document this context if you send a v2. This version
is a big improvement anyway, so hopefully we don't need to revisit.
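
In case it helps future readers, the alternative I had in mind is roughly
the sketch below. zswap_pool_pages is a made-up name; only
zswap_stored_pages exists today:

/* Single global page count, analogous to zswap_stored_pages */
static atomic_long_t zswap_pool_pages = ATOMIC_LONG_INIT(0);

/* store path, once an entry lands in the pool */
atomic_long_inc(&zswap_pool_pages);

/* exclusive load / invalidate paths, when the entry is freed */
atomic_long_dec(&zswap_pool_pages);

With zram wanting a per-pool count, though, the backend would end up
updating two atomics on every store/free, which this sketch conveniently
ignores.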
>
> > > There are three consumers of this counter:
> > > - store, to enforce the globally configured pool limit
> > > - meminfo & debugfs, to report the size to the user
> > > - shrink, to determine the batch size for each cycle
> > >
> > > Instead of aggregating every time an entry enters or exits the zswap
> > > pool, aggregate the value from the zpools on-demand:
> > >
> > > - Stores aggregate the counter anyway upon success. Aggregating to
> > > check the limit instead is the same amount of work.
> > >
> > > - Meminfo & debugfs might benefit somewhat from a pre-aggregated
> > > counter, but aren't exactly hotpaths.
> > >
> > > - Shrinking can aggregate once for every cycle instead of doing it for
> > > every freed entry. As the shrinker might work on tens or hundreds of
> > > objects per scan cycle, this is a large reduction in aggregations.
> > >
> > > The paths that benefit dramatically are swapin, swapoff, and
> > > unmaps. There could be millions of pages being processed until
> > > somebody asks for the pool size again. This eliminates the pool size
> > > updates from those paths entirely.
> >
> > This looks like a big win, thanks! I wonder if you have any numbers or
> > perf profiles to share. That would be nice to have, but I think the
> > benefit is clear regardless.
>
> I deleted the perf files already, but can re-run it tomorrow.
Thanks!
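
FWIW, my mental model of the on-demand aggregation is the sketch below;
zpool_get_total_pages() is an assumed per-zpool accessor here, I haven't
checked what the patch actually names it:

static unsigned long zswap_total_pages(void)
{
	struct zswap_pool *pool;
	unsigned long total = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(pool, &zswap_pools, list) {
		int i;

		/* sum the per-zpool page counts only when asked */
		for (i = 0; i < ZSWAP_NR_ZPOOLS; i++)
			total += zpool_get_total_pages(pool->zpools[i]);
	}
	rcu_read_unlock();

	return total;
}

The store path can then check zswap_total_pages() against
zswap_max_pages() instead of maintaining a running total, and the other
consumers pay for the walk only when they actually read the size.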
>
> > I also like the implicit cleanup when we switch to maintaining the
> > number of pages rather than bytes. The code looks much better with all
> > the shifts and divisions gone :)
> >
> > I have a couple of comments below. With them addressed, feel free to
> > add:
> > Acked-by: Yosry Ahmed <yosryahmed@...gle.com>
>
> Thanks!
>
> > > @@ -1385,6 +1365,10 @@ static void shrink_worker(struct work_struct *w)
> > > {
> > > struct mem_cgroup *memcg;
> > > int ret, failures = 0;
> > > + unsigned long thr;
> > > +
> > > + /* Reclaim down to the accept threshold */
> > > + thr = zswap_max_pages() * zswap_accept_thr_percent / 100;
> >
> > This calculation is repeated twice, so I'd rather keep a helper for it
> > as an alternative to zswap_can_accept(). Perhaps zswap_threshold_page()
> > or zswap_acceptance_pages()?
>
> Sounds good. I went with zswap_accept_thr_pages().
Even better.
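
So presumably both call sites collapse into something like the following
(a sketch, not your actual diff):

static unsigned long zswap_accept_thr_pages(void)
{
	return zswap_max_pages() * zswap_accept_thr_percent / 100;
}

with shrink_worker() doing:

	/* Reclaim down to the accept threshold */
	thr = zswap_accept_thr_pages();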
>
> > > @@ -1711,6 +1700,13 @@ void zswap_swapoff(int type)
> > >
> > > static struct dentry *zswap_debugfs_root;
> > >
> > > +static int debugfs_get_total_size(void *data, u64 *val)
> > > +{
> > > + *val = zswap_total_pages() * PAGE_SIZE;
> > > + return 0;
> > > +}
> > > +DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu");
> >
> > I think we are missing a newline here to maintain the current format
> > (i.e "%llu\n").
>
> Oops, good catch! I had verified the debugfs file (along with the
> others) with 'grep . *', which hides that this is missing. Fixed up.
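
Right, with the newline restored the line becomes:

DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu\n");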
>
> Thanks for taking a look. The incremental diff is below. I'll run the
> tests and recapture the numbers tomorrow, then send v2.
LGTM. Feel free to carry the Ack forward.