Message-ID: <CAJD7tkaC5eTe8Os6f0VsGx9o06YE8zX1r0R0PPMvExRfcGhPgg@mail.gmail.com>
Date: Thu, 21 Mar 2024 16:57:27 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Johannes Weiner <hannes@...xchg.org>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] mm: zswap: remove nr_zswap_stored atomic

On Thu, Mar 21, 2024 at 4:50 PM Nhat Pham <nphamcs@...il.com> wrote:
>
> On Thu, Mar 21, 2024 at 2:09 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> >
> > On Tue, Mar 19, 2024 at 7:08 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> > >
> > > zswap_nr_stored is used to maintain the number of stored pages in zswap
> > > that are not same-filled pages. It is used in zswap_shrinker_count() to
> > > scale the number of freeable compressed pages by the compression ratio.
> > > That is, to reduce the amount of writeback from zswap at higher
> > > compression ratios, where the ROI from the IO diminishes.
> > >
> > > However, the need for this counter is questionable due to two reasons:
> > > - It is redundant. The value can be inferred from (zswap_stored_pages -
> > >   zswap_same_filled_pages).
>
> Ah, I forgot about this. For context, nr_stored was originally a
> zswap_pool-specific stat, but I think Chengming pulled it out and
> converted it into a global pool stat in an earlier patch - yet
> globally we already have zswap_stored_pages, which is (mostly) the
> same counter.

Thanks for the context.
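
For anyone skimming the thread, the scaling in question looks roughly
like this. This is a simplified userspace sketch with made-up numbers,
not the actual kernel code; the local mult_frac() mirrors the shape of
the kernel macro of the same name, and per the redundancy point above,
nr_stored could equally be derived as
zswap_stored_pages - zswap_same_filled_pages:

  /* Sketch only: shrinker-count-style scaling of freeable objects
   * by the inverse compression ratio (nr_backing / nr_stored). */
  #include <stdio.h>

  static unsigned long mult_frac(unsigned long x, unsigned long num,
                                 unsigned long den)
  {
          /* same shape as the kernel's mult_frac(): x * num / den
           * while limiting intermediate overflow */
          unsigned long quot = x / den;
          unsigned long rem  = x % den;

          return quot * num + (rem * num) / den;
  }

  int main(void)
  {
          unsigned long nr_freeable = 1000; /* objects on the zswap LRU */
          unsigned long nr_backing  = 250;  /* compressed memory, in pages */
          unsigned long nr_stored   = 1000; /* stored pages */

          /* At a 4:1 compression ratio only ~250 of the 1000 objects
           * are reported freeable, deprioritizing writeback where the
           * IO pays off least. */
          printf("scaled freeable: %lu\n",
                 mult_frac(nr_freeable, nr_backing, nr_stored));
          return 0;
  }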

>
> Might as well use the existing counter (zswap_stored_pages) then,
> rather than a newly introduced one. Dropping the extra atomic
> increment/decrement will probably also shave off a couple of cycles
> here and there :)
>
> > > - When memcgs are enabled, we use memcg_page_state(memcg,
> > >   MEMCG_ZSWAPPED), which includes same-filled pages anyway (i.e.
> > >   equivalent to zswap_stored_pages).
>
> This is fine, I suppose. I was aware of this weird inaccuracy.
> However, for the CONFIG_MEMCG case it seemed kinda silly to introduce
> a counter for per-cgroup same-filled zswap pages just for this one
> purpose, so I decided to accept the inaccuracy.
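
To spell out that inconsistency with some numbers of my own (purely
illustrative): suppose 1000 pages are stored and 500 of them are
same-filled.

  without memcgs (before this patch): nr_stored = nr_zswap_stored = 500
  with memcgs:                        nr_stored = MEMCG_ZSWAPPED  = 1000

So the two configurations scaled nr_freeable differently; after the
patch both sides use the 1000 from zswap_stored_pages.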
>
> > >
> > > Use zswap_stored_pages instead in zswap_shrinker_count() to keep things
> > > consistent whether memcgs are enabled or not, and add a comment noting
> > > that the number of freeable pages may be scaled down more than it
> > > should be when there are lots of same-filled pages (i.e. an inflated
> > > compression ratio).
> > >
> > > Remove nr_zswap_stored and one atomic operation in the store and free
> > > paths.
> > >
> > > Signed-off-by: Yosry Ahmed <yosryahmed@...gle.com>
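
To make the caveat in that new comment concrete (again, numbers are
mine and purely illustrative): suppose 1000 pages are stored, 500 of
them same-filled, and the compressible half compresses 4:1.

  nr_stored  = 1000                 (same-filled pages included)
  nr_backing = 500 / 4 = 125 pages  (same-filled pages use ~no backing memory)

  reported scale factor: nr_backing / nr_stored = 125 / 1000 = 12.5%
  "true" scale factor:   125 / 500                           = 25%

The apparent compression ratio is 8:1 instead of the real 4:1, so
nr_freeable is scaled down twice as hard as the actual ratio warrants.
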
> >
> > Any thoughts on this patch? Should I resend it separately?
>
> Might be worth resending it separately, but up to you and Andrew!

I will resend it to add some context and include your R-b, thanks.

>
> Reviewed-by: Nhat Pham <nphamcs@...il.com>
