Message-ID: <5psrsuvzabh2gwj7lmf6p2swgw4d4svi2zqr4p6bmmfjodspcw@fexbskbtchs7>
Date: Wed, 14 Aug 2024 16:42:31 -0700
From: Shakeel Butt <shakeel.butt@...ux.dev>
To: Nhat Pham <nphamcs@...il.com>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, 
	Yosry Ahmed <yosryahmed@...gle.com>, Andrew Morton <akpm@...ux-foundation.org>, 
	Johannes Weiner <hannes@...xchg.org>, Michal Hocko <mhocko@...nel.org>, 
	Roman Gushchin <roman.gushchin@...ux.dev>, Muchun Song <muchun.song@...ux.dev>, Yu Zhao <yuzhao@...gle.com>, 
	linux-mm@...ck.org, linux-kernel@...r.kernel.org, 
	Meta kernel team <kernel-team@...a.com>, cgroups@...r.kernel.org
Subject: Re: [PATCH v2] memcg: use ratelimited stats flush in the reclaim

On Wed, Aug 14, 2024 at 04:03:13PM GMT, Nhat Pham wrote:
> On Wed, Aug 14, 2024 at 9:32 AM Shakeel Butt <shakeel.butt@...ux.dev> wrote:
> >
> >
> > Ccing Nhat
> >
> > On Wed, Aug 14, 2024 at 02:57:38PM GMT, Jesper Dangaard Brouer wrote:
> > > I suspect the next whack-a-mole will be the rstat flush for the slab code
> > > that kswapd also activates via shrink_slab, which, via
> > > shrinker->count_objects(), invokes count_shadow_nodes().
> > >
> >
> > Actually count_shadow_nodes() is already using the ratelimited version.
> > However, zswap_shrinker_count() is still using the sync version. Nhat is
> > modifying this code at the moment, and we can ask whether we really need
> > the most accurate values for MEMCG_ZSWAP_B and MEMCG_ZSWAPPED for the
> > zswap writeback heuristic.
> 
> You are referring to this, correct:
> 
> mem_cgroup_flush_stats(memcg);
> nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> 
> It's already a bit less-than-accurate - as you pointed out in another
> discussion, it takes into account the objects and sizes of the entire
> subtree, rather than just the ones charged to the current (memcg,
> node) combo. Feel free to optimize this away!
> 
> In fact, I should probably replace this with another (atomic?) counter
> in the zswap_lruvec_state struct, which tracks the post-compression size.
> That way, we'll have a better estimate of the compression factor -
> total post-compression size / (length of LRU * page size) - and
> perhaps avoid the whole stat flushing path altogether...
> 

That sounds like a much better solution than relying on rstat for
accurate stats.
