Message-ID: <CAJD7tkYhOphYbNnwkZfJykii7kAR6PRvZ0pv7R=zhG0vCjxh4A@mail.gmail.com>
Date: Thu, 12 Sep 2024 11:50:26 -0700
From: Yosry Ahmed <yosryahmed@...gle.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: Jesper Dangaard Brouer <hawk@...nel.org>, tj@...nel.org, cgroups@...r.kernel.org, 
	shakeel.butt@...ux.dev, hannes@...xchg.org, lizefan.x@...edance.com, 
	longman@...hat.com, kernel-team@...udflare.com, linux-mm@...ck.org, 
	linux-kernel@...r.kernel.org, mfleming@...udflare.com, 
	joshua.hahnjy@...il.com
Subject: Re: [PATCH V10] cgroup/rstat: Avoid flushing if there is an ongoing
 root flush

On Thu, Sep 12, 2024 at 11:25 AM Nhat Pham <nphamcs@...il.com> wrote:
>
> On Thu, Sep 12, 2024 at 10:28 AM Yosry Ahmed <yosryahmed@...gle.com> wrote:
> >
> > >
> > > I'm not, but Joshua from my team is working on it :)
> >
> > Great, thanks for letting me know!
>
> FWIW, I think the zswap_shrinker_count() path is fairly trivial to
> take care of :)  We only need the stats themselves, and you don't even
> need any tree traversal tbh - technically it is most accurate to track
> the zswap memory usage of the memcg itself - one atomic counter per
> zswap_lruvec_struct should suffice.

Do you mean per-lruvec or per-memcg?

>
> obj_cgroup_may_zswap() could be more troublesome - we need the entire
> subtree data to make the decision, at each level :) How about this:
>
> 1. Add a per-memcg counter to track zswap memory usage.
>
> 2. At obj_cgroup_may_zswap() time, the logic is unchanged - we
> traverse the tree from the current memcg to the root memcg, grabbing
> each memcg's counter and checking its usage.
>
> 3. At obj_cgroup_charge_zswap() time, we have to perform another
> upward traversal, to increment the counters. Would this be too
> expensive?
>
> We still need the whole obj_cgroup charging spiel, for memory usage
> purposes, but this should allow us to remove the MEMCG_ZSWAP_B.
> Similarly, another set of counters can be introduced to remove
> MEMCG_ZSWAPPED...
>
> Yosry, Joshua, how do you feel about this design? Step 3 is the part
> where I'm least certain about, but it's the only way I can think of
> that would avoid any flushing action. You have to pay the price of
> stat updates at *some* point :)

In (2) obj_cgroup_may_zswap(), the upward traversal should get cheaper
because we avoid the stats flush; we just read an atomic counter at
each level instead.
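A minimal userspace sketch of what step (2) could look like, assuming
hypothetical names (struct memcg, zswap_usage, zswap_max are
illustrative stand-ins, not the actual kernel structures):

```c
/* Sketch of step (2): walk from the current memcg up to the root,
 * reading a per-memcg atomic counter instead of flushing stats.
 * All names here are hypothetical, for illustration only. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct memcg {
	struct memcg *parent;	/* NULL at the root */
	atomic_long zswap_usage;	/* zswap bytes charged to this subtree */
	long zswap_max;		/* zswap limit for this memcg */
};

/* Analogue of obj_cgroup_may_zswap(): check the limit at every level
 * of the hierarchy; no stats flush needed, just atomic reads. */
static bool may_zswap(struct memcg *memcg)
{
	for (; memcg; memcg = memcg->parent)
		if (atomic_load(&memcg->zswap_usage) >= memcg->zswap_max)
			return false;
	return true;
}
```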

In (3) obj_cgroup_charge_zswap(), we will do an upward traversal and an
atomic update at each level. In a lot of cases this can be cheaper than
the flush we avoid, but we'd need to measure it with different
hierarchies to be sure. Keep in mind that if consume_obj_stock() is not
successful and we fall back to obj_cgroup_charge_pages(), we already do
an upward traversal. So it may be just fine to do the upward traversal
here as well.
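Continuing the same hypothetical sketch, the charge path in step (3)
would be one upward pass incrementing each level's counter, with
uncharge mirroring it (again, names are illustrative, not the real
kernel API):

```c
/* Sketch of step (3): propagate a zswap charge up the hierarchy with
 * one atomic add per level; uncharge is the mirror image. Names are
 * hypothetical stand-ins for illustration. */
#include <stdatomic.h>
#include <stddef.h>

struct memcg {
	struct memcg *parent;	/* NULL at the root */
	atomic_long zswap_usage;	/* zswap bytes charged to this subtree */
};

/* Analogue of the counter update in obj_cgroup_charge_zswap(). */
static void charge_zswap(struct memcg *memcg, long nr_bytes)
{
	for (; memcg; memcg = memcg->parent)
		atomic_fetch_add(&memcg->zswap_usage, nr_bytes);
}

/* Mirror of the above for the uncharge (zswap writeback/free) path. */
static void uncharge_zswap(struct memcg *memcg, long nr_bytes)
{
	for (; memcg; memcg = memcg->parent)
		atomic_fetch_sub(&memcg->zswap_usage, nr_bytes);
}
```

The cost is one atomic RMW per ancestor per charge, which is what the
"would this be too expensive?" question above is really asking about.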

So I think the plan sounds good. We just need some perf testing to
make sure (3) does not introduce regressions.
