Message-ID: <20230927203929.GA399644@cmpxchg.org>
Date:   Wed, 27 Sep 2023 16:39:29 -0400
From:   Johannes Weiner <hannes@...xchg.org>
To:     Yosry Ahmed <yosryahmed@...gle.com>
Cc:     Nhat Pham <nphamcs@...il.com>, akpm@...ux-foundation.org,
        cerasuolodomenico@...il.com, sjenning@...hat.com,
        ddstreet@...e.org, vitaly.wool@...sulko.com, mhocko@...nel.org,
        roman.gushchin@...ux.dev, shakeelb@...gle.com,
        muchun.song@...ux.dev, linux-mm@...ck.org, kernel-team@...a.com,
        linux-kernel@...r.kernel.org, cgroups@...r.kernel.org,
        Chris Li <chrisl@...nel.org>
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware

On Tue, Sep 26, 2023 at 11:37:17AM -0700, Yosry Ahmed wrote:
> On Tue, Sep 26, 2023 at 11:24 AM Johannes Weiner <hannes@...xchg.org> wrote:
> >
> > On Mon, Sep 25, 2023 at 01:17:04PM -0700, Yosry Ahmed wrote:
> > > +Chris Li
> > >
> > > On Tue, Sep 19, 2023 at 10:14 AM Nhat Pham <nphamcs@...il.com> wrote:
> > > >
> > > > From: Domenico Cerasuolo <cerasuolodomenico@...il.com>
> > > >
> > > > Currently, we only have a single global LRU for zswap. This makes it
> > > > impossible to perform workload-specific shrinking - a memcg cannot
> > > > determine which pages in the pool it owns, and often ends up writing
> > > > pages from other memcgs. This issue has been previously observed in
> > > > practice and mitigated by simply disabling memcg-initiated shrinking:
> > > >
> > > > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> > > >
> > > > This patch fully resolves the issue by replacing the global zswap LRU
> > > > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> > > >
> > > > a) When a store attempt hits a memcg limit, it now triggers a
> > > >    synchronous reclaim attempt that, if successful, allows the new
> > > >    hotter page to be accepted by zswap.
> > > > b) If the store attempt instead hits the global zswap limit, it will
> > > >    trigger an asynchronous reclaim attempt, in which a memcg is
> > > >    selected for reclaim in a round-robin-like fashion.
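In store-side pseudocode, the flow described above is roughly the
following (a sketch, not the actual diff; shrink_memcg() stands in for
the per-memcg reclaim this series introduces, the rest are existing
zswap helpers):

	if (objcg && !obj_cgroup_may_zswap(objcg)) {
		/* a) over the memcg limit: reclaim synchronously from
		 * this memcg's own LRU, and only accept the new,
		 * hotter page if that made room
		 */
		if (shrink_memcg(memcg))
			goto reject;
	}

	if (zswap_is_full()) {
		/* b) over the global limit: kick the async shrinker,
		 * which picks victim memcgs round-robin
		 */
		queue_work(shrink_wq, &pool->shrink_work);
	}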
> > >
> > > Hey Nhat,
> > >
> > > I didn't take a very close look as I am currently swamped, but going
> > > through the patch I have some comments/questions below.
> > >
> > > I am not very familiar with list_lru, but it seems like the existing
> > > API derives the node and memcg from the list item itself. Seems like
> > > we can avoid a lot of changes if we allocate struct zswap_entry from
> > > the same node as the page, and account it to the same memcg. Would
> > > this be too much of a change or too strong of a restriction? It's a
> > > slab allocation and we will free memory on that node/memcg right
> > > after.
> >
> > My 2c, but I kind of hate that assumption made by list_lru.
> >
> > We ran into problems with it with the THP shrinker as well. That one
> > strings up 'struct page' itself, and calling virt_to_page() on a
> > pointer that is already a struct page results in really fun-to-debug
> > issues.
> >
> > IMO it would be less error prone to have memcg and nid as part of the
> > regular list_lru_add() function signature. And then have an explicit
> > list_lru_add_obj() that does a documented memcg lookup.
> 
> I also didn't like/understand that assumption, but again I don't have
> enough familiarity with the code to judge, and I don't know why it was
> done that way. Adding memcg and nid as arguments to the standard
> list_lru API makes the pill easier to swallow. In any case, this
> should be done in a separate patch to make the diff here more focused
> on zswap changes.

Just like the shrinker, it was initially written for slab objects like
dentries and inodes. Since all the users then passed in charged slab
objects, it ended up hardcoding that assumption in the interface.
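
Concretely, the current entry point derives both the node and the
memcg from the item pointer itself (paraphrasing mm/list_lru.c,
details elided):

	bool list_lru_add(struct list_lru *lru, struct list_head *item)
	{
		/* the charged-slab-object assumption, hardcoded */
		int nid = page_to_nid(virt_to_page(item));
		struct mem_cgroup *memcg = mem_cgroup_from_slab_obj(item);
		...
	}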

Once that assumption no longer holds, IMO we should update the
interface. But agreed, a separate patch makes sense.
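
Something like this split, say (signatures are only a sketch of the
suggestion above, not a worked-out API):

	/* explicit: the caller passes nid and memcg */
	bool list_lru_add(struct list_lru *lru, struct list_head *item,
			  int nid, struct mem_cgroup *memcg);

	/* documented memcg/nid lookup for charged slab objects */
	static inline bool list_lru_add_obj(struct list_lru *lru,
					    struct list_head *item)
	{
		int nid = page_to_nid(virt_to_page(item));
		struct mem_cgroup *memcg = mem_cgroup_from_slab_obj(item);

		return list_lru_add(lru, item, nid, memcg);
	}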

> > Because of the overhead, we've been selective about the memory we
> > charge. I'd hesitate to do it just to work around list_lru.
> 
> On the other hand I am worried about the continuous growth of struct
> zswap_entry. It's now at ~10 words on 64-bit? That's ~2% of the size
> of the page getting compressed if I am not mistaken. So I am skeptical
> about storing the nid there.
> 
> A middle ground would be allocating struct zswap_entry on the correct
> node without charging it. We don't need to store the nid and we don't
> need to charge struct zswap_entry. It doesn't get rid of
> virt_to_page() though.

I like that idea.

The current list_lru_add() would do the virt_to_page() too, so from
that POV it's a lateral move. But we'd save the charging and the extra
nid member.
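
On the store side that would look something like this (a sketch;
zswap_entry_cache is the existing kmem cache, and the point is the
absence of __GFP_ACCOUNT):

	/* allocate on the page's node, but don't charge the memcg */
	struct zswap_entry *entry;

	entry = kmem_cache_alloc_node(zswap_entry_cache, GFP_KERNEL,
				      page_to_nid(page));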
