Message-ID: <CAKEwX=OdeFCPNFzwQsGTsMV-+JB8dfTSbEff_ztENZ-8gwdnJQ@mail.gmail.com>
Date: Mon, 18 Dec 2023 11:21:24 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Yosry Ahmed <yosryahmed@...gle.com>, akpm@...ux-foundation.org, tj@...nel.org,
lizefan.x@...edance.com, cerasuolodomenico@...il.com, sjenning@...hat.com,
ddstreet@...e.org, vitaly.wool@...sulko.com, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeelb@...gle.com, muchun.song@...ux.dev,
hughd@...gle.com, corbet@....net, konrad.wilk@...cle.com,
senozhatsky@...omium.org, rppt@...nel.org, linux-mm@...ck.org,
kernel-team@...a.com, linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
david@...t.cz, chrisl@...nel.org
Subject: Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling
On Mon, Dec 18, 2023 at 6:44 AM Johannes Weiner <hannes@...xchg.org> wrote:
>
> On Fri, Dec 15, 2023 at 01:21:57PM -0800, Yosry Ahmed wrote:
> > On Thu, Dec 7, 2023 at 11:24 AM Nhat Pham <nphamcs@...il.com> wrote:
> > >
> > > During our experiments with zswap, we sometimes observe swap IOs due to
> > > occasional zswap store failures and writebacks-to-swap. These swapping
> > > IOs prevent many users who cannot tolerate swapping from adopting zswap
> > > to save memory and improve performance where possible.
> > >
> > > This patch adds the option to disable this behavior entirely: do not
> > > write back to the backing swap device when a zswap store attempt fails,
> > > and do not write pages in the zswap pool back to the backing swap
> > > device (both when the pool is full, and when the new zswap shrinker is
> > > called).
> > >
> > > This new behavior can be opted in/out of on a per-cgroup basis via a new
> > > cgroup file. By default, writeback to the swap device is enabled, which
> > > was the previous behavior. Initially, writeback is enabled for the root
> > > cgroup, and a newly created cgroup will inherit the current setting of
> > > its parent.
> > >
> > > Note that this is subtly different from setting memory.swap.max to 0, as
> > > it still allows for pages to be stored in the zswap pool (which itself
> > > consumes swap space in its current form).
> > >
> > > This patch should be applied on top of the zswap shrinker series:
> > >
> > > https://lore.kernel.org/linux-mm/20231130194023.4102148-1-nphamcs@gmail.com/
> > >
> > > as it also disables the zswap shrinker, a major source of zswap
> > > writebacks.
> > >
> > > Suggested-by: Johannes Weiner <hannes@...xchg.org>
> > > Signed-off-by: Nhat Pham <nphamcs@...il.com>
> > > Reviewed-by: Yosry Ahmed <yosryahmed@...gle.com>
> >
> > Taking a step back from all the memory.swap.tiers vs.
> > memory.zswap.writeback discussions, I think there may be a more
> > fundamental problem here. If the zswap store failure is recurrent,
> > pages can keep going back to the LRUs and then be sent back to zswap
> > eventually, only to be rejected again. For example, this can happen
> > if zswap is above the acceptance threshold, but it could be even worse
> > if it's the allocator rejecting the page due to not compressing well
> > enough. In the latter case, the page can keep going back and forth
> > between zswap and LRUs indefinitely.
> >
> > You probably did not run into this as you're using zsmalloc, but it
Which is why I recommend that everyone use zsmalloc, and that we change
the default allocator to it in Kconfig :)
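For the record, that would just be a .config fragment along these
lines (a sketch using the existing Kconfig symbols, not an actual
patch):

  # Make zsmalloc the default zswap allocator
  CONFIG_ZSMALLOC=y
  CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC=y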
But tongue-in-cheek aside, I think this is fine. As you noted below,
we probably want to try again on that page (for instance, in case its
content has changed and is now more compressible). And as Johannes has
explained, we'll only look at it again once we have scanned everything
else. This sounds acceptable to me.
Now if all of the intermediate pages are unstorable as well, then we
have a problem, but that seems unlikely, and perhaps is an indication
that we need to do something else entirely (if the workload is *that*
incompressible, perhaps it is better to just disable zswap for it
entirely - see the sketch of the per-cgroup interface below).
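As an aside, fully disabling zswap for a cgroup is already possible by
writing 0 to memory.zswap.max; the new knob is narrower and only turns
off writeback. A minimal userspace sketch of flipping it - "mygroup"
is a made-up cgroup path, and this assumes the file lands with the
proposed name and 0/1 semantics:

/* Sketch only: opt a cgroup out of zswap writeback via the proposed
 * memory.zswap.writeback file. Writing "0" disables writeback, "1"
 * (the default) enables it. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/fs/cgroup/mygroup/memory.zswap.writeback", "w");

	if (!f)
		return 1;
	fputs("0", f);
	return fclose(f) ? 1 : 0;
}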
> > can happen with zbud AFAICT. Even with zsmalloc, a less problematic
> > version can happen if zswap is above its acceptance threshold.
> >
> > This can cause thrashing and ineffective reclaim. We have an internal
> > implementation where we mark incompressible pages and put them on the
> > unevictable LRU when we don't have a backing swapfile (i.e. ghost
> > swapfiles), and something similar may work if writeback is disabled.
> > We need to scan such incompressible pages periodically though to
> > remove them from the unevictable LRU if they have been dirtied.
>
> I'm not sure this is an actual problem.
>
> When pages get rejected, they rotate to the furthest point from the
> reclaimer - the head of the active list. We only get to them again
> after we scanned everything else.
Agree. That is the reason why we rotate the LRU here - to avoid
touching the rejected page again until we have tried the other pages
(see the toy model below).
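Just to make the mechanics concrete, here is a toy userspace model of
that rotation - not the kernel's LRU code, just an array standing in
for the list:

/* Toy model: reclaim scans from index 0; a page rejected by zswap is
 * rotated to the far end, so every other page gets tried before we
 * look at it again. */
#include <stdio.h>

#define NR 5

static void rotate_to_far_end(int lru[], int idx)
{
	int page = lru[idx];

	for (int i = idx; i < NR - 1; i++)
		lru[i] = lru[i + 1];
	lru[NR - 1] = page;
}

int main(void)
{
	int lru[NR] = { 1, 2, 3, 4, 5 };

	rotate_to_far_end(lru, 0);	/* page 1 is rejected */
	for (int i = 0; i < NR; i++)
		printf("%d ", lru[i]);	/* prints: 2 3 4 5 1 */
	printf("\n");
	return 0;
}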
>
> If all that's left on the LRU is unzswappable, then you'd assume that
> remainder isn't very large, and thus not a significant part of overall
> scan work. Because if it is, then there is a serious problem with the
> zswap configuration.
Agree.
>
> There might be possible optimizations to determine how permanent a
> rejection is, but I'm not sure the effort is called for just
> yet. Rejections are already failure cases that screw up the LRU
> ordering, and healthy setups shouldn't have a lot of those. I don't
> think this patch adds any sort of new complications to this picture.
Yep. This is one of the reasons (among many) why we were toying around
with storing uncompressed pages in zswap - it's one of the failure
cases where trying again (if the page's content has not changed) isn't
likely to yield a different result, so we might as well retain the
overall LRU ordering and squeeze the page into zswap (though, as
discussed, that has quite a few implications we would need to deal
with). A rough sketch of the idea follows.
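/* Toy sketch of the "store incompressible pages uncompressed" idea;
 * try_compress() is a stub standing in for the real compressor, and
 * none of these names come from the actual zswap code. */
#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE 4096

static size_t try_compress(const char *page)
{
	(void)page;
	return PAGE_SIZE;	/* stub: compression did not help */
}

int main(void)
{
	char page[PAGE_SIZE] = { 0 };
	size_t dlen = try_compress(page);

	if (dlen >= PAGE_SIZE)
		/* Rejecting here would only rotate the page forever;
		 * instead, store it as-is and mark the entry as
		 * uncompressed so loads can skip decompression. */
		printf("store uncompressed (%zu >= %d)\n", dlen, PAGE_SIZE);
	else
		printf("store compressed (%zu bytes)\n", dlen);
	return 0;
}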