Message-ID: <CAMgjq7AjO=Z4Wa3DYaOJdWA+8aNQ1JHZQYKYOm5-SvvgPPOGKg@mail.gmail.com>
Date:   Mon, 11 Dec 2023 17:31:05 +0800
From:   Kairui Song <ryncsn@...il.com>
To:     Chris Li <chrisl@...nel.org>
Cc:     Nhat Pham <nphamcs@...il.com>, akpm@...ux-foundation.org,
        tj@...nel.org, lizefan.x@...edance.com, hannes@...xchg.org,
        cerasuolodomenico@...il.com, yosryahmed@...gle.com,
        sjenning@...hat.com, ddstreet@...e.org, vitaly.wool@...sulko.com,
        mhocko@...nel.org, roman.gushchin@...ux.dev, shakeelb@...gle.com,
        muchun.song@...ux.dev, hughd@...gle.com, corbet@....net,
        konrad.wilk@...cle.com, senozhatsky@...omium.org, rppt@...nel.org,
        linux-mm@...ck.org, kernel-team@...a.com,
        linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
        david@...t.cz, Minchan Kim <minchan@...gle.com>,
        Zhongkun He <hezhongkun.hzk@...edance.com>
Subject: Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling

On Sat, Dec 9, 2023 at 7:56 AM Chris Li <chrisl@...nel.org> wrote:
>
> Hi Nhat,
>
> On Thu, Dec 7, 2023 at 5:03 PM Nhat Pham <nphamcs@...il.com> wrote:
> >
> > On Thu, Dec 7, 2023 at 4:19 PM Chris Li <chrisl@...nel.org> wrote:
> > >
> > > Hi Nhat,
> > >
> > >
> > > On Thu, Dec 7, 2023 at 11:24 AM Nhat Pham <nphamcs@...il.com> wrote:
> > > >
> > > > During our experiment with zswap, we sometimes observe swap IOs due to
> > > > occasional zswap store failures and writebacks-to-swap. These swapping
> > > > IOs prevent many users who cannot tolerate swapping from adopting zswap
> > > > to save memory and improve performance where possible.
> > > >
> > > > This patch adds the option to disable this behavior entirely: do not
> > > > write back to the backing swap device when a zswap store attempt fails,
> > > > and do not write pages in the zswap pool back to the backing swap
> > > > device (both when the pool is full, and when the new zswap shrinker is
> > > > called).
> > > >
> > > > This new behavior can be opted-in/out on a per-cgroup basis via a new
> > > > cgroup file. By default, writeback to the swap device is enabled, which is
> > > > the previous behavior. Initially, writeback is enabled for the root
> > > > cgroup, and a newly created cgroup will inherit the current setting of
> > > > its parent.
> > > >
> > > > Note that this is subtly different from setting memory.swap.max to 0, as
> > > > it still allows for pages to be stored in the zswap pool (which itself
> > > > consumes swap space in its current form).
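For illustration, a minimal sketch of how the proposed knob would be
used from userspace (the cgroup path "workload" is a placeholder):

  # Disable zswap writeback to the backing swap device for one cgroup.
  # memory.zswap.writeback is the file proposed by this patch; it
  # defaults to 1 (writeback enabled) and is inherited from the parent.
  echo 0 > /sys/fs/cgroup/workload/memory.zswap.writeback

  # By contrast, disabling swap entirely also prevents zswap stores,
  # since the zswap pool consumes swap slots in its current form.
  echo 0 > /sys/fs/cgroup/workload/memory.swap.max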
> > > >
> > > > This patch should be applied on top of the zswap shrinker series:
> > > >
> > > > https://lore.kernel.org/linux-mm/20231130194023.4102148-1-nphamcs@gmail.com/
> > > >
> > > > as it also disables the zswap shrinker, a major source of zswap
> > > > writebacks.
> > >
> > > I am wondering about the status of the "memory.swap.tiers" proof-of-concept patch.
> > > Are we still on board to have these two patches merged together somehow, so
> > > that we can have
> > > "memory.swap.tiers" == "all" and "memory.swap.tiers" == "zswap" cover the
> > > memory.zswap.writeback == 1 and memory.zswap.writeback == 0 cases?
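For reference, the mapping being asked about would look something like
this (memory.swap.tiers is only a proof-of-concept knob, not a merged
interface, and the keywords are the ones named in this thread):

  echo all   > memory.swap.tiers   # ~ memory.zswap.writeback == 1
  echo zswap > memory.swap.tiers   # ~ memory.zswap.writeback == 0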
> > >
> > > Thanks
> > >
> > > Chris
> > >
> >
> > Hi Chris,
> >
> > I briefly summarized my recent discussion with Johannes here:
> >
> > https://lore.kernel.org/all/CAKEwX=NwGGRAtXoNPfq63YnNLBCF0ZDOdLVRsvzUmYhK4jxzHA@mail.gmail.com/
>
> Sorry, I am traveling in a different time zone, so I was not able to
> get to that email sooner. That email was sent out less than one day
> before the v6 patch, right?
>
> >
> > TL;DR is we acknowledge the potential usefulness of the swap.tiers
> > interface, but the use case is not quite there yet, so it does not
>
> I disagree that there is no use case. No use case for Meta != no use
> case for the rest of the Linux kernel community. That mindset really
> needs to shift for Linux kernel development. Respect others' use cases.
> It is not just Meta's Linux kernel. It is everybody's Linux kernel.
>
> I can give you three use cases right now:
> 1) Google's production kernel uses SSD-only swap; it is currently in
> pilot. This is not expressible by memory.zswap.writeback alone. You can
> set memory.zswap.max = 0 and memory.zswap.writeback = 1 to get an
> SSD-backed swapfile, but the whole thing feels very clunky: when what
> you really want is SSD-only swap, you have to do all this zswap config
> dance. Google has an internal memory.swapfile feature that implements
> a per-cgroup swap file type of "zswap only", "real swap file only",
> "both", or "none" (the exact keywords might be different), and it has
> been running in production for almost 10 years. The need for more than
> zswap-type per-cgroup control is really there.
>
> 2) As indicated by this discussion, Tencent has a use case for SSD
> and hard disk swap as overflow.
> https://lore.kernel.org/linux-mm/20231119194740.94101-9-ryncsn@gmail.com/
> +Kairui

Yes, we are not using zswap. We are using ZRAM for swap, since we have
many different varieties of workload instances with very flexible
storage setups. Some of them don't have the ability to set up a
swapfile, so we built a set of kernel infrastructure based on ZRAM,
which has worked pretty well so far.

The concern from some teams is that ZRAM (or zswap) can't always free
up memory, so it may lead to a higher risk of OOM compared to a
physical swap device, and they do have suitable devices for swap on
some of their machines. So secondary swap support is very helpful
during memory usage peaks.

Besides this, another requirement is that different containers may
have different priorities: some containers can tolerate high swap
overhead while some cannot, so swap tiering is useful for us in many
ways.

And thanks to cloud infrastructure, the disk setup can change from
time to time depending on workload requirements, so our requirement is
to support ZRAM (always) + SSD (optional) + HDD (also optional) as
swap backends, without making things too complex to maintain.
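For illustration, a setup like this can be sketched with stock kernel
interfaces using swap priorities (device names are placeholders; the
kernel fills higher-priority swap devices first):

  # ZRAM as the always-present fastest tier
  modprobe zram
  echo 8G > /sys/block/zram0/disksize
  mkswap /dev/zram0
  swapon -p 100 /dev/zram0

  # Optional SSD tier as overflow
  mkswap /dev/nvme0n1p2 && swapon -p 50 /dev/nvme0n1p2

  # Optional HDD tier as a last resort
  mkswap /dev/sdb1 && swapon -p 10 /dev/sdb1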

Currently we have implemented cgroup-based ZRAM compression algorithm
control, per-cgroup ZRAM accounting and limits, and an experimental
kernel worker that migrates cold swap entries from the high-priority
device to the low-priority device at very small scale (we lack the
basic mechanics to do this at large scale; however, since the slow
device has low IOPS and cold pages are rarely accessed, this hasn't
been much of a problem so far, though it is kind of ugly). The rest of
swapping (e.g. secondary swap when ZRAM is full) depends on the
kernel's native ability.
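For context, the upstream ZRAM interface only exposes a per-device
compression algorithm selector; the per-cgroup control described above
is our internal extension. A sketch with the stock sysfs interface:

  # List available algorithms; the selected one is shown in brackets.
  cat /sys/block/zram0/comp_algorithm

  # Select an algorithm (must be done before setting disksize).
  echo zstd > /sys/block/zram0/comp_algorithm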

So far it works, though not in the best form; it needs more patches to
work better (e.g. the swapin/readahead patch I sent previously). Some
of our design may also need to change in the long term. We also want a
well-built interface and kernel mechanics to manage multi-tier swap,
and I'm very willing to talk and collaborate on this.
