Message-ID: <CAMgjq7BXXSHKJXijpB_FfNA9N=dh5uWHBJmHrJKoLOShrqvDYA@mail.gmail.com>
Date: Wed, 20 Dec 2023 18:21:28 +0800
From: Kairui Song <ryncsn@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: Nhat Pham <nphamcs@...il.com>, akpm@...ux-foundation.org, tj@...nel.org,
lizefan.x@...edance.com, hannes@...xchg.org, cerasuolodomenico@...il.com,
yosryahmed@...gle.com, sjenning@...hat.com, ddstreet@...e.org,
vitaly.wool@...sulko.com, mhocko@...nel.org, roman.gushchin@...ux.dev,
shakeelb@...gle.com, muchun.song@...ux.dev, hughd@...gle.com, corbet@....net,
konrad.wilk@...cle.com, senozhatsky@...omium.org, rppt@...nel.org,
linux-mm@...ck.org, kernel-team@...a.com, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, david@...t.cz, Minchan Kim <minchan@...gle.com>,
Zhongkun He <hezhongkun.hzk@...edance.com>
Subject: Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling
Chris Li <chrisl@...nel.org> wrote on Wed, Dec 13, 2023 at 07:39:
>
> Hi Kairui,
>
> Thanks for sharing the information on how you use swap.
Hi Chris,
>
> On Mon, Dec 11, 2023 at 1:31 AM Kairui Song <ryncsn@...il.com> wrote:
> > > 2) As indicated by this discussion, Tencent has a usage case for SSD
> > > and hard disk swap as overflow.
> > > https://lore.kernel.org/linux-mm/20231119194740.94101-9-ryncsn@gmail.com/
> > > +Kairui
> >
> > Yes, we are not using zswap. We are using ZRAM for swap since we have
> > many different varieties of workload instances, with a very flexible
> > storage setup. Some of them don't have the ability to set up a
> > swapfile. So we built a pack of kernel infrastructures based on ZRAM,
> > which so far worked pretty well.
>
> This is great. The usage case is actually much more than I expected.
> For example, I never thought of zram as a swap tier. Now you mention
> it. I am considering whether it makes sense to add zram to the
> memory.swap.tiers as well as zswap.
>
> >
> > The concern from some teams is that ZRAM (or zswap) can't always free
> > up memory so they may lead to higher risk of OOM compared to a
> > physical swap device, and they do have suitable devices for doing swap
> > on some of their machines. So a secondary swap support is very helpful
> > in case of memory usage peak.
> >
> > Besides this, another requirement is that different containers may
> > have different priority, some containers can tolerate high swap
> > overhead while some cannot, so swap tiering is useful for us in many
> > ways.
> >
> > And thanks to cloud infrastructure the disk setup could change from
> > time to time depending on workload requirements, so our requirement is
> > to support ZRAM (always) + SSD (optional) + HDD (also optional) as
> > swap backends, while not making things too complex to maintain.
>
> Just curious, do you use ZRAM + SSD + HDD all enabled? Do you ever
> consider moving data from ZRAM to SSD, or from SSD to HDD? If you do,
> I do see the possibility of having more general swap tiers support and
> sharing the shrinking code between tiers somehow. Granted there are
> many unanswered questions and a lot of infrastructure is lacking.
> Gathering requirements, weight in the priority of the quirement is the
> first step towards a possible solution.
Sorry for the late response. Yes, it's our plan to use ZRAM + SSD +
HDD all enabled when possible, although currently only ZRAM + SSD is
expected.

I see this discussion is still going on, so I'll just add some info here...
We have some test environments with a kernel worker enabled to move
data from ZRAM to SSD, and from SSD to HDD too, to free up space for
higher-tier swap devices. The kworker is simple: it maintains a swap
entry LRU for every swap device (maybe worth noting here that there
is currently no LRU-based writeback for ZRAM, ZRAM writeback requires
a fixed block device at init, and a swap-device-level LRU is also
helpful for migrating entries from SSD to HDD). It walks the page
table to swap in the coldest swap entry, then swaps it out immediately
to a lower tier, doing this page by page periodically. Overhead and
memory footprint are minimal with a limited moving rate, but the
efficiency for large-scale data moving is terrible, so it only has
very limited usage. I was trying to come up with a better design but
am currently not working on it.
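
For illustration, here is a rough userspace sketch of that demotion
loop. All names in it (struct swap_tier, demote_one, ...) are
hypothetical and only model the per-device LRU bookkeeping; the real
kworker walks kernel page tables and the swap cache, not toy lists:

#include <stdio.h>

struct swap_entry {
	unsigned long offset;			/* slot on the backing device */
	struct swap_entry *prev, *next;		/* LRU linkage */
};

struct swap_tier {
	const char *name;			/* e.g. "zram0", "ssd", "hdd" */
	struct swap_entry *lru_head;		/* hottest entry */
	struct swap_entry *lru_tail;		/* coldest entry */
};

/* Detach the coldest entry (LRU tail) from a tier, or NULL if empty. */
static struct swap_entry *lru_pop_coldest(struct swap_tier *t)
{
	struct swap_entry *e = t->lru_tail;

	if (!e)
		return NULL;
	t->lru_tail = e->prev;
	if (t->lru_tail)
		t->lru_tail->next = NULL;
	else
		t->lru_head = NULL;
	e->prev = e->next = NULL;
	return e;
}

/* Insert an entry at the hot end of a tier's LRU. */
static void lru_push_hot(struct swap_tier *t, struct swap_entry *e)
{
	e->prev = NULL;
	e->next = t->lru_head;
	if (t->lru_head)
		t->lru_head->prev = e;
	t->lru_head = e;
	if (!t->lru_tail)
		t->lru_tail = e;
}

/*
 * One demotion step: take the coldest entry of the upper tier and move
 * it to the lower tier.  The real worker would fault the page in via
 * the page table and then write it out to the lower device; here only
 * the bookkeeping entry moves.
 */
static int demote_one(struct swap_tier *from, struct swap_tier *to)
{
	struct swap_entry *e = lru_pop_coldest(from);

	if (!e)
		return 0;
	printf("demote offset %lu: %s -> %s\n", e->offset, from->name, to->name);
	lru_push_hot(to, e);
	return 1;
}

int main(void)
{
	struct swap_tier zram = { .name = "zram0" };
	struct swap_tier ssd  = { .name = "ssd" };
	struct swap_entry pages[3] = { { .offset = 1 }, { .offset = 2 }, { .offset = 3 } };
	int i;

	for (i = 0; i < 3; i++)
		lru_push_hot(&zram, &pages[i]);

	/* A rate-limited worker would move one cold page per tick. */
	while (demote_one(&zram, &ssd))
		;
	return 0;
}

The sketch leaves out the rate limiting and the actual page I/O
(swap-in via the page table, then swap-out to the lower device), which
is where all the real overhead is.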