Message-ID: <CAF8kJuNFfzkDXCX52Y9whz4TcWJMjM0kk-mKvet0Ge0cEOYmsQ@mail.gmail.com>
Date: Sun, 19 Nov 2023 00:29:16 -0800
From: Chris Li <chrisl@...nel.org>
To: Nhat Pham <nphamcs@...il.com>
Cc: Zhongkun He <hezhongkun.hzk@...edance.com>,
Yosry Ahmed <yosryahmed@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Vitaly Wool <vitaly.wool@...sulko.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, Ying <ying.huang@...el.com>
Subject: Re: [External] Re: [PATCH] mm:zswap: fix zswap entry reclamation failure in two scenarios

On Sat, Nov 18, 2023 at 10:44 AM Nhat Pham <nphamcs@...il.com> wrote:
> > Why do we need to release them?
> > Consider this scenario: there is a lot of data cached in memory and
> > zswap, we hit the limit, and shrink_worker will fail. The new incoming
> > data will be written directly to swap due to zswap_store failure.
> > Should we free the oldest entry to make room for the latest one in
> > zswap?
>
> Shameless plug: zswap will be much less likely to hit the limit (global
> or cgroup) with the shrinker enabled ;) It will proactively reclaim
> objects well ahead of the limit.

I think that is actually the proper path: by the time zswap hits the
zpool limit, it is already too late to shrink the zpool to make room.
The shrinker should have already guaranteed enough free pages for
writeback to make forward progress.
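
To make the difference concrete, here is a rough userspace sketch (not
the kernel code; pool_pages, max_pages, writeback_oldest and friends
are all made up for illustration). It contrasts the reactive failure
path from the scenario above with a proactive pass that keeps headroom
below the limit:

        /*
         * Sketch only -- not the in-tree implementation.  The helpers
         * below are stand-ins for the real zswap internals; the point
         * is the policy difference, not the mechanism.
         */
        #include <stdbool.h>
        #include <stdio.h>

        static unsigned long pool_pages = 98;       /* pages currently in the zpool */
        static const unsigned long max_pages = 100; /* the configured zswap limit   */

        /* Stand-in for writing one cold entry back to swap. */
        static int writeback_oldest(void)
        {
                if (!pool_pages)
                        return -1;
                pool_pages--;
                return 0;
        }

        /* Reactive behavior from the scenario above: once the pool is
         * at the limit, the store fails and the new page goes straight
         * to swap. */
        static bool store_page(void)
        {
                if (pool_pages >= max_pages)
                        return false;   /* caller falls back to plain swap */
                pool_pages++;
                return true;
        }

        /* Proactive policy: keep ~10% headroom *below* the limit, so
         * stores still succeed by the time new pages arrive. */
        static void proactive_shrink(void)
        {
                unsigned long target = max_pages - max_pages / 10;

                while (pool_pages > target)
                        if (writeback_oldest())
                                break;  /* writeback failed; stop rather than spin */
        }

        int main(void)
        {
                proactive_shrink();     /* run ahead of time, e.g. from a shrinker */
                printf("store %s, pool %lu/%lu pages\n",
                       store_page() ? "succeeded" : "failed -> direct swap",
                       pool_pages, max_pages);
                return 0;
        }

With only the reactive path, store_page() fails exactly when the pool
is full and the page has to go straight to swap; the proactive pass
keeps headroom so that case should rarely be hit.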

> It comes with its own can of worms, of course - it's unlikely to work
> for all workloads in its current form, but perhaps worth experimenting
> with/improving upon?

Agree.

Chris