Message-ID: <CAKEwX=MNKY0UHbxi6Zfwf0KkepYavFaZo8F6LGe5GyyE3U35Jg@mail.gmail.com>
Date: Wed, 8 Nov 2023 13:15:17 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Chris Li <chrisl@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Domenico Cerasuolo <cerasuolodomenico@...il.com>,
Yosry Ahmed <yosryahmed@...gle.com>,
Seth Jennings <sjenning@...hat.com>,
Dan Streetman <ddstreet@...e.org>,
Vitaly Wool <vitaly.wool@...sulko.com>, mhocko@...nel.org,
roman.gushchin@...ux.dev, Shakeel Butt <shakeelb@...gle.com>,
muchun.song@...ux.dev, linux-mm <linux-mm@...ck.org>,
kernel-team@...a.com, LKML <linux-kernel@...r.kernel.org>,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kselftest@...r.kernel.org, shuah@...nel.org
Subject: Re: [PATCH v5 0/6] workload-specific and memory pressure-driven zswap writeback
On Wed, Nov 8, 2023 at 11:46 AM Chris Li <chrisl@...nel.org> wrote:
>
> Hi Nhat,
>
> Sorry for being late to the party. I want to take a look at your patch series.
> However, I wasn't able to "git am" your patch series cleanly on current
> mm-stable, mm-unstable or linux tip.
>
> $ git am patches/v5_20231106_nphamcs_workload_specific_and_memory_pressure_driven_zswap_writeback.mbx
> Applying: list_lru: allows explicit memcg and NUMA node selection
> Applying: memcontrol: allows mem_cgroup_iter() to check for onlineness
> Applying: zswap: make shrinking memcg-aware (fix)
> error: patch failed: mm/zswap.c:174
> error: mm/zswap.c: patch does not apply
> Patch failed at 0003 zswap: make shrinking memcg-aware (fix)
Ah, that was meant to be a fixlet, i.e. to be applied on top of the
original "zswap: make shrinking memcg-aware" patch. The intention was to
eventually squash the two together...

But this is getting a bit confusing, I admit. I have just rebased onto
mm-unstable, squashed everything again, and sent a single replacement
patch:

[PATCH v5 3/6 REPLACE] zswap: make shrinking memcg-aware

Let me know if this still fails to apply; if it does, I'll resend the
whole series as v6! My sincerest apologies for the trouble and
confusion :(
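
(In case it helps, the intended order is roughly the following - the
file names below are only illustrative, not the actual attachment
names:

$ git am v5-0001-list_lru-*.patch v5-0002-memcontrol-*.patch
$ git am v5-0003-REPLACE-zswap-make-shrinking-memcg-aware.patch
$ git am v5-0004-*.patch v5-0005-*.patch v5-0006-*.patch

i.e. the REPLACE patch simply takes the place of the original 3/6, and
the old "(fix)" patch should be dropped.)
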
>
> What is the base of your patches? A git hash or a branch I can pull
> from would be nice.
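
(As noted above, the replacement patch was generated on top of the
current mm-unstable. For v6 I can also record the base in the patches
themselves; if I am not mistaken, git format-patch can embed a
base-commit: trailer in the generated patches, which should make the
base unambiguous, e.g.:

$ git format-patch --base=<mm-unstable commit> --cover-letter -v6 -6
)
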
>
> Thanks
>
> Chris
>
> On Mon, Nov 6, 2023 at 10:32 AM Nhat Pham <nphamcs@...il.com> wrote:
> >
> > Changelog:
> > v5:
> > * Replace reference getting with an rcu_read_lock() section for
> > zswap lru modifications (suggested by Yosry)
> > * Add a new prep patch that allows mem_cgroup_iter() to return an
> > online cgroup.
> > * Add a callback that updates pool->next_shrink when the cgroup is
> > offlined (suggested by Yosry Ahmed, Johannes Weiner)
> > v4:
> > * Rename list_lru_add to list_lru_add_obj and __list_lru_add to
> > list_lru_add (patch 1) (suggested by Johannes Weiner and
> > Yosry Ahmed)
> > * Some cleanups on the memcg aware LRU patch (patch 2)
> > (suggested by Yosry Ahmed)
> > * Use event interface for the new per-cgroup writeback counters.
> > (patch 3) (suggested by Yosry Ahmed)
> > * Abstract zswap's lruvec states and handling into
> > zswap_lruvec_state (patch 5) (suggested by Yosry Ahmed)
> > v3:
> > * Add a patch to export per-cgroup zswap writeback counters
> > * Add a patch to update zswap's kselftest
> > * Separate the new list_lru functions into its own prep patch
> > * Do not start from the top of the hierarchy when encountering a memcg
> > that is not online during global-limit zswap writeback (patch 2)
> > (suggested by Yosry Ahmed)
> > * Do not remove the swap entry from list_lru in
> > __read_swapcache_async() (patch 2) (suggested by Yosry Ahmed)
> > * Removed a redundant zswap pool getting (patch 2)
> > (reported by Ryan Roberts)
> > * Use atomic for the nr_zswap_protected (instead of lruvec's lock)
> > (patch 5) (suggested by Yosry Ahmed)
> > * Remove the per-cgroup zswap shrinker knob (patch 5)
> > (suggested by Yosry Ahmed)
> > v2:
> > * Fix loongarch compiler errors
> > * Use pool stats instead of memcg stats when !CONFIG_MEMCG_KMEM
> >
> > There are currently several issues with zswap writeback:
> >
> > 1. There is only a single global LRU for zswap, making it impossible to
> > perform workload-specific shrinking - a memcg under memory pressure
> > cannot determine which pages in the pool it owns, and often ends up
> > writing pages from other memcgs. This issue has been previously
> > observed in practice and mitigated by simply disabling
> > memcg-initiated shrinking:
> >
> > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> > But this solution leaves a lot to be desired, as we still do not
> > have an avenue for a memcg to free up its own memory locked up in
> > the zswap pool.
> >
> > 2. We only shrink the zswap pool when the user-defined limit is hit.
> > This means that if we set the limit too high, cold data that are
> > unlikely to be used again will reside in the pool, wasting precious
> > memory. It is hard to predict how much zswap space will be needed
> > ahead of time, as this depends on the workload (specifically, on
> > factors such as memory access patterns and compressibility of the
> > memory pages).
> >
> > This patch series solves these issues by separating the global zswap
> > LRU into per-memcg and per-NUMA LRUs, and performing workload-specific
> > (i.e. memcg- and NUMA-aware) zswap writeback under memory pressure. The
> > new shrinker does not have any parameter that must be tuned by the
> > user, and can be opted in or out on a per-memcg basis.
> >
> > As a proof of concept, we ran the following synthetic benchmark:
> > build the Linux kernel in a memory-limited cgroup, and allocate some
> > cold data in tmpfs to see if the shrinker could write them out and
> > improve the overall performance. Depending on the amount of cold data
> > generated, we observe a 14% to 35% reduction in kernel CPU time used
> > in the kernel builds.
> >
> > Domenico Cerasuolo (3):
> > zswap: make shrinking memcg-aware
> > mm: memcg: add per-memcg zswap writeback stat
> > selftests: cgroup: update per-memcg zswap writeback selftest
> >
> > Nhat Pham (3):
> > list_lru: allows explicit memcg and NUMA node selection
> > memcontrol: allows mem_cgroup_iter() to check for onlineness
> > zswap: shrinks zswap pool based on memory pressure
> >
> > Documentation/admin-guide/mm/zswap.rst | 7 +
> > drivers/android/binder_alloc.c | 5 +-
> > fs/dcache.c | 8 +-
> > fs/gfs2/quota.c | 6 +-
> > fs/inode.c | 4 +-
> > fs/nfs/nfs42xattr.c | 8 +-
> > fs/nfsd/filecache.c | 4 +-
> > fs/xfs/xfs_buf.c | 6 +-
> > fs/xfs/xfs_dquot.c | 2 +-
> > fs/xfs/xfs_qm.c | 2 +-
> > include/linux/list_lru.h | 46 ++-
> > include/linux/memcontrol.h | 9 +-
> > include/linux/mmzone.h | 2 +
> > include/linux/vm_event_item.h | 1 +
> > include/linux/zswap.h | 27 +-
> > mm/list_lru.c | 48 ++-
> > mm/memcontrol.c | 20 +-
> > mm/mmzone.c | 1 +
> > mm/shrinker.c | 4 +-
> > mm/swap.h | 3 +-
> > mm/swap_state.c | 26 +-
> > mm/vmscan.c | 26 +-
> > mm/vmstat.c | 1 +
> > mm/workingset.c | 4 +-
> > mm/zswap.c | 430 +++++++++++++++++---
> > tools/testing/selftests/cgroup/test_zswap.c | 74 ++--
> > 26 files changed, 625 insertions(+), 149 deletions(-)
> >
> > --
> > 2.34.1
> >