Message-ID: <0dd0bedf-a6de-4176-8c2e-6abab2aed3fc@arm.com>
Date: Tue, 17 Oct 2023 19:25:52 +0100
From: Ryan Roberts <ryan.roberts@....com>
To: Domenico Cerasuolo <cerasuolodomenico@...il.com>
Cc: Nhat Pham <nphamcs@...il.com>, akpm@...ux-foundation.org,
hannes@...xchg.org, yosryahmed@...gle.com, sjenning@...hat.com,
ddstreet@...e.org, vitaly.wool@...sulko.com, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeelb@...gle.com,
muchun.song@...ux.dev, linux-mm@...ck.org, kernel-team@...a.com,
linux-kernel@...r.kernel.org, cgroups@...r.kernel.org
Subject: Re: [PATCH v2 1/2] zswap: make shrinking memcg-aware
On 17/10/2023 18:56, Domenico Cerasuolo wrote:
>
>
> On Tue, Oct 17, 2023 at 7:44 PM Ryan Roberts <ryan.roberts@....com> wrote:
>
> On 19/09/2023 18:14, Nhat Pham wrote:
> > From: Domenico Cerasuolo <cerasuolodomenico@...il.com>
> >
> > Currently, we only have a single global LRU for zswap. This makes it
> > impossible to perform workload-specific shrinking - a memcg cannot
> > determine which pages in the pool it owns, and often ends up writing
> > pages from other memcgs. This issue has been previously observed in
> > practice and mitigated by simply disabling memcg-initiated shrinking:
> >
> >
> > https://lore.kernel.org/all/20230530232435.3097106-1-nphamcs@gmail.com/T/#u
> >
> > This patch fully resolves the issue by replacing the global zswap LRU
> > with memcg- and NUMA-specific LRUs, and modifying the reclaim logic:
> >
> > a) When a store attempt hits a memcg limit, it now triggers a
> > synchronous reclaim attempt that, if successful, allows the new
> > hotter page to be accepted by zswap.
> > b) If the store attempt instead hits the global zswap limit, it will
> > trigger an asynchronous reclaim attempt, in which a memcg is
> > selected for reclaim in a round-robin-like fashion.
> >
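As an aside for anyone following along: my understanding of the round-robin
selection in (b) is that it walks the memcg hierarchy with mem_cgroup_iter(),
keeping a cursor across invocations. A minimal sketch of that shape - my own
illustration, not the patch's code; "next_shrink" is a hypothetical per-pool
cursor field:

    /*
     * Pick the next memcg to reclaim from, resuming where the last
     * walk left off. mem_cgroup_iter() puts the reference on 'prev'
     * and returns the next memcg with a reference held, restarting
     * from the top of the hierarchy after the last one.
     */
    spin_lock(&zswap_pools_lock);
    pool->next_shrink = mem_cgroup_iter(NULL, pool->next_shrink, NULL);
    memcg = pool->next_shrink;
    spin_unlock(&zswap_pools_lock);
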
> > Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@...il.com>
> > Co-developed-by: Nhat Pham <nphamcs@...il.com>
> > Signed-off-by: Nhat Pham <nphamcs@...il.com>
> > ---
> > include/linux/list_lru.h | 39 +++++++
> > include/linux/memcontrol.h | 5 +
> > include/linux/zswap.h | 9 ++
> > mm/list_lru.c | 46 ++++++--
> > mm/swap_state.c | 19 ++++
> > mm/zswap.c | 221 +++++++++++++++++++++++++++++--------
> > 6 files changed, 287 insertions(+), 52 deletions(-)
> >
>
> [...]
>
> > @@ -1199,8 +1272,10 @@ bool zswap_store(struct folio *folio)
> > struct scatterlist input, output;
> > struct crypto_acomp_ctx *acomp_ctx;
> > struct obj_cgroup *objcg = NULL;
> > + struct mem_cgroup *memcg = NULL;
> > struct zswap_pool *pool;
> > struct zpool *zpool;
> > + int lru_alloc_ret;
> > unsigned int dlen = PAGE_SIZE;
> > unsigned long handle, value;
> > char *buf;
> > @@ -1218,14 +1293,15 @@ bool zswap_store(struct folio *folio)
> > if (!zswap_enabled || !tree)
> > return false;
> >
> > - /*
> > - * XXX: zswap reclaim does not work with cgroups yet. Without a
> > - * cgroup-aware entry LRU, we will push out entries system-wide based on
> > - * local cgroup limits.
> > - */
> > objcg = get_obj_cgroup_from_folio(folio);
> > - if (objcg && !obj_cgroup_may_zswap(objcg))
> > - goto reject;
> > + if (objcg && !obj_cgroup_may_zswap(objcg)) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + if (shrink_memcg(memcg)) {
> > + mem_cgroup_put(memcg);
> > + goto reject;
> > + }
> > + mem_cgroup_put(memcg);
> > + }
> >
> > /* reclaim space if needed */
> > if (zswap_is_full()) {
> > @@ -1240,7 +1316,11 @@ bool zswap_store(struct folio *folio)
> > else
> > zswap_pool_reached_full = false;
> > }
> > -
> > + pool = zswap_pool_current_get();
> > + if (!pool) {
> > + ret = -EINVAL;
> > + goto reject;
> > + }
>
>
> Hi, I'm working to add support for large folios within zswap, and noticed this
> piece of code added by this change. I don't see any corresponding put. Have I
> missed some detail or is there a bug here?
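
For clarity, my reading of the reference flow through the patched
zswap_store() - a condensed sketch based on the hunks below, not the
literal code:

    pool = zswap_pool_current_get();        /* ref A taken */
    ...
    if (is_same_filled_page) {              /* same-filled shortcut */
            zswap_pool_put(pool);           /* ref A dropped - fine */
            goto insert_entry;
    }
    ...
    entry->pool = zswap_pool_current_get(); /* ref B, owned by entry */

On the non-same-filled path ref A is never dropped, and the same goes for
the failure paths between the get and this point, so the pool refcount
looks like it leaks.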
>
>
> > /* allocate entry */
> > entry = zswap_entry_cache_alloc(GFP_KERNEL);
> > if (!entry) {
> > @@ -1256,6 +1336,7 @@ bool zswap_store(struct folio *folio)
> > entry->length = 0;
> > entry->value = value;
> > atomic_inc(&zswap_same_filled_pages);
> > + zswap_pool_put(pool);
>
> I see you put it on this (same-filled) path, but after that, there is no
> further mention of it.
>
> > goto insert_entry;
> > }
> > kunmap_atomic(src);
> > @@ -1264,6 +1345,15 @@ bool zswap_store(struct folio *folio)
> > if (!zswap_non_same_filled_pages_enabled)
> > goto freepage;
> >
> > + if (objcg) {
> > + memcg = get_mem_cgroup_from_objcg(objcg);
> > + lru_alloc_ret = memcg_list_lru_alloc(memcg, &pool->list_lru,
> > + GFP_KERNEL);
> > + mem_cgroup_put(memcg);
> > +
> > + if (lru_alloc_ret)
> > + goto freepage;
> > + }
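
(For context, my understanding of why this pre-allocation is needed:
list_lru_add() itself cannot allocate the per-memcg list_lru structures,
so they have to be set up here, where GFP_KERNEL is still usable, before
the entry is added to the LRU later on.)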
> > +
> > /* if entry is successfully added, it keeps the reference */
> > entry->pool = zswap_pool_current_get();
>
> The entry takes its own reference on the pool here.
>
> Thanks,
> Ryan
>
>
> Thanks Ryan, I think you're right. Coincidentally, we're about to send a new
> version of the series, and will make sure to address this too.
Ahh... I'm on top of mm-unstable - for some reason I thought I was on an rc and
this was already in. I guess it's less of an issue in that case.
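
In case it's useful for the respin: one way to balance the references
(untested, just a sketch) would be to hand the entry the reference already
taken at the top of zswap_store() instead of taking a second one:

    /* entry owns the reference we already hold; no second get needed */
    entry->pool = pool;

together with a matching zswap_pool_put(pool) on each failure path between
the original get and this point.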