Message-ID: <CAKEwX=M8YThH8qOdHt5TV1E4PCiw2FSv7815O3fhqXNVMt5ezg@mail.gmail.com>
Date: Wed, 6 Dec 2023 08:56:43 -0800
From: Nhat Pham <nphamcs@...il.com>
To: Yosry Ahmed <yosryahmed@...gle.com>
Cc: Chengming Zhou <chengming.zhou@...ux.dev>,
akpm@...ux-foundation.org, hannes@...xchg.org,
cerasuolodomenico@...il.com, sjenning@...hat.com,
ddstreet@...e.org, vitaly.wool@...sulko.com, mhocko@...nel.org,
roman.gushchin@...ux.dev, shakeelb@...gle.com,
muchun.song@...ux.dev, chrisl@...nel.org, linux-mm@...ck.org,
kernel-team@...a.com, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-doc@...r.kernel.org,
linux-kselftest@...r.kernel.org, shuah@...nel.org
Subject: Re: [PATCH v8 6/6] zswap: shrinks zswap pool based on memory pressure
On Tue, Dec 5, 2023 at 10:00 PM Yosry Ahmed <yosryahmed@...gle.com> wrote:
>
> [..]
> > > @@ -526,6 +582,102 @@ static struct zswap_entry *zswap_entry_find_get(struct rb_root *root,
> > > return entry;
> > > }
> > >
> > > +/*********************************
> > > +* shrinker functions
> > > +**********************************/
> > > +static enum lru_status shrink_memcg_cb(struct list_head *item, struct list_lru_one *l,
> > > + spinlock_t *lock, void *arg);
> > > +
> > > +static unsigned long zswap_shrinker_scan(struct shrinker *shrinker,
> > > + struct shrink_control *sc)
> > > +{
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(sc->memcg, NODE_DATA(sc->nid));
> > > + unsigned long shrink_ret, nr_protected, lru_size;
> > > + struct zswap_pool *pool = shrinker->private_data;
> > > + bool encountered_page_in_swapcache = false;
> > > +
> > > + nr_protected =
> > > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > > + lru_size = list_lru_shrink_count(&pool->list_lru, sc);
> > > +
> > > + /*
> > > + * Abort if the shrinker is disabled or if we are shrinking into the
> > > + * protected region.
> > > + *
> > > + * This short-circuiting is necessary because if we have too many
> > > + * concurrent reclaimers getting the freeable zswap object counts at the
> > > + * same time (before any of them made reasonable progress), the total
> > > + * number of reclaimed objects might be more than the number of unprotected
> > > + * objects (i.e. the reclaimers will reclaim into the protected area of the
> > > + * zswap LRU).
> > > + */
> > > + if (!zswap_shrinker_enabled || nr_protected >= lru_size - sc->nr_to_scan) {
> > > + sc->nr_scanned = 0;
> > > + return SHRINK_STOP;
> > > + }
> > > +
> > > + shrink_ret = list_lru_shrink_walk(&pool->list_lru, sc, &shrink_memcg_cb,
> > > + &encountered_page_in_swapcache);
> > > +
> > > + if (encountered_page_in_swapcache)
> > > + return SHRINK_STOP;
> > > +
> > > + return shrink_ret ? shrink_ret : SHRINK_STOP;
> > > +}
> > > +
> > > +static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
> > > + struct shrink_control *sc)
> > > +{
> > > + struct zswap_pool *pool = shrinker->private_data;
> > > + struct mem_cgroup *memcg = sc->memcg;
> > > + struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
> > > + unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;
> > > +
> > > +#ifdef CONFIG_MEMCG_KMEM
> > > + cgroup_rstat_flush(memcg->css.cgroup);
> > > + nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
> > > + nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
> > > +#else
> > > + /* use pool stats instead of memcg stats */
> > > + nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
> > > + nr_stored = atomic_read(&pool->nr_stored);
> > > +#endif
> > > +
> > > + if (!zswap_shrinker_enabled || !nr_stored)
> > When I tested with this series, with !zswap_shrinker_enabled (the default case),
> > I found the performance is much worse than without this patch.
> >
> > Testcase: memory.max=2G, zswap enabled, kernel build -j32 in a tmpfs directory.
> >
> > The reason seems to be the above cgroup_rstat_flush(), which caused a lot of rstat
> > lock contention on the zswap_store() path. And if I put the "zswap_shrinker_enabled"
> > check above the cgroup_rstat_flush(), the performance becomes much better.
> >
> > Maybe we can put the "zswap_shrinker_enabled" check above cgroup_rstat_flush()?
>
> Yes, we should do nothing if !zswap_shrinker_enabled. We should also
> use mem_cgroup_flush_stats() here like other places unless accuracy is
> crucial, which I doubt given that reclaim uses
> mem_cgroup_flush_stats().
Ah, good points on both suggestions. We should not do extra work for
non-users (i.e. when the shrinker is disabled). And this is a
best-effort approximation of the memory saving factor, so as long as
it is not *too* far off I think it's acceptable.
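
Concretely, I'm thinking of something like the following for
zswap_shrinker_count() (a rough, untested sketch on top of this patch;
it assumes the current no-argument mem_cgroup_flush_stats() as in
mm-stable, so it would still need the memcg-argument update for
mm-unstable):

static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
				struct shrink_control *sc)
{
	struct zswap_pool *pool = shrinker->private_data;
	struct mem_cgroup *memcg = sc->memcg;
	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(sc->nid));
	unsigned long nr_backing, nr_stored, nr_freeable, nr_protected;

	/* Bail out early so that a disabled shrinker costs nothing. */
	if (!zswap_shrinker_enabled)
		return 0;

#ifdef CONFIG_MEMCG_KMEM
	/* Thresholded flush instead of an unconditional rstat flush. */
	mem_cgroup_flush_stats();
	nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B) >> PAGE_SHIFT;
	nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED);
#else
	/* use pool stats instead of memcg stats */
	nr_backing = get_zswap_pool_size(pool) >> PAGE_SHIFT;
	nr_stored = atomic_read(&pool->nr_stored);
#endif

	if (!nr_stored)
		return 0;

	nr_protected =
		atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
	nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
	/* Reduce the lru size by an estimate of the protected pages. */
	nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;

	/*
	 * Scale the number of freeable pages by the memory saving factor,
	 * so that the better zswap compresses, the fewer pages we evict.
	 */
	return mult_frac(nr_freeable, nr_backing, nr_stored);
}

i.e. check the knob before doing any flushing work, and rely on the
thresholding in mem_cgroup_flush_stats() rather than forcing a
cgroup_rstat_flush() on every count.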
>
> mem_cgroup_flush_stats() has some thresholding to make sure we don't
> do flushes unnecessarily, and I have a pending series in mm-unstable
> that makes that thresholding per-memcg. Keep in mind that adding a
> call to mem_cgroup_flush_stats() will cause a conflict in mm-unstable,
> because the series there adds a memcg argument to
> mem_cgroup_flush_stats(). That should be easily amendable though, I can
> post a fixlet for my series to add the memcg argument there on top of
> users if needed.
Hmm, so how should we proceed from here? How about this:
a) I can send a fixlet to move the enablement check above the stats
flushing + use mem_cgroup_flush_stats()
b) Then maybe you can send a fixlet to update this new callsite?
Does that sound reasonable?
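
For b), assuming your series ends up changing the signature to take the
memcg (mem_cgroup_flush_stats(struct mem_cgroup *memcg) or similar),
the fixlet on top would presumably just be a one-liner at this callsite:

-	mem_cgroup_flush_stats();
+	mem_cgroup_flush_stats(memcg);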
>
> >
> > Thanks!
> >
> > > + return 0;
> > > +
> > > + nr_protected =
> > > + atomic_long_read(&lruvec->zswap_lruvec_state.nr_zswap_protected);
> > > + nr_freeable = list_lru_shrink_count(&pool->list_lru, sc);
> > > + /*
> > > + * Reduce the lru size by an estimate of the number of pages
> > > + * that should be protected.
> > > + */
> > > + nr_freeable = nr_freeable > nr_protected ? nr_freeable - nr_protected : 0;
> > > +
> > > + /*
> > > + * Scale the number of freeable pages by the memory saving factor.
> > > + * This ensures that the better zswap compresses memory, the fewer
> > > + * pages we will evict to swap (as it will otherwise incur IO for
> > > + * relatively small memory saving).
> > > + */
> > > + return mult_frac(nr_freeable, nr_backing, nr_stored);
> > > +}
> > > +
> > > +static void zswap_alloc_shrinker(struct zswap_pool *pool)
> > > +{
> > > + pool->shrinker =
> > > + shrinker_alloc(SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE, "mm-zswap");
> > > + if (!pool->shrinker)
> > > + return;
> > > +
> > > + pool->shrinker->private_data = pool;
> > > + pool->shrinker->scan_objects = zswap_shrinker_scan;
> > > + pool->shrinker->count_objects = zswap_shrinker_count;
> > > + pool->shrinker->batch = 0;
> > > + pool->shrinker->seeks = DEFAULT_SEEKS;
> > > +}
> > > +
> > > /*********************************
> > > * per-cpu code
> > > **********************************/
> [..]