Date: Wed, 29 May 2024 22:27:53 +0900
From: Takero Funaki <flintglass@...il.com>
To: Nhat Pham <nphamcs@...il.com>
Cc: Johannes Weiner <hannes@...xchg.org>, Yosry Ahmed <yosryahmed@...gle.com>, 
	Chengming Zhou <chengming.zhou@...ux.dev>, Jonathan Corbet <corbet@....net>, 
	Andrew Morton <akpm@...ux-foundation.org>, 
	Domenico Cerasuolo <cerasuolodomenico@...il.com>, linux-mm@...ck.org, linux-doc@...r.kernel.org, 
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: zswap: proactive shrinking before pool size limit
 is hit

On Wed, May 29, 2024 at 1:01 Nhat Pham <nphamcs@...il.com> wrote:
>
> On Mon, May 27, 2024 at 9:34 PM Takero Funaki <flintglass@...il.com> wrote:
> >
> > This patch implements proactive shrinking of zswap pool before the max
> > pool size limit is reached. This also changes zswap to accept new pages
> > while the shrinker is running.
> >
> > To prevent zswap from rejecting new pages and incurring latency when
> > zswap is full, this patch queues the global shrinker once pool usage
> > crosses a threshold midway between 100% and accept_thr_percent,
> > instead of waiting for the max pool size to be hit.  The pool size will
> > be kept between 90% and 95% for the default accept_thr_percent=90.
> > Since the current global shrinker continues to shrink until
> > accept_thr_percent, we no longer need to maintain the hysteresis
> > variable tracking the pool limit overage in zswap_store().
> >
> > Before this patch, zswap rejected pages while the shrinker was running,
> > without incrementing the zswap_pool_limit_hit counter. This could be one
> > reason why zswap wrote through new pages before writing back old ones.
> > With this patch, zswap accepts new pages while shrinking, and increments
> > the counter if and only if a page is rejected because the max pool size
> > was reached.
> >
> > The name of the sysfs tunable accept_thr_percent is unchanged, as it is
> > still the stop condition of the shrinker.
> > The respective documentation is updated to describe the new behavior.
>
> I'm a bit unsure about using this tunable. How would the user
> determine how empty the zswap pool should be kept?
>
> I was actually thinking of removing this knob altogether :)
>

If we see a large pool_limit_hit, that indicates we should lower the
accept threshold to make more space proactively, so that new pages
from active processes are stored rather than rejected.
If not, we can set a higher accept threshold to keep more stored pages
available for swapin, e.g. from low-activity background processes.
That depends on one's workload, and can be tuned by the admin, I think.
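
For illustration, here is a minimal userspace sketch of the threshold
arithmetic (it mirrors zswap_shrink_start_pages() from the patch; the
1000-page pool is just a made-up number): with the default
accept_thr_percent=90, proactive shrinking starts at 95% of the max
pool size and stops at 90%.

#include <stdio.h>

/*
 * Mirrors zswap_shrink_start_pages() in the patch: the shrink-start
 * threshold sits midway between accept_thr_percent and 100% of the
 * max pool size.
 */
static unsigned long shrink_start_pages(unsigned long max_pages,
					unsigned int accept_thr_percent)
{
	return max_pages * (100 - (100 - accept_thr_percent) / 2) / 100;
}

int main(void)
{
	unsigned long max_pages = 1000;	/* hypothetical max pool size */
	unsigned int accept = 90;	/* default accept_thr_percent */

	/* prints: start shrinking at 950 pages, stop at 900 pages */
	printf("start shrinking at %lu pages, stop at %lu pages\n",
	       shrink_start_pages(max_pages, accept),
	       max_pages * accept / 100);
	return 0;
}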

> >
> > Signed-off-by: Takero Funaki <flintglass@...il.com>
> > ---
> >  Documentation/admin-guide/mm/zswap.rst | 17 +++++----
> >  mm/zswap.c                             | 49 +++++++++++++++-----------
> >  2 files changed, 37 insertions(+), 29 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
> > index 3598dcd7dbe7..a1d8f167a27a 100644
> > --- a/Documentation/admin-guide/mm/zswap.rst
> > +++ b/Documentation/admin-guide/mm/zswap.rst
> > @@ -111,18 +111,17 @@ checked if it is a same-value filled page before compressing it. If true, the
> >  compressed length of the page is set to zero and the pattern or same-filled
> >  value is stored.
> >
> > -To prevent zswap from shrinking pool when zswap is full and there's a high
> > -pressure on swap (this will result in flipping pages in and out zswap pool
> > -without any real benefit but with a performance drop for the system), a
> > -special parameter has been introduced to implement a sort of hysteresis to
> > -refuse taking pages into zswap pool until it has sufficient space if the limit
> > -has been hit. To set the threshold at which zswap would start accepting pages
> > -again after it became full, use the sysfs ``accept_threshold_percent``
> > -attribute, e. g.::
> > +To prevent zswap from rejecting new pages and incurring latency when zswap is
> > +full, zswap starts a worker called the global shrinker that proactively evicts
> > +some pages from the pool to swap devices as the pool approaches its limit.
> > +The global shrinker continues to evict pages until there is sufficient space to
> > +accept new pages. To control how many pages should remain in the pool, use the
> > +sysfs ``accept_threshold_percent`` attribute as a percentage of the max pool
> > +size, e. g.::
> >
> >         echo 80 > /sys/module/zswap/parameters/accept_threshold_percent
> >
> > -Setting this parameter to 100 will disable the hysteresis.
> > +Setting this parameter to 100 will disable the proactive shrinking.
> >
> >  Some users cannot tolerate the swapping that comes with zswap store failures
> >  and zswap writebacks. Swapping can be disabled entirely (without disabling
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 08a6f5a6bf62..0186224be8fc 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -71,8 +71,6 @@ static u64 zswap_reject_kmemcache_fail;
> >
> >  /* Shrinker work queue */
> >  static struct workqueue_struct *shrink_wq;
> > -/* Pool limit was hit, we need to calm down */
> > -static bool zswap_pool_reached_full;
> >
> >  /*********************************
> >  * tunables
> > @@ -118,7 +116,10 @@ module_param_cb(zpool, &zswap_zpool_param_ops, &zswap_zpool_type, 0644);
> >  static unsigned int zswap_max_pool_percent = 20;
> >  module_param_named(max_pool_percent, zswap_max_pool_percent, uint, 0644);
> >
> > -/* The threshold for accepting new pages after the max_pool_percent was hit */
> > +/*
> > + * The percentage of pool size that the global shrinker keeps in memory.
> > + * It does not protect old pages from the dynamic shrinker.
> > + */
> >  static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */
> >  module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
> >                    uint, 0644);
> > @@ -487,6 +488,14 @@ static unsigned long zswap_accept_thr_pages(void)
> >         return zswap_max_pages() * zswap_accept_thr_percent / 100;
> >  }
> >
> > +/*
> > + * Returns the threshold at which proactive global shrinking starts.
> > + */
> > +static inline unsigned long zswap_shrink_start_pages(void)
> > +{
> > +       return zswap_max_pages() * (100 - (100 - zswap_accept_thr_percent)/2) / 100;
> > +}
> > +
> >  unsigned long zswap_total_pages(void)
> >  {
> >         struct zswap_pool *pool;
> > @@ -504,21 +513,6 @@ unsigned long zswap_total_pages(void)
> >         return total;
> >  }
> >
> > -static bool zswap_check_limits(void)
> > -{
> > -       unsigned long cur_pages = zswap_total_pages();
> > -       unsigned long max_pages = zswap_max_pages();
> > -
> > -       if (cur_pages >= max_pages) {
> > -               zswap_pool_limit_hit++;
> > -               zswap_pool_reached_full = true;
> > -       } else if (zswap_pool_reached_full &&
> > -                  cur_pages <= zswap_accept_thr_pages()) {
> > -                       zswap_pool_reached_full = false;
> > -       }
> > -       return zswap_pool_reached_full;
> > -}
> > -
> >  /*********************************
> >  * param callbacks
> >  **********************************/
> > @@ -1475,6 +1469,8 @@ bool zswap_store(struct folio *folio)
> >         struct obj_cgroup *objcg = NULL;
> >         struct mem_cgroup *memcg = NULL;
> >         unsigned long value;
> > +       unsigned long cur_pages;
> > +       bool need_global_shrink = false;
> >
> >         VM_WARN_ON_ONCE(!folio_test_locked(folio));
> >         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> > @@ -1497,8 +1493,18 @@ bool zswap_store(struct folio *folio)
> >                 mem_cgroup_put(memcg);
> >         }
> >
> > -       if (zswap_check_limits())
> > +       cur_pages = zswap_total_pages();
> > +
> > +       if (cur_pages >= zswap_max_pages()) {
> > +               zswap_pool_limit_hit++;
> > +               need_global_shrink = true;
> >                 goto reject;
> > +       }
> > +
> > +       /* schedule shrink for incoming pages */
> > +       if (cur_pages >= zswap_shrink_start_pages()
> > +                       && !work_pending(&zswap_shrink_work))
> > +               queue_work(shrink_wq, &zswap_shrink_work);
>
> I think the work_pending() check here is redundant. If you look at the
> documentation, queue_work() only succeeds if zswap_shrink_work is not
> already on the shrink_wq workqueue.
>
> More specifically, if you check the code, queue_work calls
> queue_work_on, which has this check:
>
> if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work)) &&
>    !clear_pending_if_disabled(work)) {
>
> This is the same bit-check as work_pending, which is defined as:
>
> #define work_pending(work) \
> test_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))
>

Thanks for the review and the info. I will remove the work_pending() checks.
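
Concretely, the call sites in v2 would become roughly (a sketch only;
as you note, queue_work() already does the test_and_set_bit() on
WORK_STRUCT_PENDING_BIT internally, so the extra check buys nothing):

	/* schedule shrink for incoming pages */
	if (cur_pages >= zswap_shrink_start_pages())
		queue_work(shrink_wq, &zswap_shrink_work);

and in the reject path:

	if (need_global_shrink)
		queue_work(shrink_wq, &zswap_shrink_work);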

>
> >
> >         /* allocate entry */
> >         entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
> > @@ -1541,6 +1547,9 @@ bool zswap_store(struct folio *folio)
> >
> >                 WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
> >                 zswap_reject_alloc_fail++;
> > +
> > +               /* shrink the pool to reduce entries in the xarray */
> > +               need_global_shrink = true;
> >                 goto store_failed;
> >         }
> >
> > @@ -1590,7 +1599,7 @@ bool zswap_store(struct folio *folio)
> >         zswap_entry_cache_free(entry);
> >  reject:
> >         obj_cgroup_put(objcg);
> > -       if (zswap_pool_reached_full)
> > +       if (need_global_shrink && !work_pending(&zswap_shrink_work))
> >                 queue_work(shrink_wq, &zswap_shrink_work);
> >  check_old:
> >         /*
> > --
> > 2.43.0
> >



-- 

<flintglass@...il.com>
