Date:   Sat, 8 May 2021 10:31:25 +0800
From:   chi wu <wuchi.zero@...il.com>
To:     akpm@...ux-foundation.org
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org, tj@...nel.org
Subject: Re: [PATCH] mm/page-writeback: Fix performance when BDI's share of
 ratio is 0.

Ping...

On Thu, Apr 29, 2021 at 6:51 AM Chi Wu <wuchi.zero@...il.com> wrote:
>
> Fix performance when BDI's share of ratio is 0.
>
> The issue is similar to commit 74d369443325 ("writeback: Fix
> performance regression in wb_over_bg_thresh()").
>
> balance_dirty_pages() and the writeback worker can also disagree on
> whether to write back when a BDI uses BDI_CAP_STRICTLIMIT and the
> BDI's share of the threshold ratio is zero.
>
> For example, a thread on cpu0 writes 32 pages and then calls
> balance_dirty_pages(); it wakes up background writeback and pauses
> because wb_dirty > wb->wb_thresh = 0 (the share of the threshold
> ratio is zero). The thread is likely to run on cpu0 again because the
> scheduler prefers the previous CPU, while the writeback worker may
> run on another CPU (1, 2, ...). There, the per-CPU approximation
> wb_stat(wb, WB_RECLAIMABLE) read in wb_over_bg_thresh() is 0, so the
> worker does no writeback and returns.
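>
> For reference, a minimal sketch of the two readers (assuming the
> usual percpu_counter helpers; not copied verbatim from
> include/linux/backing-dev.h):
>
>   /* Approximate: returns only the folded global count, ignoring
>    * per-CPU deltas that have not yet reached the batch size. */
>   static inline s64 wb_stat(struct bdi_writeback *wb, enum wb_stat_item item)
>   {
>           return percpu_counter_read_positive(&wb->stat[item]);
>   }
>
>   /* Exact: walks every CPU and folds in the pending deltas, so a few
>    * dirty pages accounted on another CPU are not reported as 0. */
>   static inline s64 wb_stat_sum(struct bdi_writeback *wb, enum wb_stat_item item)
>   {
>           return percpu_counter_sum_positive(&wb->stat[item]);
>   }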
>
> Thus, balance_dirty_pages() keeps looping, sleeping and then waking
> up a worker that does nothing. It stays stuck in this state until the
> writeback worker happens to run on the CPU holding the dirty counts
> or until the dirty pages expire.
>
> The fix is to read the exact wb_stat_sum() instead of the approximate
> wb_stat() when the threshold is low.
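>
> As a rough illustration (the batch size here is hypothetical; the
> real value scales with the number of CPUs): with a per-CPU batch of
> 32 on a 4-CPU machine, wb_stat() can lag the true count by up to
> 4 * 32 = 128 pages, which is roughly what wb_stat_error() reports. A
> threshold below 2 * wb_stat_error() is on the order of that error, so
> only the exact wb_stat_sum() gives a meaningful comparison there.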
>
> Signed-off-by: Chi Wu <wuchi.zero@...il.com>
> ---
>  mm/page-writeback.c | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 0062d5c57d41..bd7052295246 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -1945,6 +1945,8 @@ bool wb_over_bg_thresh(struct bdi_writeback *wb)
>         struct dirty_throttle_control * const gdtc = &gdtc_stor;
>         struct dirty_throttle_control * const mdtc = mdtc_valid(&mdtc_stor) ?
>                                                      &mdtc_stor : NULL;
> +       unsigned long reclaimable;
> +       unsigned long thresh;
>
>         /*
>          * Similar to balance_dirty_pages() but ignores pages being written
> @@ -1957,8 +1959,13 @@ bool wb_over_bg_thresh(struct bdi_writeback *wb)
>         if (gdtc->dirty > gdtc->bg_thresh)
>                 return true;
>
> -       if (wb_stat(wb, WB_RECLAIMABLE) >
> -           wb_calc_thresh(gdtc->wb, gdtc->bg_thresh))
> +       thresh = wb_calc_thresh(gdtc->wb, gdtc->bg_thresh);
> +       if (thresh < 2 * wb_stat_error())
> +               reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE);
> +       else
> +               reclaimable = wb_stat(wb, WB_RECLAIMABLE);
> +
> +       if (reclaimable > thresh)
>                 return true;
>
>         if (mdtc) {
> @@ -1972,8 +1979,13 @@ bool wb_over_bg_thresh(struct bdi_writeback *wb)
>                 if (mdtc->dirty > mdtc->bg_thresh)
>                         return true;
>
> -               if (wb_stat(wb, WB_RECLAIMABLE) >
> -                   wb_calc_thresh(mdtc->wb, mdtc->bg_thresh))
> +               thresh = wb_calc_thresh(mdtc->wb, mdtc->bg_thresh);
> +               if (thresh < 2 * wb_stat_error())
> +                       reclaimable = wb_stat_sum(wb, WB_RECLAIMABLE);
> +               else
> +                       reclaimable = wb_stat(wb, WB_RECLAIMABLE);
> +
> +               if (reclaimable > thresh)
>                         return true;
>         }
>
> --
> 2.17.1
>
