Date:   Thu, 23 Feb 2023 11:54:53 -0300
From:   Marcelo Tosatti <mtosatti@...hat.com>
To:     Christoph Lameter <cl@...ux.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Cc:     Aaron Tomlin <atomlin@...mlin.com>,
        Frederic Weisbecker <frederic@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH v2 00/11] fold per-CPU vmstats remotely

On Thu, Feb 09, 2023 at 12:01:50PM -0300, Marcelo Tosatti wrote:
> This patch series addresses the following two problems:
> 
>     1. A customer provided evidence indicating that the idle
>        tick was stopped, yet CPU-specific vmstat counters
>        remained populated.
> 
>        One can therefore only assume quiet_vmstat() was not
>        invoked on return to the idle loop. If I understand
>        correctly, this divergence might erroneously prevent a
>        reclaim attempt by kswapd. If the number of zone-specific
>        free pages is below the per-CPU drift value, then
>        zone_page_state_snapshot() is used to compute a more
>        accurate view of that statistic (see the sketch after
>        this list). Thus any task blocked on the NUMA-node-specific
>        pfmemalloc_wait queue will be unable to make significant
>        progress via direct reclaim unless it is killed after
>        being woken up by kswapd (see throttle_direct_reclaim()).
> 
>     2. With a SCHED_FIFO task that busy-loops on a given CPU,
>        and the kworker for that CPU at SCHED_OTHER priority,
>        queuing work to sync the per-CPU vmstats either causes
>        that work to never execute, or stalld (the stall daemon)
>        boosts the kworker's priority, which causes a latency
>        violation.
> 
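To make the divergence described in item 1 concrete, here is a minimal
userspace sketch (plain C11). It is not the kernel code: zone_free_pages,
pcpu_diff, page_state() and page_state_snapshot() below are illustrative
stand-ins for the zone counter, the per-CPU diffs, a plain counter read and
zone_page_state_snapshot(), respectively.

#include <stdio.h>

#define NR_CPUS 4

static long zone_free_pages;            /* global (folded) counter        */
static signed char pcpu_diff[NR_CPUS];  /* per-CPU deltas, not yet folded */

/* Plain read: misses any delta that has not been folded yet. */
static long page_state(void)
{
        return zone_free_pages;
}

/* Snapshot read: folds in every CPU's pending delta, in the spirit of
 * zone_page_state_snapshot(); used when free pages are close to the
 * per-CPU drift threshold, e.g. by throttle_direct_reclaim(). */
static long page_state_snapshot(void)
{
        long v = zone_free_pages;

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                v += pcpu_diff[cpu];
        return v;
}

int main(void)
{
        zone_free_pages = 100;
        pcpu_diff[2] = 50;      /* CPU 2 went idle without quiet_vmstat() */

        /* The two views diverge until someone folds CPU 2's delta. */
        printf("plain=%ld snapshot=%ld\n",
               page_state(), page_state_snapshot());
        return 0;
}
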
> Both problems are addressed by having vmstat_shepherd flush the
> per-CPU counters to the global counters from remote CPUs.
> 
> This is done using cmpxchg to manipulate the counters, both
> locally on the owning CPU (via the account functions) and
> remotely (via cpu_vm_stats_fold).
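
For reference, here is a minimal userspace model (C11 atomics) of the remote
fold. It is only a sketch of the technique, not the series' code: the series
uses cmpxchg/try_cmpxchg on the per-CPU diffs and cpu_vm_stats_fold() on the
shepherd side, while mod_local(), fold_remote(), pcpu_diff and global_count
below are illustrative names.

#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static _Atomic long global_count;               /* zone/node counter stand-in */
static _Atomic signed char pcpu_diff[NR_CPUS];  /* per-CPU delta stand-in     */

/* Local update path: accumulate into this CPU's delta (the account
 * functions).  The cmpxchg loop ensures a concurrent remote fold cannot
 * lose this update. */
static void mod_local(int cpu, signed char delta)
{
        signed char old = atomic_load(&pcpu_diff[cpu]);

        while (!atomic_compare_exchange_weak(&pcpu_diff[cpu], &old,
                                             (signed char)(old + delta)))
                ;       /* 'old' was refreshed, retry with the new sum */
}

/* Remote fold: vmstat_shepherd-style flush of another CPU's delta. */
static void fold_remote(int cpu)
{
        signed char val = atomic_load(&pcpu_diff[cpu]);

        /* Atomically take the delta by swapping it to 0. */
        while (!atomic_compare_exchange_weak(&pcpu_diff[cpu], &val, 0))
                ;
        /* The swap succeeded: 'val' is the delta that was just taken. */
        if (val)
                atomic_fetch_add(&global_count, (long)val);
}

int main(void)
{
        mod_local(1, 3);        /* CPU 1 accounts some pages locally         */
        fold_remote(1);         /* the shepherd folds CPU 1's delta remotely */
        printf("global=%ld diff=%d\n",
               atomic_load(&global_count), (int)atomic_load(&pcpu_diff[1]));
        return 0;
}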
> 
> Thanks to Aaron Tomlin for diagnosing issue 1 and writing
> the initial patch series.
> 
> v2:
> - actually use LOCK CMPXCHG on counter mod/inc/dec functions
>   (Christoph Lameter)
> - use try_cmpxchg for cmpxchg loops
>   (Uros Bizjak / Matthew Wilcox)
> 
> 
>  arch/arm64/include/asm/percpu.h     |   16 ++-
>  arch/loongarch/include/asm/percpu.h |   23 ++++
>  arch/s390/include/asm/percpu.h      |    5 +
>  arch/x86/include/asm/percpu.h       |   39 ++++----
>  include/asm-generic/percpu.h        |   17 +++
>  include/linux/mmzone.h              |    3 
>  kernel/fork.c                       |    2 
>  kernel/scs.c                        |    2 
>  mm/vmstat.c                         |  424 ++++++++++++++++++++++++++++++++++++++++++++++++------------------------------------------
>  9 files changed, 308 insertions(+), 223 deletions(-)

Friendly ping: are there any other concerns with this series?

If not, Acked-by or Reviewed-by tags would be welcome.

