Date: Mon, 24 Jun 2024 09:31:51 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Leonardo Bras <leobras@...hat.com>, Johannes Weiner <hannes@...xchg.org>,
 Michal Hocko <mhocko@...nel.org>, Roman Gushchin <roman.gushchin@...ux.dev>,
 Shakeel Butt <shakeel.butt@...ux.dev>, Muchun Song <muchun.song@...ux.dev>,
 Andrew Morton <akpm@...ux-foundation.org>, Christoph Lameter <cl@...ux.com>,
 Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
 Joonsoo Kim <iamjoonsoo.kim@....com>, Hyeonggon Yoo <42.hyeyoo@...il.com>,
 Thomas Gleixner <tglx@...utronix.de>, Marcelo Tosatti <mtosatti@...hat.com>,
 Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
 Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
 Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org, cgroups@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH v1 0/4] Introduce QPW for per-cpu operations

Hi,

you've included tglx, which is great, but there's also a LOCKING PRIMITIVES
section in MAINTAINERS, so I've added the folks from there in my reply.
Link to full series:
https://lore.kernel.org/all/20240622035815.569665-1-leobras@redhat.com/

On 6/22/24 5:58 AM, Leonardo Bras wrote:
> The problem:
> Some places in the kernel implement a parallel programming strategy
> consisting of local_locks() for most of the work, while a few rare
> remote operations are scheduled on the target cpu. This keeps cache
> bouncing low, since the cacheline tends to stay mostly local, and avoids
> the cost of locks in non-RT kernels, even though the few remote
> operations will be expensive due to scheduling overhead.
> 
> On the other hand, for RT workloads this can represent a problem: getting
> an important workload scheduled out to deal with remote requests is
> sure to introduce unexpected deadline misses.
> 
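IIUC the pattern being described looks roughly like this (a minimal
sketch in the spirit of e.g. the lru draining in mm/swap.c, not any
specific call site; INIT_WORK() setup omitted, names made up):

struct my_pcpu {
	local_lock_t lock;
	struct work_struct work;
	/* ... per-cpu data ... */
};

static DEFINE_PER_CPU(struct my_pcpu, my_pcpu);

/* fast path: only ever touches this cpu's data */
static void my_local_op(void)
{
	local_lock(&my_pcpu.lock);
	/* operate on this_cpu_ptr(&my_pcpu) */
	local_unlock(&my_pcpu.lock);
}

/* rare slow path: drain every cpu's data */
static void my_drain_all(void)
{
	int cpu;

	/* the work fn runs the local op on the target cpu */
	for_each_online_cpu(cpu)
		schedule_work_on(cpu, &per_cpu(my_pcpu, cpu).work);
}
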
> The idea:
> Currently, with PREEMPT_RT=y, local_locks() become per-cpu spinlocks.
> In this case, instead of scheduling work on a remote cpu, it should
> be safe to grab that remote cpu's per-cpu spinlock and run the required
> work locally. The major cost, the un/locking in every local function,
> is already paid under PREEMPT_RT.

I noticed this a while ago as well (likely in the context of rewriting
SLUB to use local_lock) and asked about it on IRC, and IIRC tglx wasn't
fond of the idea. But I forgot the details of why, so I'll let the
locking experts reply...
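
For reference, on PREEMPT_RT local_lock_t is typedef'd to spinlock_t
(see local_lock_internal.h), so the idea amounts to something like the
following (hand-wavy sketch reusing the my_pcpu example above, RT build
only; ignores migrate_disable() and lock ordering questions):

/* instead of schedule_work_on(cpu, ...): */
static void my_remote_op(int cpu)
{
	struct my_pcpu *p = per_cpu_ptr(&my_pcpu, cpu);

	spin_lock(&p->lock);	/* remote cpu's per-cpu lock */
	/* operate on p directly from the current cpu */
	spin_unlock(&p->lock);
}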

> Also, there is no need to worry about extra cache bouncing: the
> cacheline invalidation already happens due to schedule_work_on().
> 
> This avoids schedule_work_on(), and thus avoids scheduling out an
> RT workload.
> 
> For patches 2, 3 & 4, I noticed that just grabbing the lock and
> executing the function locally is much faster than scheduling it on a
> remote cpu.
> 
> Proposed solution:
> A new interface called Queue PerCPU Work (QPW), which should replace
> the workqueue usage in the above-mentioned use case.
> 
> If PREEMPT_RT=n, this interface just wraps the current
> local_lock + workqueue behavior, so no runtime change is expected.
> 
> If PREEMPT_RT=y, queue_percpu_work_on(cpu,...) will lock that cpu's
> per-cpu structure and perform the work on it locally. This is possible
> because, in functions that may perform work on remote per-cpu
> structures, the local_lock (which on PREEMPT_RT is already a per-cpu
> spinlock) is replaced by a qpw_spinlock(), which can take the per-cpu
> spinlock of the cpu passed as a parameter.
> 
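So IIUC the interface boils down to something like this (a sketch
reconstructed from the description above; queue_percpu_work_on() and
the qpw_ naming come from the series, everything else here is guessed):

struct qpw_struct {
	struct work_struct work;
	int cpu;	/* guessed: lets the work fn find its target data */
};

#ifndef CONFIG_PREEMPT_RT

#define qpw_lock(lock, cpu)	local_lock(lock)
#define qpw_unlock(lock, cpu)	local_unlock(lock)

static inline bool queue_percpu_work_on(int cpu,
		struct workqueue_struct *wq, struct qpw_struct *qpw)
{
	return queue_work_on(cpu, wq, &qpw->work);
}

#else /* CONFIG_PREEMPT_RT */

/* local_lock_t is a spinlock_t here, so a remote cpu's instance can
 * be taken directly */
#define qpw_lock(lock, cpu)	spin_lock(per_cpu_ptr(lock, cpu))
#define qpw_unlock(lock, cpu)	spin_unlock(per_cpu_ptr(lock, cpu))

static inline bool queue_percpu_work_on(int cpu,
		struct workqueue_struct *wq, struct qpw_struct *qpw)
{
	/* run the work function right here, under the target cpu's
	 * per-cpu lock taken by the work fn via qpw_lock() */
	qpw->work.func(&qpw->work);
	return true;
}

#endif

Call sites would then pass the target cpu through qpw_lock()/qpw_unlock()
instead of calling local_lock() directly, if I read the cover letter
right.
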
> Patch 1 implements the QPW interface, and patches 2, 3 & 4 replace the
> current local_lock + workqueue usage with the QPW interface in swap,
> memcontrol & slub.
> 
> Please let me know what you think of this, and please suggest
> improvements.
> 
> Thanks a lot!
> Leo
> 
> Leonardo Bras (4):
>   Introducing qpw_lock() and per-cpu queue & flush work
>   swap: apply new queue_percpu_work_on() interface
>   memcontrol: apply new queue_percpu_work_on() interface
>   slub: apply new queue_percpu_work_on() interface
> 
>  include/linux/qpw.h | 88 +++++++++++++++++++++++++++++++++++++++++++++
>  mm/memcontrol.c     | 20 ++++++-----
>  mm/slub.c           | 26 ++++++++------
>  mm/swap.c           | 26 +++++++-------
>  4 files changed, 127 insertions(+), 33 deletions(-)
>  create mode 100644 include/linux/qpw.h
> 
> 
> base-commit: 50736169ecc8387247fe6a00932852ce7b057083

