Message-ID: <ZnsDkUySyN7wceXL@boqun-archlinux>
Date: Tue, 25 Jun 2024 10:51:13 -0700
From: Boqun Feng <boqun.feng@...il.com>
To: Leonardo Bras <leobras@...hat.com>
Cc: Vlastimil Babka <vbabka@...e.cz>, Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...nel.org>,
	Roman Gushchin <roman.gushchin@...ux.dev>,
	Shakeel Butt <shakeel.butt@...ux.dev>,
	Muchun Song <muchun.song@...ux.dev>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
	David Rientjes <rientjes@...gle.com>,
	Joonsoo Kim <iamjoonsoo.kim@....com>,
	Hyeonggon Yoo <42.hyeyoo@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
	Waiman Long <longman@...hat.com>, linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC PATCH v1 0/4] Introduce QPW for per-cpu operations

On Mon, Jun 24, 2024 at 11:57:57PM -0300, Leonardo Bras wrote:
> On Mon, Jun 24, 2024 at 03:54:14PM -0700, Boqun Feng wrote:
> > On Mon, Jun 24, 2024 at 09:31:51AM +0200, Vlastimil Babka wrote:
> > > Hi,
> > > 
> > > you've included tglx, which is great, but there's also LOCKING PRIMITIVES
> > > section in MAINTAINERS so I've added folks from there in my reply.
> > 
> > Thanks!
> > 
> > > Link to full series:
> > > https://lore.kernel.org/all/20240622035815.569665-1-leobras@redhat.com/
> > > 
> > 
> > And apologies to Leonardo... I think this is a follow-up of:
> > 
> > 	https://lpc.events/event/17/contributions/1484/
> > 
> > and I do remember we had a quick chat after that in which I suggested
> > it would be better to change to a different name; sorry that I never found
> > time to write a proper reply to your previous series [1] as promised.
> > 
> > [1]: https://lore.kernel.org/lkml/20230729083737.38699-2-leobras@redhat.com/
> 
> That's correct; I commented on this at the end of the above presentation.
> Don't worry, and thanks for suggesting the per-cpu naming; it was very
> helpful in designing this solution.
> 
> > 
> > > On 6/22/24 5:58 AM, Leonardo Bras wrote:
> > > > The problem:
> > > > Some places in the kernel implement a parallel programming strategy
> > > > consisting of local_locks() for most of the work, with a few rare remote
> > > > operations scheduled on the target cpu. This keeps cache bouncing low,
> > > > since cachelines tend to stay mostly local, and avoids the cost of locks
> > > > in non-RT kernels, even though the very few remote operations will be
> > > > expensive due to scheduling overhead.
> > > > 
> > > > On the other hand, for RT workloads this can represent a problem: getting
> > > > an important workload scheduled out to deal with remote requests is
> > > > sure to introduce unexpected deadline misses.
> > > > 
> > > > The idea:
> > > > Currently with PREEMPT_RT=y, local_locks() become per-cpu spinlocks.
> > > > In this case, instead of scheduling work on a remote cpu, it should
> > > > be safe to grab that remote cpu's per-cpu spinlock and run the required
> > > > work locally. The major cost, which is un/locking in every local function,
> > > > already happens in PREEMPT_RT.
> > > 
> > > I also noticed this a while ago (likely in the context of rewriting SLUB
> > > to use local_lock) and asked about it on IRC, and IIRC tglx wasn't fond of
> > > the idea. But I forgot the details about why, so I'll let the locking
> > > experts reply...
> > > 
> > 
> > I think it's a good idea, especially since the new name is less confusing ;-)
> > So I'd like to hear Thomas' thoughts as well.
> 
> Thanks!
> 
> > 
> > And I think a few (micro-)benchmark numbers will help.
> 
> Last year I got some numbers on how replacing local_locks with 
> spinlocks would impact memcontrol.c cache operations:
> 
> https://lore.kernel.org/all/20230125073502.743446-1-leobras@redhat.com/
> 
> tl;dr: It increased clocks spent in the most common this_cpu operations, 
> while reducing clocks spent in remote operations (drain_all_stock).
> 
> In RT case, since local locks are already spinlocks, this cost is 
> already paid, so we can get results like these:
> 
> drain_all_stock
> cpus	Upstream 	Patched		Diff (cycles)	Diff(%)
> 1	44331.10831	38978.03581	-5353.072507	-12.07520567
> 8	43992.96512	39026.76654	-4966.198572	-11.2886198
> 128	156274.6634	58053.87421	-98220.78915	-62.85138425
> 
> Upstream: Clocks to schedule work on a remote CPU (performing the work
> 	  itself is not accounted)
> Patched:  Clocks to grab the remote cpu's spinlock and perform the needed
> 	  work locally.

This looks good as a micro-benchmark, and it answers why we need patch
#3 in this series. It would be better if we had something similar for
patches #2 and #4.

Besides, micro-benchmarks are usually a bit artificial IMO; it would be
better to have data showing that your changes improve performance from
a more global view. For example, could you find or create a use case
where flush_slab() becomes somewhat of a hot path? We could then measure
the performance gain from your changes in that use case. Maybe
Vlastimil already has something in mind? ;-)

Also keep in mind that your changes apply to RT, so a natural follow-up
question would be: will they hurt system latency (e.g. something like
cyclictest numbers, with and without the series)? I know little about
this area, so I must defer to the experts.

The above concern brings another opportunity: would it make sense to use
real locks instead of queuing work on a remote CPU in cases where RT is
not needed but CPU isolation is important, i.e. nohz_full situations?
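
To make sure we are reading the proposal the same way (and to show where
a nohz_full variant could hook in), below is a minimal sketch of how I
understand the dispatch from your cover letter. The qpw_struct layout
and the queue_percpu_work_on() signature here are my guesses, so the
real patch #1 may well differ:

	#include <linux/workqueue.h>

	/* Assumed shape of the per-cpu work item; see patch #1 for
	 * the real one.
	 */
	struct qpw_struct {
		struct work_struct work;
		int cpu;
	};

	#ifndef CONFIG_PREEMPT_RT
	/* !RT: unchanged behavior, defer the work to a kworker on @cpu. */
	static inline bool queue_percpu_work_on(int cpu,
			struct workqueue_struct *wq, struct qpw_struct *qpw)
	{
		return queue_work_on(cpu, wq, &qpw->work);
	}
	#else
	/* RT: local_lock is already a per-cpu spinlock, so the work
	 * function can take the *remote* cpu's lock and run right here,
	 * instead of preempting whatever runs on @cpu.  A nohz_full
	 * variant could take this same branch at runtime for isolated
	 * cpus, provided the lock is a real spinlock there too.
	 */
	static inline bool queue_percpu_work_on(int cpu,
			struct workqueue_struct *wq, struct qpw_struct *qpw)
	{
		qpw->cpu = cpu;
		qpw->work.func(&qpw->work);
		return true;
	}
	#endif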

> 
> Do you have other suggestions to use as (micro-) benchmarking?
> 

My overall take is that you did find a valuable pattern where queuing
remote work may not be the best option, but a real-world use case would
better justify the extra complexity we will be paying.

Does this make sense?

Regards,
Boqun

> Thanks!
> Leo
> 
> 
> > 
> > Regards,
> > Boqun
> > 
> > > > Also, there is no need to worry about extra cache bouncing:
> > > > the cacheline invalidation already happens today due to schedule_work_on().
> > > > 
> > > > This approach avoids schedule_work_on(), and thus avoids scheduling out
> > > > an RT workload.
> > > > 
> > > > For patches 2, 3 & 4, I noticed that just grabbing the lock and
> > > > executing the function locally is much faster than scheduling it on a
> > > > remote cpu.
> > > > 
> > > > Proposed solution:
> > > > A new interface called Queue PerCPU Work (QPW), which should replace
> > > > Work Queue in the above-mentioned use case.
> > > > 
> > > > If PREEMPT_RT=n, this interface just wraps the current
> > > > local_locks + WorkQueue behavior, so no change in runtime is expected.
> > > > 
> > > > If PREEMPT_RT=y, queue_percpu_work_on(cpu,...) will lock that cpu's
> > > > per-cpu structure and perform the work on it locally. This is possible
> > > > because, in functions that can be used to perform work on remote
> > > > per-cpu structures, the local_lock (which is already a this_cpu
> > > > spinlock()) is replaced by a qpw_spinlock(), which can take the
> > > > per_cpu spinlock() of the cpu passed as a parameter.
> > > > 
> > > > Patch 1 implements the QPW interface, and patches 2, 3 & 4 replace the
> > > > current local_lock + WorkQueue usage with the QPW interface in
> > > > swap, memcontrol & slub.
> > > > 
> > > > Please let me know what you think on that, and please suggest
> > > > improvements.
> > > > 
> > > > Thanks a lot!
> > > > Leo
> > > > 
> > > > Leonardo Bras (4):
> > > >   Introducing qpw_lock() and per-cpu queue & flush work
> > > >   swap: apply new queue_percpu_work_on() interface
> > > >   memcontrol: apply new queue_percpu_work_on() interface
> > > >   slub: apply new queue_percpu_work_on() interface
> > > > 
> > > >  include/linux/qpw.h | 88 +++++++++++++++++++++++++++++++++++++++++++++
> > > >  mm/memcontrol.c     | 20 ++++++-----
> > > >  mm/slub.c           | 26 ++++++++------
> > > >  mm/swap.c           | 26 +++++++-------
> > > >  4 files changed, 127 insertions(+), 33 deletions(-)
> > > >  create mode 100644 include/linux/qpw.h
> > > > 
> > > > 
> > > > base-commit: 50736169ecc8387247fe6a00932852ce7b057083
> > > 
> > 
> 
