Message-ID: <b56c44e1-644b-4e5b-a518-9a9737d0a376@paulmck-laptop>
Date: Thu, 8 Jan 2026 17:55:36 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Joel Fernandes <joelagnelf@...dia.com>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Joel Fernandes <joel@...lfernandes.org>,
linux-kernel@...r.kernel.org,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang@...ux.dev>, Uladzislau Rezki <urezki@...il.com>,
rcu@...r.kernel.org
Subject: Re: [PATCH RFC 00/14] rcu: Reduce rnp->lock contention with per-CPU
blocked task lists
On Tue, Jan 06, 2026 at 03:49:07PM -0500, Joel Fernandes wrote:
>
>
> On 1/6/2026 3:35 PM, Paul E. McKenney wrote:
> >>>> About the deferred preemption, I believe Steven Rostedt at one point was
> >>>> looking at that for VMs, but that effort stalled because Peter was concerned
> >>>> that doing so would mess up the scheduler. The idea (AFAIU) is to use the
> >>>> rseq page to communicate locking information between vCPU threads and the
> >>>> host, and then let the host avoid vCPU preemption - but the scheduler needs
> >>>> to do something with that information. Otherwise, it's no use.
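For concreteness, here is a minimal sketch of the sort of protocol I
understand to be on the table: a shared per-vCPU word that the guest
updates around critical sections and that the host consults before
preempting. All of the structure and helper names below are made up
for illustration; this is not the actual rseq layout nor any existing
KVM API:

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical shared page mapped into both guest and host. */
struct vcpu_shared_state {
	atomic_int lock_depth;		/* guest-maintained nesting count */
	atomic_bool preempt_pending;	/* host's "please yield soon" flag */
};

/* Guest side: bracket a lock critical section. */
static void guest_lock_enter(struct vcpu_shared_state *s)
{
	atomic_fetch_add_explicit(&s->lock_depth, 1, memory_order_relaxed);
}

static void guest_lock_exit(struct vcpu_shared_state *s)
{
	if (atomic_fetch_sub_explicit(&s->lock_depth, 1,
				      memory_order_release) == 1 &&
	    atomic_load_explicit(&s->preempt_pending, memory_order_acquire)) {
		/* Last lock released and the host asked us to yield:
		 * a hypercall back to the host would go here. */
	}
}

/* Host side: consulted when the scheduler wants to preempt this vCPU. */
static bool host_should_defer_preemption(struct vcpu_shared_state *s)
{
	if (atomic_load_explicit(&s->lock_depth, memory_order_acquire) <= 0)
		return false;	/* no critical section, preempt now */
	atomic_store_explicit(&s->preempt_pending, true, memory_order_release);
	return true;		/* defer, within some bounded window */
}

And that last branch is exactly the sticking point being described:
the scheduler has to actually honor host_should_defer_preemption()
for the flag to buy anything.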
> >>> Has deferred preemption for userspace locking also stalled? If not,
> >>> then the scheduler's support for userspace should apply directly to
> >>> guest OSes, right?
> >> No, the user-space deferred preemption is still moving along nicely (I
> >> believe Thomas has completed most of it). The issue here is that the
> >> deferral happens before going back to user space. That's a different
> >> location than going back to the guest. The logic needs to be in that
> >> path too.
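If I am reading Steven correctly, the shape of the problem is roughly
the following: the deferral check lives in the exit-to-userspace loop,
but not in the separate path that re-enters the guest. The function
and helper names below are my own stand-ins, not the actual entry-code
symbols:

#include <stdbool.h>

/* Stubs standing in for the real scheduler and rseq hooks. */
static bool need_resched(void) { return false; }
static void schedule(void) { }
static bool deferral_requested(void) { return false; }

/* Return-to-userspace path: the deferral check is (or will be) here. */
static void exit_to_user_mode_loop(void)
{
	while (need_resched()) {
		if (deferral_requested())
			break;	/* grant the task a short extension */
		schedule();
	}
}

/* Return-to-guest path: a separate loop that would need the
 * equivalent check added before it reschedules the vCPU. */
static void vcpu_reenter_guest_loop(void)
{
	while (need_resched())
		schedule();	/* today: no deferral check here */
}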
> >
> > OK, got it, thank you!
>
> There's also the challenge of sharing the locking information with the guest
> even when there is *no contention*, since KVM is unaware of lock critical
> sections in the VM-exit path. Then there is wiring it up with the deferred
> preemption infra and moving beyond the 50-microsecond limit. If we VM-exited
> and then made a decision, I think we would easily blow past 50 microseconds
> anyway.
Yes, the VM-exit path would need to do its part. Could the 50 microseconds
be measured up to but not including the VM exit?
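One way to make that concrete would be to start the extension clock
only after the exit has completed, so that the (highly variable) exit
cost is not charged against the guest's budget. A sketch, using a
userspace clock in place of whatever the host would actually use; the
names and layout here are assumptions on my part:

#include <stdint.h>
#include <time.h>

#define EXTENSION_BUDGET_NS (50ull * 1000ull)	/* the 50-microsecond limit */

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

struct vcpu_timing {
	uint64_t ext_start_ns;	/* sampled *after* the VM exit completes */
};

static void extension_clock_start(struct vcpu_timing *t)
{
	t->ext_start_ns = now_ns();
}

static int extension_expired(const struct vcpu_timing *t)
{
	return now_ns() - t->ext_start_ns > EXTENSION_BUDGET_NS;
}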
> But again, to clarify, I didn't mean to use vCPU preemption as the driving
> use case for this; I just ran into it when I wrote a benchmark to see how
> RCU behaves in a VM.
Me, I am just trying to keep the complexity down to a dull roar.
So please do not take my pushback personally. "Just doing my job."
Thanx, Paul