Message-ID: <8d7b1107-ec1e-4c1d-989c-20c73af9c43c@nvidia.com>
Date: Tue, 6 Jan 2026 15:49:07 -0500
From: Joel Fernandes <joelagnelf@...dia.com>
To: paulmck@...nel.org, Steven Rostedt <rostedt@...dmis.org>
Cc: Joel Fernandes <joel@...lfernandes.org>, linux-kernel@...r.kernel.org,
 Frederic Weisbecker <frederic@...nel.org>,
 Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
 Josh Triplett <josh@...htriplett.org>, Boqun Feng <boqun.feng@...il.com>,
 Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
 Lai Jiangshan <jiangshanlai@...il.com>, Zqiang <qiang.zhang@...ux.dev>,
 Uladzislau Rezki <urezki@...il.com>, rcu@...r.kernel.org
Subject: Re: [PATCH RFC 00/14] rcu: Reduce rnp->lock contention with per-CPU
 blocked task lists



On 1/6/2026 3:35 PM, Paul E. McKenney wrote:
>>>> About the deferred-preemption, I believe Steven Rostedt at one point was looking
>>>> at that for VMs, but that effort stalled as Peter is concerned that doing so
>>>> would mess up the scheduler. The idea (AFAIU) is to use the rseq page to
>>>> communicate locking information between vCPU threads and the host and then let
>>>> the host avoid vCPU preemption - but the scheduler needs to do something with
>>>> that information. Otherwise, it's no use.
>>> Has deferred preemption for userspace locking also stalled?  If not,
>>> then the scheduler's support for userspace should apply directly to
>>> guest OSes, right?
>> No, the user space deferred preemption is still moving along nicely (I
>> believe Thomas has completed most of it). The issue here is that the
>> deferral happens before going back to user space. That's a different
>> location than going back to the guest. The logic needs to be in that path
>> too.
>
> OK, got it, thank you!

There's also the challenge of sharing the locking information with the guest
even when there is *no contention*: KVM is unaware of lock critical sections
in the VM-exit path. Then, after that, there is wiring it up with the deferred
preemption infra and moving beyond the 50-microsecond limit. If we VM-exited
and then made a decision, I think we are easily going to blow past 50
microseconds anyway.
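
Just to make the shape of the problem concrete, here is a rough userspace-only
sketch of the kind of handshake being discussed. The shared structure and all
the names are made up for illustration; this is not the actual rseq or KVM
interface. The "guest" thread publishes a lock-held hint before its critical
section and the "host" thread consults it before deciding to preempt:

/*
 * Hypothetical illustration only -- not the real rseq/KVM plumbing.
 * A "guest" thread publishes a lock-held hint in a shared structure,
 * and a "host" thread checks the hint before deciding to preempt,
 * deferring the preemption when a critical section is in progress.
 *
 * Build:  gcc -O2 -pthread lock-hint-sketch.c -o lock-hint-sketch
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for a page shared between guest and host (e.g. via rseq). */
struct lock_hint_page {
	atomic_int in_critical_section; /* guest: set while a lock is held */
	atomic_int preempt_deferred;    /* host: preemption was held off   */
};

static struct lock_hint_page hint;
static atomic_int stop;

/* "Guest" side: bracket the critical section with hint updates. */
static void *guest_thread(void *arg)
{
	(void)arg;
	while (!atomic_load(&stop)) {
		atomic_store(&hint.in_critical_section, 1);
		usleep(10);             /* lock-protected work */
		atomic_store(&hint.in_critical_section, 0);

		/* If the host deferred us, yield at this safe point. */
		if (atomic_exchange(&hint.preempt_deferred, 0))
			sched_yield();
		usleep(10);
	}
	return NULL;
}

/* "Host" side: before preempting, check the hint and possibly defer. */
static void *host_thread(void *arg)
{
	int deferred = 0, preempted = 0;

	(void)arg;
	for (int i = 0; i < 1000; i++) {
		if (atomic_load(&hint.in_critical_section)) {
			atomic_store(&hint.preempt_deferred, 1);
			deferred++;     /* let the guest finish first */
		} else {
			preempted++;    /* safe point: would preempt here */
		}
		usleep(20);
	}
	printf("deferred=%d preempted=%d\n", deferred, preempted);
	atomic_store(&stop, 1);
	return NULL;
}

int main(void)
{
	pthread_t g, h;

	pthread_create(&g, NULL, guest_thread, NULL);
	pthread_create(&h, NULL, host_thread, NULL);
	pthread_join(h, NULL);
	pthread_join(g, NULL);
	return 0;
}

The real thing of course has to do this across the VM boundary and within the
latency budget mentioned above, which is exactly where it gets hard.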

But again, to clarify, I didn't mean to use vCPU preemption as the driving
use case for this; I just ran into it when I wrote a benchmark to see how RCU
behaves in a VM.

 - Joel

