Date: Mon, 13 May 2024 16:14:03 -0300
From: Marcelo Tosatti <mtosatti@...hat.com>
To: Leonardo Bras <leobras@...hat.com>
Cc: Frederic Weisbecker <frederic@...nel.org>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Paolo Bonzini <pbonzini@...hat.com>,
	Sean Christopherson <seanjc@...gle.com>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC PATCH 1/1] kvm: Note an RCU quiescent state on guest exit

On Mon, May 13, 2024 at 12:14:32AM -0300, Leonardo Bras wrote:
> On Sun, May 12, 2024 at 06:44:23PM -0300, Marcelo Tosatti wrote:
> > On Fri, May 10, 2024 at 11:05:56PM -0300, Leonardo Bras wrote:
> > > As of today, KVM notes a quiescent state only on guest entry, which is
> > > good because it avoids interrupting the guest for ongoing RCU operations.
> > > 
> > > While the guest vcpu runs, it can be interrupted by a timer IRQ that
> > > checks for any RCU operations waiting on this CPU. If there are any, it
> > > invokes rcu_core(), which schedules out the current thread in order to
> > > note a quiescent state.
> > > 
> > > This occasional scheduler work introduces tens of microseconds of
> > > latency, which is really bad for vcpus running latency-sensitive
> > > applications, such as real-time workloads.
> > > 
> > > So, note a quiescent state on guest exit as well, so the interrupted
> > > guest is able to deal with any pending RCU operations before being
> > > required to invoke rcu_core(), and thus avoid the overhead of the
> > > related scheduler work.
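> > > 
> > > For illustration, a minimal sketch of the idea (not the actual diff:
> > > the hook name and its placement are made up, and
> > > rcu_momentary_dyntick_idle() is just one existing primitive that
> > > reports a momentary quiescent state):
> > > 
> > > 	/* Called on the VM-exit path, with IRQs still disabled. */
> > > 	static inline void kvm_note_qs_on_guest_exit(void)
> > > 	{
> > > 		/*
> > > 		 * Report a quiescent state now, so that a grace period
> > > 		 * started while the vcpu ran in guest mode does not force
> > > 		 * a later rcu_core() invocation on this CPU.
> > > 		 */
> > > 		rcu_momentary_dyntick_idle();
> > > 	}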
> > 
> > This does not properly fix the current problem, as RCU work might be
> > scheduled after the VM exit, followed by a timer interrupt.
> > 
> > Correct?
> 
> Correct. For this case, see the note below:
> 
> > 
> > > 
> > > Signed-off-by: Leonardo Bras <leobras@...hat.com>
> > > ---
> > > 
> > > ps: A patch fixing this same issue was discussed in this thread:
> > > https://lore.kernel.org/all/20240328171949.743211-1-leobras@redhat.com/
> > > 
> > > Also, this can be paired with a new RCU option (rcutree.nocb_patience_delay)
> > > to avoid having invoke_rcu() being called on grace-periods starting between
> > > guest exit and the timer IRQ. This RCU option is being discussed in a
> > > sub-thread of this message:
> > > https://lore.kernel.org/all/5fd66909-1250-4a91-aa71-93cb36ed4ad5@paulmck-laptop/
> 
> ^ This one above.
> The idea is to use rcutree.nocb_patience_delay=N :
> a new option we added to RCU that allows us to avoid invoking rcu_core()
> if the grace period is younger than N milliseconds. This only works on
> nohz_full cpus.
> 
> So with both the current patch and the one in the link above, we get the
> same effect we previously had with last_guest_exit, with a cherry on top:
> we can avoid rcu_core() getting called when a grace period starts just
> after we enter kernel code, and a timer interrupt arrives before we can
> report a quiescent state again.
> 
> For our nohz_full vcpu thread scenario, we have:
> 
> - guest_exit notes a quiescent state
> - let's say a grace period starts in the next cycle
> - if a timer interrupt arrives, rcu_core() is invoked only if the grace
>   period is older than N milliseconds (see the sketch below)
>   - if we configure a proper value for the patience, the end of patience
>     is never reached before we pass through guest_entry again, and thus
>     note a quiescent state
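> 
> To make the timer-interrupt step concrete, a sketch of the decision
> (illustrative only, not the actual rcu_pending() code; the helper name
> and the gp_start_jiffies parameter are made up):
> 
> 	/* Should the timer tick invoke rcu_core() on this CPU? */
> 	static bool want_rcu_core(unsigned long gp_start_jiffies)
> 	{
> 		unsigned long patience = msecs_to_jiffies(nocb_patience_delay);
> 
> 		/*
> 		 * On a nohz_full CPU, be patient while the current grace
> 		 * period is young: guest entry/exit is expected to report
> 		 * the quiescent state before the patience runs out.
> 		 */
> 		if (tick_nohz_full_cpu(smp_processor_id()) &&
> 		    time_before(jiffies, gp_start_jiffies + patience))
> 			return false;
> 
> 		return true;
> 	}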
> 
> What do you think?

I don't fully understand all of the RCU details, but since the RCU
quiescent state is noted in an IRQ-disabled section, there is no chance
for a timer interrupt to race with the marking of the quiescent state.

So it seems to make sense to me.
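
To illustrate the ordering (a sketch, not actual KVM code; both function
names below are made up):

	static void vcpu_exit_to_kernel(void)
	{
		lockdep_assert_irqs_disabled();

		/*
		 * The quiescent state is noted while IRQs are still off, so
		 * the timer tick cannot fire between the VM exit and the
		 * report: once IRQs are re-enabled below, the tick already
		 * sees this CPU past its quiescent state.
		 */
		note_quiescent_state();	/* made-up name for the QS report */

		local_irq_enable();
	}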

