Message-ID: <ZjU2VxZe3A9_Y7Yf@LeoBras>
Date: Fri,  3 May 2024 16:09:11 -0300
From: Leonardo Bras <leobras@...hat.com>
To: Leonardo Bras <leobras@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
	Paolo Bonzini <pbonzini@...hat.com>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Frederic Weisbecker <frederic@...nel.org>,
	Neeraj Upadhyay <quic_neeraju@...cinc.com>,
	Joel Fernandes <joel@...lfernandes.org>,
	Josh Triplett <josh@...htriplett.org>,
	Boqun Feng <boqun.feng@...il.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
	Lai Jiangshan <jiangshanlai@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org,
	rcu@...r.kernel.org
Subject: Re: [RFC PATCH v1 0/2] Avoid rcu_core() if CPU just left guest vcpu

On Fri, May 03, 2024 at 03:42:38PM -0300, Leonardo Bras wrote:
> Hello Sean, Marcelo and Paul,
> 
> Thank you for your comments on this thread!
> I will try to reply to some of the questions below:
> 
> (Sorry for the delay, I was OOO for a while.)
> 
> 
> On Mon, Apr 01, 2024 at 01:21:25PM -0700, Sean Christopherson wrote:
> > On Thu, Mar 28, 2024, Leonardo Bras wrote:
> > > I am dealing with a latency issue inside a KVM guest, which is caused by
> > > a sched_switch to rcuc[1].
> > > 
> > > During guest entry, kernel code will signal to RCU that the current CPU
> > > was in a quiescent state, making sure no other CPU is waiting for this one.
> > > 
> > > If a vcpu just stopped running (guest_exit), and a synchronize_rcu() was
> > > issued somewhere since guest entry, there is a chance a timer interrupt
> > > will happen on that CPU, which will cause rcu_sched_clock_irq() to run.
> > > 
> > > rcu_sched_clock_irq() will check rcu_pending(), which will return true,
> > > and cause invoke_rcu_core() to be called, which will (in the current
> > > config) cause rcuc/N to be scheduled onto the current cpu.
> > > 
> > > On rcu_pending(), I noticed we can avoid returning true (and thus invoking
> > > rcu_core()) if the current cpu is nohz_full, and the cpu came from either
> > > idle or userspace, since both are considered quiescent states.
> > > 
> > > Since this is also true for guest context, my idea is to solve this latency
> > > issue by avoiding rcu_core() invocation if the CPU was running a guest vcpu.
> > > 
> > > On the other hand, I could not find a way of reliably saying the current
> > > cpu was running a guest vcpu, so patch #1 implements a per-cpu variable
> > > for keeping the time (jiffies) of the last guest exit.
> > > 
> > > In patch #2 I compare the current time to that time, and if less than a
> > > second has passed, we just skip rcu_core() invocation, since there is a
> > > high chance it will just go back to the guest in a moment.
> > 
> > What's the downside if there's a false positive?
> 
> The false positive being a guest_exit without going back into the guest on
> this CPU, right?
> If so, in the worst case, supposing no quiescent state happens and there is a
> pending request, RCU will take a whole second to run again, possibly making
> other CPUs wait this long for a synchronize_rcu().

Just to make sure it's clear:
It will wait at most 1 second, if the grace period was requested just
before the last_guest_exit update. It will never make the grace period
longer than the already defined 1 second.

That's because in the timer interrupt we have:

	if (rcu_pending())
		invoke_rcu_core();

and on rcu_pending():

	if ((user || rcu_is_cpu_rrupt_from_idle() || rcu_recent_guest_exit()) &&
	    rcu_nohz_full_cpu())
		return 0;

Meaning that even if we allowed 5 seconds after the last guest_exit, it would
only make rcu_nohz_full_cpu() run, and that still checks whether the grace
period is younger than 1 second before skipping the rcu_core() invocation.
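
For reference, roughly what the two patches do (a simplified sketch based on
the description above; apart from rcu_recent_guest_exit() and last_guest_exit,
the names are made up here and the actual patch code may differ):

	/* patch #1 (sketch): per-cpu timestamp updated on every guest_exit() */
	DEFINE_PER_CPU(unsigned long, last_guest_exit);

	static inline void update_last_guest_exit(void)	/* name made up */
	{
		__this_cpu_write(last_guest_exit, jiffies);
	}

	/* patch #2 (sketch): "recent" means less than 1 second (HZ jiffies) ago */
	static bool rcu_recent_guest_exit(void)
	{
		return time_before(jiffies,
				   __this_cpu_read(last_guest_exit) + HZ);
	}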



> 
> This value (1 second) could be defined in .config or as a parameter if
> needed, but it does not seem like a big deal.
> 
> > 
> > > What I know is weird with this patch:
> > > 1 - Not sure if this is the best way of finding out if the cpu was
> > >     running a guest recently.
> > > 
> > > 2 - This per-cpu variable needs to get set at each guest_exit(), so it's
> > >     overhead, even though it's supposed to be in local cache. If that's
> > >     an issue, I would suggest having this part compiled out on
> > >     !CONFIG_NO_HZ_FULL, but further checking each cpu for being nohz_full
> > >     enabled seems more expensive than just doing the write unconditionally.
> > 
> > A per-CPU write isn't problematic, but I suspect reading jiffies will be quite
> > imprecise, e.g. it'll be a full tick "behind" on many exits.
> 
> That would not be a problem, as it would mean 1 tick less waiting in the
> false-positive worst case, and the 1s amount is plenty.

s/less/more/

> 
> > 
> > > 3 - It checks if the guest exit happened more than 1 second ago. This 1
> > >     second value was copied from rcu_nohz_full_cpu(), which checks if the
> > >     grace period started more than a second ago. If this value is bad,
> > >     I have no issue changing it.
> > 
> > IMO, checking if a CPU "recently" ran a KVM vCPU is a suboptimal heuristic regardless
> > of what magic time threshold is used.  IIUC, what you want is a way to detect if
> > a CPU is likely to _run_ a KVM vCPU in the near future.
> 
> That's correct!
> 
> >  KVM can provide that
> > information with much better precision, e.g. KVM knows when it's in the core
> > vCPU run loop.
> 
> That would not be enough.
> I need to present the application/problem to make a point:
> 
> - There are multiple isolated physical CPUs (nohz_full) on which we want to
>   run KVM_RT vcpus, which will be running a real-time (low latency) task.
> - This task should not miss deadlines (RT), so we test the VM to make sure
>   the maximum latency on a long run does not exceed the latency requirement.
> - This vcpu will run on SCHED_FIFO, but has to run at a lower priority than
>   rcuc, so we can avoid stalling other cpus.
> - There may be some scenarios where the vcpu will go back to userspace
>   (from the KVM_RUN ioctl), and that does not mean it's a good time to
>   interrupt it to run other stuff (like rcuc).
> 
> Now, I understand it would cover most of our issues if we had context
> tracking around the vcpu_run loop, since we could use that to decide not to
> run rcuc on the cpu if the interruption happened inside the loop.
> 
> But IIUC we can have a thread that "just got out of the loop" getting 
> interrupted by the timer, and asked to run rcu_core which will be bad for 
> latency.
> 
> I understand that the chance may be statistically low, but happening once 
> may be enough to crush the latency numbers.
> 
> Now, I can't think of a place to put these context trackers in kvm code that
> would avoid the chance of rcuc running improperly; that's why I suggested the
> timeout, even though it's ugly.
> 
> About the false positives, IIUC we could reduce them if we reset the per-cpu
> last_guest_exit on kvm_put.
> 
> > 
> > > 4 - Even though I could detect no issue, I included linux/kvm_host.h into 
> > >     rcu/tree_plugin.h, which is the first time it's getting included
> > >     outside of kvm or arch code, and can be weird.
> > 
> > Heh, kvm_host.h isn't included outside of KVM because several architectures can
> > build KVM as a module, which means referencing global KVM variables from the kernel
> > proper won't work.
> > 
> > >     An alternative would be to create a new header for providing data for
> > >     non-kvm code.
> > 
> > I doubt a new .h or .c file is needed just for this, there's gotta be a decent
> > landing spot for a one-off variable.
> 
> You are probably right
> 
> >  E.g. I wouldn't be at all surprised if there
> > is additional usefulness in knowing if a CPU is in KVM's core run loop and thus
> > likely to do a VM-Enter in the near future, at which point you could probably make
> > a good argument for adding a flag in "struct context_tracking".  Even without a
> > separate use case, there's a good argument for adding that info to context_tracking.
> 
> For the tracking solution, makes sense :)
> Not sure if the 'timeout' alternative will be that useful outside rcu.
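
Just to make the idea concrete, a purely hypothetical sketch of such a flag
(the field and helper below don't exist today; the names are made up here for
illustration only):

	/* hypothetical extra field in struct context_tracking */
	struct context_tracking {
		/* ... existing fields ... */
		bool	in_guest_run_loop;	/* set by KVM around vcpu_run */
	};

	/* KVM side: set/clear around the core vCPU run loop (made-up helper) */
	static inline void ct_set_in_guest_run_loop(bool val)
	{
		__this_cpu_write(context_tracking.in_guest_run_loop, val);
	}

	/* RCU side: rcu_pending() could then check the flag instead of a timeout */
	if ((user || rcu_is_cpu_rrupt_from_idle() ||
	     __this_cpu_read(context_tracking.in_guest_run_loop)) &&
	    rcu_nohz_full_cpu())
		return 0;
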
> 
> Thanks!
> Leo

