Message-ID: <Zjqs5G_f2DCfhE62@LeoBras>
Date: Tue, 7 May 2024 19:36:20 -0300
From: Leonardo Bras <leobras@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Leonardo Bras <leobras@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
"Paul E. McKenney" <paulmck@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
Neeraj Upadhyay <quic_neeraju@...cinc.com>,
Joel Fernandes <joel@...lfernandes.org>,
Josh Triplett <josh@...htriplett.org>,
Boqun Feng <boqun.feng@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Lai Jiangshan <jiangshanlai@...il.com>,
Zqiang <qiang.zhang1211@...il.com>,
kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
rcu@...r.kernel.org
Subject: Re: [RFC PATCH v1 0/2] Avoid rcu_core() if CPU just left guest vcpu
On Tue, May 07, 2024 at 11:05:55AM -0700, Sean Christopherson wrote:
> On Mon, May 06, 2024, Marcelo Tosatti wrote:
> > On Fri, May 03, 2024 at 05:44:22PM -0300, Leonardo Bras wrote:
> > > > And that race exists in general, i.e. any IRQ that arrives just as the idle task
> > > > is being scheduled in will unnecessarily wakeup rcuc.
> > >
> > > That's a race could be solved with the timeout (snapshot) solution, if we
> > > don't zero last_guest_exit on kvm_sched_out(), right?
> >
> > Yes.
>
> And if KVM doesn't zero last_guest_exit on kvm_sched_out(), then we're right back
> in the situation where RCU can get false positives (see below).
>
> > > > > > > /* Is the RCU core waiting for a quiescent state from this CPU? */
> > > > > > >
> > > > > > > The problem is:
> > > > > > >
> > > > > > > 1) You should only set that flag, in the VM-entry path, after the point
> > > > > > > where no use of RCU is made: close to guest_state_enter_irqoff call.
> > > > > >
> > > > > > Why? As established above, KVM essentially has 1 second to enter the guest after
> > > > > > setting in_guest_run_loop (or whatever we call it). In the vast majority of cases,
> > > > > > the time before KVM enters the guest can probably be measured in microseconds.
> > > > >
> > > > > OK.
> > > > >
> > > > > > Snapshotting the exit time has the exact same problem of depending on KVM to
> > > > > > re-enter the guest soon-ish, so I don't understand why this would be considered
> > > > > > a problem with a flag to note the CPU is in KVM's run loop, but not with a
> > > > > > snapshot to say the CPU recently exited a KVM guest.
> > > > >
> > > > > See the race above.
> > > >
> > > > Ya, but if kvm_last_guest_exit is zeroed in kvm_sched_out(), then the snapshot
> > > > approach ends up with the same race. And not zeroing kvm_last_guest_exit is
> > > > arguably much more problematic as encountering a false positive doesn't require
> > > > hitting a small window.
> > >
> > > For the false positive (only on nohz_full) the maximum delay for the
> > > rcu_core() to be run would be 1s, and that would be in case we don't
> > > schedule out for some userspace task or idle thread, in which case we have
> > > a quiescent state without the need of rcu_core().
> > >
> > > Now, for not being an userspace nor idle thread, it would need to be one or
> > > more kernel threads, which I suppose aren't usually many, nor usually take
> > > that long for completing, if we consider to be running on an isolated
> > > (nohz_full) cpu.
> > >
> > > So, for the kvm_sched_out() case, I don't actually think we are
> > > statistically introducing that much of a delay in the RCU mechanism.
> > >
> > > (I may be missing some point, though)
>
> My point is that if kvm_last_guest_exit is left as-is on kvm_sched_out() and
> vcpu_put(), then from a kernel/RCU safety perspective there is no meaningful
> difference between KVM setting kvm_last_guest_exit and userspace being allowed
> to mark a task as being exempt from being preempted by rcuc. Userspace can
> simply do KVM_RUN once to gain exemption from rcuc until the 1 second timeout
> expires.
Oh, I see. Your concern is that a user could exploit this to purposely
slow down the RCU mechanism on nohz_full isolated CPUs. Is that it?
Even in this case, KVM_RUN would need to run every second, which would
cause a quiescent state every second and move other CPUs forward in RCU.
I don't get how this could be exploited. I mean, running idle tasks and
userspace tasks would already cause a quiescent state, making this useless
for that purpose. So the user would need to be willing to run kernel
threads in the meantime between KVM_RUNs, right?
Maybe this could be relevant in the scenario:
"I want the other users of this machine to experience slowdown in their
processes."
But that is possible to reproduce by actually running a busy VM on the
cpu anyway, even with the context_tracking solution, right?
I may have missed your point here. :/
Could you help me understand it, please?
Thanks!
Leo
>
> And if KVM does zero kvm_last_guest_exit on kvm_sched_out()/vcpu_put(), then the
> approach has the exact same window as my in_guest_run_loop idea, i.e. rcuc can be
> unnecessarily awakened in the time between KVM puts the vCPU and the CPU exits to
> userspace.
>