Message-ID: <68c39823-6b1d-4368-bd1e-a521ade8889b@paulmck-laptop>
Date: Tue, 14 May 2024 15:54:16 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Leonardo Bras Soares Passos <leobras@...hat.com>
Cc: Sean Christopherson <seanjc@...gle.com>,
Frederic Weisbecker <frederic@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [RFC PATCH 1/1] kvm: Note an RCU quiescent state on guest exit
On Mon, May 13, 2024 at 06:47:13PM -0300, Leonardo Bras Soares Passos wrote:
> On Mon, May 13, 2024 at 4:40 PM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Fri, May 10, 2024, Leonardo Bras wrote:
> > > As of today, KVM notes a quiescent state only on guest entry, which is good
> > > as it avoids the guest being interrupted on behalf of in-flight RCU operations.
> > >
> > > While the guest vCPU runs, it can be interrupted by a timer IRQ that will
> > > check for any RCU operations waiting for this CPU. If there are any, it
> > > invokes rcu_core() in order to sched-out the current thread and note a
> > > quiescent state.
> > >
> > > This occasional scheduler work introduces tens of microseconds of
> > > latency, which is really bad for vCPUs running latency-sensitive
> > > applications, such as real-time workloads.
> > >
> > > So, note a quiescent state on guest exit as well, so the interrupted guest
> > > is able to deal with any pending RCU operations before being required to
> > > invoke rcu_core(), and thus avoid the overhead of the related scheduler work.
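To make the proposal concrete, a minimal sketch of the idea (illustrative
only, not the actual patch; the placement in guest_context_exit_irqoff()
and the exact helper signatures are from memory and should be checked
against the tree):

        static __always_inline void guest_context_exit_irqoff(void)
        {
                context_tracking_guest_exit();

                /*
                 * Sketch: mirror guest entry and note a quiescent state on
                 * the way out as well, so that a pending grace period can
                 * be reported right here instead of via rcu_core() later.
                 * Whether this needs to be skipped when context tracking
                 * already did the work (as on the entry side) is a detail
                 * the real patch has to get right.
                 */
                instrumentation_begin();
                rcu_virt_note_context_switch();
                instrumentation_end();
        }
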
> >
> > Are there any downsides to this? E.g. extra latency or anything? KVM will note
> > a context switch on the next VM-Enter, so even if there is extra latency or
> > something, KVM will eventually take the hit in the common case no matter what.
> > But I know some setups are sensitive to handling select VM-Exits as soon as possible.
> >
> > I ask mainly because it seems like a no-brainer to me to have both VM-Entry and
> > VM-Exit note the context switch, which raises the question of why KVM isn't already
> > doing that. I assume it was just an oversight when commit 126a6a542446 ("kvm,rcu,nohz:
> > use RCU extended quiescent state when running KVM guest") handled the VM-Entry
> > case?
>
> I don't know; from the lore archives I see it happening on guest entry
> only, ever since it was first introduced at
> https://lore.kernel.org/all/1423167832-17609-5-git-send-email-riel@redhat.com/
>
> Noting a quiescent state is cheap, but it may cost a few accesses to
> possibly non-local cachelines. (I'm not an expert in this; Paul, please
> let me know if I got it wrong.)
Yes, it is cheap, especially if interrupts are already disabled.
(As in, the scheduler asks RCU to do the same amount of work on its
context-switch fastpath.)
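For reference, the virtualization hook is (roughly, paraphrasing from
memory rather than quoting include/linux/rcutree.h) just a thin wrapper
around the hook the scheduler already uses:

        static inline void rcu_virt_note_context_switch(void)
        {
                /* The same hook the scheduler uses on its fastpath. */
                rcu_note_context_switch(false);
        }
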
> I don't have the historical context on why it was implemented only on
> guest entry, but when latency is not a concern the entry-only approach
> makes sense:
> - It saves the overhead of calling rcu_virt_note_context_switch()
> twice per entry/exit cycle of the run loop
> - KVM will probably re-enter the guest soon after guest exit (in the
> run loop), so there is no need to note it twice
> - Occasionally running rcu_core() may be cheaper than noting a
> quiescent state on every guest entry/exit cycle
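For context, the loop being discussed is roughly the following (heavily
simplified; the names are illustrative, the real code uses
guest_enter_irqoff()/guest_exit_irqoff() plus arch-specific exit handling):

        /* Not real code: simplified picture of the vcpu run loop. */
        for (;;) {
                guest_enter();          /* notes a quiescent state today */
                run_guest(vcpu);        /* VM-Enter ... VM-Exit */
                guest_exit();           /* the proposal: also note one here */

                if (handle_exit(vcpu) <= 0)     /* hypothetical convention */
                        break;                  /* exit to userspace */
        }
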
>
> Upsides of the new strategy:
> - Noting a quiescent state on guest exit avoids calling rcu_core() if
> a grace period request arrived while the guest was running and the
> timer interrupt then hits the CPU.
> - If the loop re-enters quickly, there is a high chance that guest
> entry's rcu_virt_note_context_switch() will be fast (local cacheline),
> as there is a low probability of a grace period request happening
> between exit & re-entry.
> - It allows us to use the RCU patience strategy to avoid running
> rcu_core() if a grace period request happens between guest exit and
> guest re-entry, which is very important for low-latency workloads
> running in guests, as it reduces the maximum latency over long runs.
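For concreteness, one way such a patience check could look (entirely
illustrative: rcu_patience_expired() and rcu_patience_delay_jiffies are
hypothetical names, while rcu_nohz_full_cpu(), rcu_state.gp_start, and
rcu_pending() do exist):

        /* Hypothetical sketch, not existing kernel code. */
        static bool rcu_patience_expired(void)
        {
                unsigned long gp_start = READ_ONCE(rcu_state.gp_start);

                return time_after(jiffies,
                                  gp_start + rcu_patience_delay_jiffies);
        }

        /*
         * In rcu_pending(): on a nohz_full CPU, hold off on invoking
         * rcu_core() until the patience window expires, giving a guest
         * entry/exit the chance to report the quiescent state for free.
         */
        if (rcu_nohz_full_cpu() && !rcu_patience_expired())
                return 0;
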
>
> What do you think?
Try both on the workload of interest with appropriate tracing and
see what happens? The hardware's opinion overrides mine. ;-)
Thanx, Paul