Message-Id: <18288874-e778-4f7f-2fd5-f03624717e9c@de.ibm.com>
Date: Wed, 11 Jul 2018 23:39:10 +0200
From: Christian Borntraeger <borntraeger@...ibm.com>
To: paulmck@...ux.vnet.ibm.com
Cc: David Woodhouse <dwmw2@...radead.org>, peterz@...radead.org,
mhillenb@...zon.de, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Subject: Re: [PATCH v2] kvm/x86: Inform RCU of quiescent state when entering
guest mode
On 07/11/2018 11:32 PM, Paul E. McKenney wrote:
> On Wed, Jul 11, 2018 at 11:11:19PM +0200, Christian Borntraeger wrote:
>>
>>
>> On 07/11/2018 10:27 PM, Paul E. McKenney wrote:
>>> On Wed, Jul 11, 2018 at 08:39:36PM +0200, Christian Borntraeger wrote:
>>>>
>>>>
>>>> On 07/11/2018 08:36 PM, Paul E. McKenney wrote:
>>>>> On Wed, Jul 11, 2018 at 11:20:53AM -0700, Paul E. McKenney wrote:
>>>>>> On Wed, Jul 11, 2018 at 07:01:01PM +0100, David Woodhouse wrote:
>>>>>>> From: David Woodhouse <dwmw@...zon.co.uk>
>>>>>>>
>>>>>>> RCU can spend long periods of time waiting for a CPU which is actually in
>>>>>>> KVM guest mode, entirely pointlessly. Treat it like the idle and userspace
>>>>>>> modes, and don't wait for it.
>>>>>>>
>>>>>>> Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
>>>>>>
>>>>>> And idiot here forgot about some of the debugging code in RCU's dyntick-idle
>>>>>> code. I will reply with a fixed patch.
>>>>>>
>>>>>> The code below works just fine as long as you don't enable CONFIG_RCU_EQS_DEBUG,
>>>>>> so should be OK for testing, just not for mainline.
>>>>>
>>>>> And here is the updated code that allegedly avoids splatting when run with
>>>>> CONFIG_RCU_EQS_DEBUG.
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Thanx, Paul
>>>>>
>>>>> ------------------------------------------------------------------------
>>>>>
>>>>> commit 12cd59e49cf734f907f44b696e2c6e4b46a291c3
>>>>> Author: David Woodhouse <dwmw@...zon.co.uk>
>>>>> Date: Wed Jul 11 19:01:01 2018 +0100
>>>>>
>>>>> kvm/x86: Inform RCU of quiescent state when entering guest mode
>>>>>
>>>>> RCU can spend long periods of time waiting for a CPU which is actually in
>>>>> KVM guest mode, entirely pointlessly. Treat it like the idle and userspace
>>>>> modes, and don't wait for it.
>>>>>
>>>>> Signed-off-by: David Woodhouse <dwmw@...zon.co.uk>
>>>>> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>>>>> [ paulmck: Adjust to avoid bad advice I gave to dwmw, avoid WARN_ON()s. ]
>>>>>
>>>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>>>> index 0046aa70205a..b0c82f70afa7 100644
>>>>> --- a/arch/x86/kvm/x86.c
>>>>> +++ b/arch/x86/kvm/x86.c
>>>>> @@ -7458,7 +7458,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>>>> vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
>>>>> }
>>>>>
>>>>> + rcu_kvm_enter();
>>>>> kvm_x86_ops->run(vcpu);
>>>>> + rcu_kvm_exit();
>>>>
>>>> As indicated in my other mail, this is supposed to be handled by the guest_enter|exit_ calls around
>>>> the run function, which would also cover the other architectures. So if the guest_enter_irqoff code is
>>>> not good enough, we should rather fix that instead of adding another RCU hint.
>>>
>>> Something like this, on top of the earlier patch? I am not at all
>>> confident of this patch because there might be other entry/exit
>>> paths I am missing. Plus there might be RCU uses on the arch-specific
>>> paths to and from the guest OS.
>>>
>>> Thoughts?
>>>
>>
>> If you instrument guest_enter/exit, you should cover all cases and all architectures as far
>> as I can tell. FWIW, we added this rcu_note call back then precisely to handle this particular
>> case of long-running guests blocking RCU for many seconds. And I am pretty sure that
>> it did help back then.
>
> And my second patch on the email you replied to replaced the only call
> to rcu_virt_note_context_switch(). So maybe it covers what it needs to,
> but yes, there might well be things I missed. Let's see what David
> comes up with.
>
> What changed was RCU's reactions to longish grace periods. It used to
> be very aggressive about forcing the scheduler to do otherwise-unneeded
> context switches, which became a problem somewhere between v4.9 and v4.15.
> I therefore reduced the number of such context switches, which in turn
> caused KVM to tell RCU about quiescent states way too infrequently.
You are talking about
commit bcbfdd01dce5556a952fae84ef16fd0f12525e7b
    rcu: Make non-preemptive schedule be Tasks RCU quiescent state
correct? In that case, whatever (properly sent) patch comes out of this should
carry a Fixes: tag.
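Something like this, assuming the usual 12-character SHA abbreviation (the
subject line is the one quoted above):

Fixes: bcbfdd01dce5 ("rcu: Make non-preemptive schedule be Tasks RCU quiescent state")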
>
> The advantage of the rcu_kvm_enter()/rcu_kvm_exit() approach is that
> it tells RCU of an extended duration in the guest, which means that
> RCU can ignore the corresponding CPU, which in turn allows the guest
> to proceed without any RCU-induced interruptions.
>
> Does that make sense, or am I missing something? I freely admit to
> much ignorance of both kvm and s390! ;-)
With that explanation it makes perfect sense to replace
rcu_virt_note_context_switch() with rcu_kvm_enter()/rcu_kvm_exit() from an RCU
performance perspective. I assume that rcu_kvm_enter() is not much slower than
rcu_virt_note_context_switch()? We do call it on every guest entry/exit, and
there can be plenty of those under a ping-pong I/O workload.
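For reference, my mental model of rcu_kvm_enter()/rcu_kvm_exit(), going by your
description above -- a rough sketch only, not your actual patch, and
rcu_eqs_enter()/rcu_eqs_exit() are RCU-internal, so the real code may well
differ:

void rcu_kvm_enter(void)
{
	unsigned long flags;

	/*
	 * Enter RCU's extended quiescent state, much as rcu_user_enter()
	 * does for nohz_full userspace: RCU then ignores this CPU for as
	 * long as the vCPU runs in guest mode.
	 */
	local_irq_save(flags);
	rcu_eqs_enter(true);
	local_irq_restore(flags);
}

void rcu_kvm_exit(void)
{
	unsigned long flags;

	/*
	 * Leave the extended quiescent state, so RCU is tracking this
	 * CPU again before any host-side RCU readers can run.
	 */
	local_irq_save(flags);
	rcu_eqs_exit(true);
	local_irq_restore(flags);
}

If it really is just an EQS enter/exit pair like that, the cost should be
comparable to the nohz_full user entry/exit path, but numbers for the
ping-pong I/O case would still be good to see.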