Message-ID: <AF395934-3758-4A1C-A47A-C51F01A83A8A@nutanix.com>
Date: Thu, 1 Dec 2022 14:57:30 +0000
From: Jon Kohler <jon@...anix.com>
To: Chao Gao <chao.gao@...el.com>
CC: Sean Christopherson <seanjc@...gle.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
X86 ML <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
"kvm@vger.kernel.org" <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] KVM: X86: set EXITING_GUEST_MODE as soon as vCPU exits
> On Nov 30, 2022, at 11:55 PM, Chao Gao <chao.gao@...el.com> wrote:
>
> On Wed, Nov 30, 2022 at 02:07:57PM +0000, Jon Kohler wrote:
>>
>>
>>> On Nov 30, 2022, at 1:29 AM, Chao Gao <chao.gao@...el.com> wrote:
>>>
>>
>> Chao, while I’ve got you here: I was inspired to tune up the software side based
>> on the VTD suppress-notifications change we had been talking about. Any chance
>> we could get the v4 of that? Seemed like it was almost done, yeah? Would love to
>
> I didn't post a new version because there is no feedback on v3. But
> considering there is a mistake in v3, I will fix it and post v4.
OK, thanks! Looking forward to that. Between that patch and this one, the combined
impact should be great. Any chance you could apply my patch and yours together and see
how they work? I’d imagine it isn’t as applicable with IPI virtualization, but it’d still be
interesting to see how the numbers from your benchmark work out with/without IPIv, to
see if your test sees a speedup here too.
>
>> get our hands on that to help accelerate the VTD path.
>>
>>
>>> On Tue, Nov 29, 2022 at 01:22:25PM -0500, Jon Kohler wrote:
>>>> @@ -7031,6 +7042,18 @@ void noinstr vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
>>>> void noinstr vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
>>>> unsigned int flags)
>>>> {
>>>> + struct kvm_vcpu *vcpu = &vmx->vcpu;
>>>> +
>>>> + /* Optimize IPI reduction by setting mode immediately after vmexit,
>>>> + * without a memory barrier, as this is not paired anywhere. vcpu->mode
>>>> + * will be set to OUTSIDE_GUEST_MODE in x86 common code with a memory
>>>> + * barrier, after the host is done fully restoring various host states.
>>>> + * Since the rdmsr and wrmsr below are expensive, this must be done
>>>> + * first, so that the IPI suppression window covers the time dealing
>>>> + * with fixing up SPEC_CTRL.
>>>> + */
>>>> + vcpu->mode = EXITING_GUEST_MODE;
>>>
>>> Does this break kvm_vcpu_kick()? IIUC, kvm_vcpu_kick() does nothing if
>>> vcpu->mode is already EXITING_GUEST_MODE, expecting the vCPU will exit
>>> guest mode. But ...
>>
>> IIRC that’d only be a problem for fast path exits that re-enter the guest (like TSC
>> deadline); everything else *will* eventually exit out to kernel mode to pick up whatever
>> other requests may be pending. In this sense, this patch is actually even better for kick,
>> because we will send incrementally fewer spurious kicks.
>
> Yes. I agree.
>
>>
>> Even then, for fast path re-entry exits, a guest is likely to exit all the way out eventually
>> for something else soon enough, so worst case something gets a wee bit more delayed
>> than it should. Small price to pay for clawing back cycles on the IPI send side, I think.
>
> Thanks for the above clarification. On second thought, for the fastpath, there is a
> call to kvm_vcpu_exit_request() before re-entry. This call guarantees that
> vCPUs will exit guest mode if any request is pending. So, this change actually
> won't lead to a delay in handling pending events.
OK, thanks. I know this tends to be a slow(er) week in the US, coming back from the
holidays, so I will wait for additional review/comments here.