Message-ID: <c41ce53f-cf6a-2b0e-4a9c-da01839094c1@microsoft.com>
Date: Wed, 11 Jul 2018 06:01:16 +0000
From: Tianyu Lan <Tianyu.Lan@...rosoft.com>
To: "Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>,
Tianyu Lan <Tianyu.Lan@...rosoft.com>
CC: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"vkuznets@...hat.com" <vkuznets@...hat.com>
Subject: Re: [PATCH V2 1/5] X86/Hyper-V: Add flush
HvFlushGuestPhysicalAddressSpace hypercall support
Hi Michael:
Thanks for your review.
On 7/11/2018 5:29 AM, Michael Kelley (EOSG) wrote:
> From: Tianyu Lan <Tianyu.Lan@...rosoft.com> Monday, July 9, 2018 2:03 AM
>> Hyper-V supports the PV hypercall HvFlushGuestPhysicalAddressSpace,
>> which flushes the nested VM address space mappings in the L1
>> hypervisor and reduces the overhead of flushing the EPT TLB across
>> vCPUs. This patch implements it.
>>
>> Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
>> ---
>>  arch/x86/hyperv/Makefile           |  2 +-
>>  arch/x86/hyperv/nested.c           | 64 ++++++++++++++++++++++++++++++++++++++
>>  arch/x86/include/asm/hyperv-tlfs.h |  8 +++++
>>  arch/x86/include/asm/mshyperv.h    |  2 ++
>>  4 files changed, 75 insertions(+), 1 deletion(-)
>> create mode 100644 arch/x86/hyperv/nested.c
>> +#include <linux/types.h>
>> +#include <asm/hyperv-tlfs.h>
>> +#include <asm/mshyperv.h>
>> +#include <asm/tlbflush.h>
>> +
>> +int hyperv_flush_guest_mapping(u64 as)
>> +{
>> +        struct hv_guest_mapping_flush **flush_pcpu;
>> +        struct hv_guest_mapping_flush *flush;
>> +        u64 status;
>> +        unsigned long flags;
>> +        int ret = -EFAULT;
>> +
>> +        if (!hv_hypercall_pg)
>> +                goto fault;
>> +
>> +        local_irq_save(flags);
>> +
>> +        flush_pcpu = (struct hv_guest_mapping_flush **)
>> +                this_cpu_ptr(hyperv_pcpu_input_arg);
>> +
>> +        flush = *flush_pcpu;
>> +
>> +        if (unlikely(!flush)) {
>> +                local_irq_restore(flags);
>> +                goto fault;
>> +        }
>> +
>> +        flush->address_space = as;
>> +        flush->flags = 0;
>> +
>> +        status = hv_do_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE,
>> +                                 flush, NULL);
>
> Did you consider using a "fast" hypercall? Unless there's some reason I'm
> not aware of, a "fast" hypercall would be perfect here as there are 16 bytes
> of input and no output. Vitaly recently added hv_do_fast_hypercall16()
> in the linux-next tree. See __send_ipi_mask() in hv_apic.c in linux-next
> for an example of usage. With a fast hypercall, you don't need the code for
> getting the per-cpu input arg or the code for local irq save/restore, so the
> code that is left is a lot faster and simpler.
>
> Michael
>
Good suggestion. However, the "fast" hypercall helper is not yet
available in the kvm-next branch; it currently only exists in the x86
tip repo. We can rework this to use the "fast" hypercall in the next
kernel development cycle if this patchset is accepted for 4.19.
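
For reference, a rough sketch of what the fast-hypercall variant could
look like, assuming the hv_do_fast_hypercall16() signature Vitaly added
in the x86 tip tree, where the 16 bytes of input (address_space and
flags) are passed directly in registers:

int hyperv_flush_guest_mapping(u64 as)
{
        u64 status;

        if (!hv_hypercall_pg)
                return -EFAULT;

        /*
         * The 16 bytes of input (address_space, flags) travel in
         * registers, so no per-cpu input page and no irq
         * save/restore are needed.
         */
        status = hv_do_fast_hypercall16(
                        HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE,
                        as, 0 /* flags */);

        return (status & HV_HYPERCALL_RESULT_MASK) ? -EFAULT : 0;
}
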
>> +        local_irq_restore(flags);
>> +
>> +        if (!(status & HV_HYPERCALL_RESULT_MASK))
>> +                ret = 0;
>> +
>> +fault:
>> +        return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
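
Purely as an illustration (the caller below is hypothetical and not
part of this patch), a nested hypervisor component would hand the
exported helper the EPT pointer of the guest physical address space it
wants flushed:

static int example_flush_guest_eptp(u64 eptp)
{
        /*
         * Ask Hyper-V (the L0 hypervisor) to flush all TLB entries
         * for this guest physical address space in one hypercall;
         * on error the caller is expected to fall back to a
         * conventional per-vCPU flush.
         */
        int ret = hyperv_flush_guest_mapping(eptp);

        if (ret)
                pr_warn("HvFlushGuestPhysicalAddressSpace failed: %d\n", ret);

        return ret;
}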