Message-ID: <SN6PR2101MB11202EAE0D5C623EC0CF8273DC5B0@SN6PR2101MB1120.namprd21.prod.outlook.com>
Date: Tue, 10 Jul 2018 21:29:24 +0000
From: "Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>
To: Tianyu Lan <Tianyu.Lan@...rosoft.com>
CC: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"vkuznets@...hat.com" <vkuznets@...hat.com>
Subject: RE: [PATCH V2 1/5] X86/Hyper-V: Add flush
HvFlushGuestPhysicalAddressSpace hypercall support
From: Tianyu Lan <Tianyu.Lan@...rosoft.com> Sent: Monday, July 9, 2018 2:03 AM
> Hyper-V provides a paravirtualized hypercall, HvFlushGuestPhysicalAddressSpace,
> to flush the nested VM address space mapping in the L1 hypervisor. It reduces
> the overhead of flushing the EPT TLB across vCPUs. This patch implements it.
>
> Signed-off-by: Lan Tianyu <Tianyu.Lan@...rosoft.com>
> ---
> arch/x86/hyperv/Makefile | 2 +-
> arch/x86/hyperv/nested.c | 64 ++++++++++++++++++++++++++++++++++++++
> arch/x86/include/asm/hyperv-tlfs.h | 8 +++++
> arch/x86/include/asm/mshyperv.h | 2 ++
> 4 files changed, 75 insertions(+), 1 deletion(-)
> create mode 100644 arch/x86/hyperv/nested.c
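
[The hyperv-tlfs.h hunk is not quoted below; presumably it adds the hypercall
code and the 16-byte input layout for this call, roughly along these lines
(a sketch, not the literal hunk):

	#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af

	/* Input for HvFlushGuestPhysicalAddressSpace: 16 bytes total,
	 * the target address space (EPT pointer) plus a flags word.
	 */
	struct hv_guest_mapping_flush {
		u64 address_space;
		u64 flags;
	} __packed;
]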
> +#include <linux/types.h>
> +#include <asm/hyperv-tlfs.h>
> +#include <asm/mshyperv.h>
> +#include <asm/tlbflush.h>
> +
> +int hyperv_flush_guest_mapping(u64 as)
> +{
> + struct hv_guest_mapping_flush **flush_pcpu;
> + struct hv_guest_mapping_flush *flush;
> + u64 status;
> + unsigned long flags;
> + int ret = -EFAULT;
> +
> + if (!hv_hypercall_pg)
> + goto fault;
> +
> + local_irq_save(flags);
> +
> + flush_pcpu = (struct hv_guest_mapping_flush **)
> + this_cpu_ptr(hyperv_pcpu_input_arg);
> +
> + flush = *flush_pcpu;
> +
> + if (unlikely(!flush)) {
> + local_irq_restore(flags);
> + goto fault;
> + }
> +
> + flush->address_space = as;
> + flush->flags = 0;
> +
> + status = hv_do_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE,
> + flush, NULL);
Did you consider using a "fast" hypercall? Unless there's some reason I'm
not aware of, a "fast" hypercall would be perfect here as there are 16 bytes
of input and no output. Vitaly recently added hv_do_fast_hypercall16()
in the linux-next tree. See __send_ipi_mask() in hv_apic.c in linux-next
for an example of usage. With a fast hypercall, you don't need the code for
getting the per-cpu input arg or the code for local irq save/restore, so the
code that is left is a lot faster and simpler.
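
For illustration, a fast-hypercall version might look roughly like this. This
is only a sketch, assuming hv_do_fast_hypercall16() takes the hypercall code
plus the two u64 input words, as in linux-next:

	int hyperv_flush_guest_mapping(u64 as)
	{
		u64 status;

		if (!hv_hypercall_pg)
			return -EFAULT;

		/* 16 bytes of input: address_space in the first u64,
		 * flags (0) in the second. Both are passed in registers,
		 * so no per-cpu input page and no irq save/restore.
		 */
		status = hv_do_fast_hypercall16(
				HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE,
				as, 0);

		return (status & HV_HYPERCALL_RESULT_MASK) ? -EFAULT : 0;
	}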
Michael
> + local_irq_restore(flags);
> +
> + if (!(status & HV_HYPERCALL_RESULT_MASK))
> + ret = 0;
> +
> +fault:
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
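
[A later patch in this series presumably consumes this export from KVM.
Schematically, with illustrative names that are not from this patch:

	/* 'eptp' identifies the nested (L2) address space to invalidate. */
	if (hyperv_flush_guest_mapping(eptp))
		pr_warn("HvFlushGuestPhysicalAddressSpace hypercall failed\n");
]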