Message-ID: <CY4PR21MB077329ED0E195CDF557A09DEDC1B0@CY4PR21MB0773.namprd21.prod.outlook.com>
Date: Wed, 12 Sep 2018 00:22:24 +0000
From: "Michael Kelley (EOSG)" <Michael.H.Kelley@...rosoft.com>
To: Tianyu Lan <Tianyu.Lan@...rosoft.com>
CC: KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
"tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"pbonzini@...hat.com" <pbonzini@...hat.com>,
"rkrcmar@...hat.com" <rkrcmar@...hat.com>,
"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
vkuznets <vkuznets@...hat.com>,
Jork Loeser <Jork.Loeser@...rosoft.com>
Subject: RE: [PATCH 10/13] x86/hyper-v: Add HvFlushGuestAddressList hypercall support
From: Tianyu Lan Sent: Monday, September 10, 2018 1:39 AM
> +
> +int hyperv_flush_guest_mapping_range(u64 as, struct kvm_tlb_range *range)
I'm really concerned about defining the Hyper-V function to flush
guest mappings in terms of a KVM struct definition. Your patch puts
this function in arch/x86/hyperv/nested.c. I haven't investigated all
the details, but on its face this approach seems like it would cause
trouble in the long run, and it doesn't support the case of a
hypervisor other than KVM running at L1.
I know that KVM code has taken a dependency on Hyper-V types and
code, but that's because KVM is emulating a lot of Hyper-V functionality
and it's taking advantage of Hyper-V enlightenments. Is there a
top-level reason I haven't thought of for Hyper-V code to take a
dependency on KVM definitions? I would think we want Hyper-V
code to be generic, using Hyper-V data structure definitions. Then in
keeping with what's already been done, KVM code would use those
definitions where it needs to make calls to Hyper-V code.
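Something along these lines is what I have in mind (just a sketch to
illustrate the direction; the callback typedef and the names below are
made up, not part of this patch). The Hyper-V routine would take an
opaque fill callback, so nested.c never has to name a KVM type:

	/*
	 * Hypothetical generic interface, e.g. in asm/mshyperv.h.
	 * The caller supplies a callback that writes GPA ranges into
	 * the hypercall input page and returns the number of entries
	 * written, or a negative errno on failure.
	 */
	typedef int (*hyperv_fill_flush_list_func)(
			struct hv_guest_mapping_flush_list *flush,
			void *data);

	int hyperv_flush_guest_mapping_range(u64 as,
			hyperv_fill_flush_list_func fill_func, void *data);

Inside hyperv_flush_guest_mapping_range(), the only change from your
version would be replacing the flush_list/start_gfn handling with a
single "gpa_n = fill_func(flush, data);" call (bailing out if it
returns a negative value) before issuing the rep hypercall.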
> +{
> + struct kvm_mmu_page *sp;
> + struct hv_guest_mapping_flush_list **flush_pcpu;
> + struct hv_guest_mapping_flush_list *flush;
> + u64 status = 0;
> + unsigned long flags;
> + int ret = -ENOTSUPP;
> + int gpa_n = 0;
> +
> + if (!hv_hypercall_pg)
> + goto fault;
> +
> + local_irq_save(flags);
> +
> + flush_pcpu = (struct hv_guest_mapping_flush_list **)
> + this_cpu_ptr(hyperv_pcpu_input_arg);
> +
> + flush = *flush_pcpu;
> + if (unlikely(!flush)) {
> + local_irq_restore(flags);
> + goto fault;
> + }
> +
> + flush->address_space = as;
> + flush->flags = 0;
> +
> + if (!range->flush_list) {
> + gpa_n = fill_flush_list(flush->gpa_list, gpa_n,
> + range->start_gfn, range->end_gfn);
> + } else {
> + list_for_each_entry(sp, range->flush_list,
> + flush_link) {
> + u64 end_gfn = sp->gfn +
> + KVM_PAGES_PER_HPAGE(sp->role.level) - 1;
> + gpa_n = fill_flush_list(flush->gpa_list, gpa_n,
> + sp->gfn, end_gfn);
> + }
Per the previous comment, if this loop really needs to walk a KVM
data structure, look for a different way to organize things so that
the handling of KVM-specific data structures is in code that's part
of KVM, rather than in Hyper-V code.
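For instance (again just a sketch with made-up names), the walk above
could live in KVM as the fill callback; note that fill_flush_list()
would have to be exported or moved to a header so KVM can use it:

	/* Hypothetical KVM-side callback, e.g. in arch/x86/kvm code. */
	static int kvm_fill_hv_flush_list_func(
			struct hv_guest_mapping_flush_list *flush,
			void *data)
	{
		struct kvm_tlb_range *range = data;
		struct kvm_mmu_page *sp;
		int gpa_n = 0;

		if (!range->flush_list)
			return fill_flush_list(flush->gpa_list, gpa_n,
					       range->start_gfn,
					       range->end_gfn);

		list_for_each_entry(sp, range->flush_list, flush_link) {
			u64 end_gfn = sp->gfn +
				KVM_PAGES_PER_HPAGE(sp->role.level) - 1;

			gpa_n = fill_flush_list(flush->gpa_list, gpa_n,
						sp->gfn, end_gfn);
		}

		return gpa_n;
	}

KVM would then call hyperv_flush_guest_mapping_range(as,
kvm_fill_hv_flush_list_func, range), and the KVM-specific iteration
never leaves KVM code.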
Michael