Message-ID: <Y6vXTcxDNovrmeVB@yzhao56-desk.sh.intel.com>
Date: Wed, 28 Dec 2022 13:42:37 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>,
Zhenyu Wang <zhenyuw@...ux.intel.com>,
Zhi Wang <zhi.a.wang@...el.com>, <kvm@...r.kernel.org>,
<intel-gvt-dev@...ts.freedesktop.org>,
<intel-gfx@...ts.freedesktop.org>, <linux-kernel@...r.kernel.org>,
Ben Gardon <bgardon@...gle.com>
Subject: Re: [PATCH 03/27] drm/i915/gvt: Incorporate KVM memslot info into
check for 2MiB GTT entry
On Fri, Dec 23, 2022 at 12:57:15AM +0000, Sean Christopherson wrote:
> Honor KVM's max allowed page size when determining whether or not a 2MiB
> GTT shadow page can be created for the guest. Querying KVM's max allowed
> size is somewhat odd as there's no strict requirement that KVM's memslots
> and VFIO's mappings are configured with the same gfn=>hva mapping, but
Without vIOMMU, VFIO's mappings are configured the same as KVM's
memslots, i.e. with the same GFN => HVA mapping.
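For example, a typical VMM registers the same HVA range with both KVM
and VFIO, with IOVA == GPA (a minimal userspace sketch; the fd/gpa/hva
variables are placeholders):

	struct kvm_userspace_memory_region kvm_region = {
		.slot = 0,
		.guest_phys_addr = gpa,		/* gfn << PAGE_SHIFT */
		.memory_size = size,
		.userspace_addr = (__u64)hva,	/* HVA backing the GFNs */
	};
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &kvm_region);

	struct vfio_iommu_type1_dma_map dma_map = {
		.argsz = sizeof(dma_map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (__u64)hva,	/* same HVA as the memslot ... */
		.iova = gpa,		/* ... mapped at IOVA == GPA */
		.size = size,
	};
	ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);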
> the check will be accurate if userspace wants to have a functional guest,
> and at the very least checking KVM's memslots guarantees that the entire
> 2MiB range has been exposed to the guest.
I think just checking that the entire 2MiB GFN range is within a KVM
memslot is enough.
If for some reason KVM maps a 2MiB range in 4KiB sizes, KVMGT can still
map it in the IOMMU at 2MiB size, as long as the PFNs are contiguous and
the whole range is exposed to the guest (see the sketch below).
Actually, normal device passthrough with VFIO-PCI also maps GFNs in a
similar way, i.e. it maps a guest-visible range with as large a page
size as possible, as long as the PFNs are contiguous.
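Something like this (an untested sketch; is_2MB_dma_possible() is a
made-up name, and the per-GFN gfn_to_pfn() loop is for illustration
only, a real implementation would want something cheaper):

	/*
	 * Return true if the 2MiB GFN range is fully exposed to the guest
	 * and backed by physically contiguous pages, i.e. can be mapped
	 * with a 2MiB IOMMU page regardless of KVM's own mapping size.
	 */
	static bool is_2MB_dma_possible(struct kvm *kvm, gfn_t gfn)
	{
		kvm_pfn_t base_pfn, pfn;
		unsigned long i;

		/* The GFN itself must be 2MiB aligned. */
		if (gfn & (PTRS_PER_PTE - 1))
			return false;

		base_pfn = gfn_to_pfn(kvm, gfn);
		if (is_error_noslot_pfn(base_pfn))
			return false;
		kvm_release_pfn_clean(base_pfn);

		for (i = 1; i < PTRS_PER_PTE; i++) {
			pfn = gfn_to_pfn(kvm, gfn + i);
			if (is_error_noslot_pfn(pfn))
				return false;
			kvm_release_pfn_clean(pfn);
			/* Bail on a hole or a discontiguous PFN. */
			if (pfn != base_pfn + i)
				return false;
		}
		return true;
	}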
>
> Note, KVM may also restrict the mapping size for reasons that aren't
> relevant to KVMGT, e.g. for KVM's iTLB multi-hit workaround or if the gfn
Will iTLB multi-hit affect DMA?
AFAIK, IOMMU mappings currently never set the exec bit (and I'm told this
bit is under discussion to be removed).
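For reference, this is roughly how vfio_iommu_type1 builds the prot
bits for iommu_map(), read/write only, never exec:

	int prot = 0;

	if (map->flags & VFIO_DMA_MAP_FLAG_READ)
		prot |= IOMMU_READ;
	if (map->flags & VFIO_DMA_MAP_FLAG_WRITE)
		prot |= IOMMU_WRITE;

	ret = iommu_map(domain->domain, iova, phys, size, prot);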
> is write-tracked (KVM's write-tracking only handles writes from vCPUs).
> However, such scenarios are unlikely to occur with a well-behaved guest,
> and at worst will result in sub-optimal performance.
> Fixes: b901b252b6cf ("drm/i915/gvt: Add 2M huge gtt support")
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> arch/x86/include/asm/kvm_page_track.h | 2 ++
> arch/x86/kvm/mmu/page_track.c | 18 ++++++++++++++++++
> drivers/gpu/drm/i915/gvt/gtt.c | 10 +++++++++-
> 3 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
> index eb186bc57f6a..3f72c7a172fc 100644
> --- a/arch/x86/include/asm/kvm_page_track.h
> +++ b/arch/x86/include/asm/kvm_page_track.h
> @@ -51,6 +51,8 @@ void kvm_page_track_cleanup(struct kvm *kvm);
>
> bool kvm_page_track_write_tracking_enabled(struct kvm *kvm);
> int kvm_page_track_write_tracking_alloc(struct kvm_memory_slot *slot);
> +enum pg_level kvm_page_track_max_mapping_level(struct kvm *kvm, gfn_t gfn,
> + enum pg_level max_level);
>
> void kvm_page_track_free_memslot(struct kvm_memory_slot *slot);
> int kvm_page_track_create_memslot(struct kvm *kvm,
> diff --git a/arch/x86/kvm/mmu/page_track.c b/arch/x86/kvm/mmu/page_track.c
> index 2e09d1b6249f..69ea16c31859 100644
> --- a/arch/x86/kvm/mmu/page_track.c
> +++ b/arch/x86/kvm/mmu/page_track.c
> @@ -300,3 +300,21 @@ void kvm_page_track_flush_slot(struct kvm *kvm, struct kvm_memory_slot *slot)
> n->track_flush_slot(kvm, slot, n);
> srcu_read_unlock(&head->track_srcu, idx);
> }
> +
> +enum pg_level kvm_page_track_max_mapping_level(struct kvm *kvm, gfn_t gfn,
> + enum pg_level max_level)
> +{
> + struct kvm_memory_slot *slot;
> + int idx;
> +
> + idx = srcu_read_lock(&kvm->srcu);
> + slot = gfn_to_memslot(kvm, gfn);
> + if (!slot || slot->flags & KVM_MEMSLOT_INVALID)
> + max_level = PG_LEVEL_4K;
> + else
> + max_level = kvm_mmu_max_slot_mapping_level(slot, gfn, max_level);
> + srcu_read_unlock(&kvm->srcu, idx);
> +
> + return max_level;
> +}
> +EXPORT_SYMBOL_GPL(kvm_page_track_max_mapping_level);
> diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
> index d0fca53a3563..6736d7bd94ea 100644
> --- a/drivers/gpu/drm/i915/gvt/gtt.c
> +++ b/drivers/gpu/drm/i915/gvt/gtt.c
> @@ -1178,14 +1178,22 @@ static int is_2MB_gtt_possible(struct intel_vgpu *vgpu,
> struct intel_gvt_gtt_entry *entry)
> {
> const struct intel_gvt_gtt_pte_ops *ops = vgpu->gvt->gtt.pte_ops;
> + unsigned long gfn = ops->get_pfn(entry);
> kvm_pfn_t pfn;
> + int max_level;
>
> if (!HAS_PAGE_SIZES(vgpu->gvt->gt->i915, I915_GTT_PAGE_SIZE_2M))
> return 0;
>
> if (!vgpu->attached)
> return -EINVAL;
> - pfn = gfn_to_pfn(vgpu->vfio_device.kvm, ops->get_pfn(entry));
> +
> + max_level = kvm_page_track_max_mapping_level(vgpu->vfio_device.kvm,
> + gfn, PG_LEVEL_2M);
> + if (max_level < PG_LEVEL_2M)
> + return 0;
> +
> + pfn = gfn_to_pfn(vgpu->vfio_device.kvm, gfn);
> if (is_error_noslot_pfn(pfn))
> return -EINVAL;
>
> --
> 2.39.0.314.g84b9a713c41-goog
>