Message-ID: <AADFC41AFE54684AB9EE6CBC0274A5D19D646148@SHSMSX104.ccr.corp.intel.com>
Date: Tue, 17 Dec 2019 05:17:29 +0000
From: "Tian, Kevin" <kevin.tian@...el.com>
To: 'Paolo Bonzini' <pbonzini@...hat.com>, Peter Xu <peterx@...hat.com>
CC: "Christopherson, Sean J" <sean.j.christopherson@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
"Alex Williamson" <alex.williamson@...hat.com>,
"Wang, Zhenyu Z" <zhenyu.z.wang@...el.com>,
"Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: RE: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory
tracking
> From: Tian, Kevin
> Sent: Tuesday, December 17, 2019 10:29 AM
>
> > From: Paolo Bonzini
> > Sent: Monday, December 16, 2019 6:08 PM
> >
> > [Alex and Kevin: there are doubts below regarding dirty page tracking
> > from VFIO and mdev devices, which perhaps you can help with]
> >
> > On 15/12/19 18:21, Peter Xu wrote:
> > > init_rmode_tss
> > >     vmx_set_tss_addr
> > >         kvm_vm_ioctl_set_tss_addr [*]
> > > init_rmode_identity_map
> > >     vmx_create_vcpu [*]
> >
> > These don't matter because their content is not visible to userspace
> > (the backing storage is mmap-ed by __x86_set_memory_region).
> >
> > > vmx_write_pml_buffer
> > >     kvm_arch_write_log_dirty [&]
> > > kvm_write_guest
> > >     kvm_hv_setup_tsc_page
> > >         kvm_guest_time_update [&]
> > >     nested_flush_cached_shadow_vmcs12 [&]
> > >     kvm_write_wall_clock [&]
> > >     kvm_pv_clock_pairing [&]
> > >     kvmgt_rw_gpa [?]
> >
> > This then expands (partially) to
> >
> > intel_gvt_hypervisor_write_gpa
> >     emulate_csb_update
> >         emulate_execlist_ctx_schedule_out
> >             complete_execlist_workload
> >                 complete_current_workload
> >                     workload_thread
> >         emulate_execlist_ctx_schedule_in
> >             prepare_execlist_workload
> >                 prepare_workload
> >                     dispatch_workload
> >                         workload_thread
> >
> > So KVMGT is always writing to GPAs instead of IOVAs, basically
> > bypassing a guest IOMMU. So here it would be better if kvmgt were
> > changed not to use kvm_write_guest (also because I'd probably have
> > nacked that if I had known :)).
>
> I agree.
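>
> For reference, the current path forwards the GPA to KVM without any
> translation; kvmgt_rw_gpa boils down to roughly the following (a
> simplified sketch from memory, error handling and mm juggling trimmed):
>
>     static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
>                             void *buf, unsigned long len, bool write)
>     {
>             struct kvmgt_guest_info *info = (struct kvmgt_guest_info *)handle;
>             struct kvm *kvm = info->kvm;
>             int idx, ret;
>
>             idx = srcu_read_lock(&kvm->srcu);
>             /* gpa is used as-is; a virtual IOMMU mapping is never consulted */
>             ret = write ? kvm_write_guest(kvm, gpa, buf, len) :
>                           kvm_read_guest(kvm, gpa, buf, len);
>             srcu_read_unlock(&kvm->srcu, idx);
>
>             return ret;
>     }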
>
> >
> > As far as I know, there is some work on live migration with both VFIO
> > and mdev, and that probably includes some dirty page tracking API.
> > kvmgt could switch to that API, or there could be VFIO APIs similar to
> > kvm_write_guest but taking IOVAs instead of GPAs. Advantage: this would
> > fix the GPA/IOVA confusion. Disadvantage: userspace would lose the
> > tracking of writes from mdev devices. Kevin, are these writes used in
> > any way? Do the calls to intel_gvt_hypervisor_write_gpa cover all
> > writes from kvmgt vGPUs, or can the hardware write to memory as well
> > (which would be my guess if I didn't know anything about kvmgt, which I
> > pretty much don't)?
>
> intel_gvt_hypervisor_write_gpa covers all writes that happen through
> software mediation.
>
> for hardware updates, the memory needs to be mapped in the IOMMU through
> vfio_pin_pages before any DMA happens. The ongoing dirty tracking effort
> in VFIO will treat every page pinned through that API as dirtied.
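>
> To be specific, pinning for hardware DMA goes through this interface (a
> usage sketch under today's mdev API; the helper name here is made up,
> and the real code batches pages and also maps them for DMA):
>
>     static int sw_touch_page_via_pin(struct mdev_device *mdev,
>                                      unsigned long gpa)
>     {
>             unsigned long gfn = gpa >> PAGE_SHIFT;
>             unsigned long hpfn;
>             int ret;
>
>             /* pin one guest page and get the host pfn back for device DMA */
>             ret = vfio_pin_pages(mdev_dev(mdev), &gfn, 1,
>                                  IOMMU_READ | IOMMU_WRITE, &hpfn);
>             if (ret != 1)
>                     return ret < 0 ? ret : -EFAULT;
>
>             /* the page now stays pinned until vfio_unpin_pages(), which
>              * is what makes this too heavy for one-off CPU writes */
>             return 0;
>     }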
>
> However, VFIO currently doesn't implement any vfio_read/write_guest
> interface yet, and it doesn't make sense to use vfio_pin_pages for
> software-dirtied pages, as pinning is unnecessary and heavy, involving
> iommu invalidation.
One correction: vfio_pin_pages doesn't involve iommu invalidation. I just
meant that pinning the page is unnecessary. We simply need a kvm-like
interface that accesses guest memory through the hva.
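
To illustrate, "kvm-like" here means the gfn->hva lookup plus plain copy
that kvm_write_guest already does per page. Roughly (a simplified sketch;
the wrapper name is made up):

    /* roughly the per-page core of kvm_write_guest (simplified) */
    static int write_guest_page_via_hva(struct kvm *kvm, gpa_t gpa,
                                        const void *data, int len)
    {
            gfn_t gfn = gpa >> PAGE_SHIFT;
            unsigned long hva = gfn_to_hva(kvm, gfn);

            if (kvm_is_error_hva(hva))
                    return -EFAULT;
            /* plain copy through the hva -- no pin, no iommu involvement */
            if (copy_to_user((void __user *)hva + offset_in_page(gpa),
                             data, len))
                    return -EFAULT;
            mark_page_dirty(kvm, gfn);
            return 0;
    }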
>
> Alex, if you are OK with it, we'll work on such an interface and move
> kvmgt over to it. After it's accepted, we can also mark pages dirty
> through this new interface in Kirti's dirty page tracking series.
>
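
To make the proposal concrete, the interface we have in mind would be
shaped roughly like this (name and signature are tentative, of course):

    /* tentative: access guest memory by IOVA without pinning, by
     * walking the vfio_dma tree to find the vaddr and copying */
    int vfio_dma_rw(struct vfio_group *group, dma_addr_t iova,
                    void *data, size_t len, bool write);

Internally it would locate the vfio_dma covering the IOVA, compute the
corresponding hva, and do a plain copy_to_user/copy_from_user, marking the
page dirty when write is true so Kirti's series can report it.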