Message-ID: <AADFC41AFE54684AB9EE6CBC0274A5D19D645E55@SHSMSX104.ccr.corp.intel.com>
Date:   Tue, 17 Dec 2019 02:28:33 +0000
From:   "Tian, Kevin" <kevin.tian@...el.com>
To:     Paolo Bonzini <pbonzini@...hat.com>, Peter Xu <peterx@...hat.com>
CC:     "Christopherson, Sean J" <sean.j.christopherson@...el.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        "Alex Williamson" <alex.williamson@...hat.com>,
        "Wang, Zhenyu Z" <zhenyu.z.wang@...el.com>,
        "Zhao, Yan Y" <yan.y.zhao@...el.com>
Subject: RE: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory
 tracking

> From: Paolo Bonzini
> Sent: Monday, December 16, 2019 6:08 PM
> 
> [Alex and Kevin: there are doubts below regarding dirty page tracking
> from VFIO and mdev devices, which perhaps you can help with]
> 
> On 15/12/19 18:21, Peter Xu wrote:
> >                 init_rmode_tss
> >                     vmx_set_tss_addr
> >                         kvm_vm_ioctl_set_tss_addr [*]
> >                 init_rmode_identity_map
> >                     vmx_create_vcpu [*]
> 
> These don't matter because their content is not visible to userspace
> (the backing storage is mmap-ed by __x86_set_memory_region).
> 
> >                 vmx_write_pml_buffer
> >                     kvm_arch_write_log_dirty [&]
> >                 kvm_write_guest
> >                     kvm_hv_setup_tsc_page
> >                         kvm_guest_time_update [&]
> >                     nested_flush_cached_shadow_vmcs12 [&]
> >                     kvm_write_wall_clock [&]
> >                     kvm_pv_clock_pairing [&]
> >                     kvmgt_rw_gpa [?]
> 
> This then expands (partially) to
> 
> intel_gvt_hypervisor_write_gpa
>     emulate_csb_update
>         emulate_execlist_ctx_schedule_out
>             complete_execlist_workload
>                 complete_current_workload
>                      workload_thread
>         emulate_execlist_ctx_schedule_in
>             prepare_execlist_workload
>                 prepare_workload
>                     dispatch_workload
>                         workload_thread
> 
> So KVMGT is always writing to GPAs instead of IOVAs, basically
> bypassing a guest IOMMU.  Here it would be better if kvmgt were
> changed not to use kvm_write_guest (also because I'd probably have
> nacked that if I had known :)).

I agree. 

> 
> As far as I know, there is some work on live migration with both VFIO
> and mdev, and that probably includes some dirty page tracking API.
> kvmgt could switch to that API, or there could be VFIO APIs similar to
> kvm_write_guest but taking IOVAs instead of GPAs.  Advantage: this would
> fix the GPA/IOVA confusion.  Disadvantage: userspace would lose the
> tracking of writes from mdev devices.  Kevin, are these writes used in
> any way?  Do the calls to intel_gvt_hypervisor_write_gpa cover all
> writes from kvmgt vGPUs, or can the hardware write to memory as well
> (which would be my guess if I didn't know anything about kvmgt, which I
> pretty much don't)?

intel_gvt_hypervisor_write_gpa covers all writes due to software mediation.

For hardware updates, the page needs to be mapped in the IOMMU through
vfio_pin_pages before any DMA happens.  The ongoing dirty-tracking
effort in VFIO will treat every page pinned through that API as dirtied.
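
For reference, here is a simplified sketch of how kvmgt pins a page
before programming DMA (error handling trimmed; based on the current
vfio_pin_pages() signature, with gpa and mdev assumed in scope):

	unsigned long gfn = gpa >> PAGE_SHIFT;
	unsigned long pfn;
	int ret;

	/* Map one guest page for device DMA.  IOMMU_READ | IOMMU_WRITE
	 * means the device may write to it, so a conservative dirty
	 * tracker has to report every such pinned page as dirty. */
	ret = vfio_pin_pages(mdev_dev(mdev), &gfn, 1,
			     IOMMU_READ | IOMMU_WRITE, &pfn);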

However, VFIO doesn't implement any vfio_read/write_guest interface
yet, and it doesn't make sense to use vfio_pin_pages for
software-dirtied pages: pinning is unnecessary there and is
heavyweight, involving IOMMU invalidation.

Alex, if you are OK with it, we'll work on such an interface and move
kvmgt to use it.  Once it's accepted, we can also mark pages dirty
through this new interface in Kirti's dirty page tracking series.
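
To make that concrete, such an interface could look roughly like the
following (a purely hypothetical sketch; the name, signature and
semantics are only illustrative):

	/* Hypothetical: copy to/from guest memory by IOVA without
	 * pinning, marking the touched pages dirty for any active
	 * tracking. */
	int vfio_dma_rw(struct vfio_group *group, dma_addr_t iova,
			void *data, size_t len, bool write);

kvmgt would then replace kvm_write_guest(kvm, gpa, buf, len) with
something like vfio_dma_rw(group, iova, buf, len, true), so the access
goes through the IOVA space that userspace actually set up.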

Thanks
Kevin

> 
> > We should only need to look at the leaves of the traces because
> > they're where the dirty request starts.  I'm marking all the leaves
> > with the criteria below so that it's easier to focus:
> >
> > Cases with [*]: should not matter much
> >            [&]: actually with a per-vcpu context in the upper layer
> >            [?]: uncertain...
> >
> > I'm a bit amazed after I took these notes, since I found that besides
> > those that could probably be ignored (marked as [*]), most of the
> > remaining per-vm dirty requests actually come with a vcpu context.
> >
> > Although with kvm_get_running_vcpu() all the [&] cases should now be
> > fine without changing anything, I tend to add another patch in the
> > next post to convert all the [&] cases explicitly to pass a vcpu
> > pointer instead of a kvm pointer, to be clear, if no one disagrees;
> > then we can verify that against kvm_get_running_vcpu().
> 
> This is a good idea but remember not to convert those to
> kvm_vcpu_write_guest, because you _don't_ want these writes to touch
> SMRAM (most of the addresses are OS-controlled rather than
> firmware-controlled).
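
In other words, the intended conversion keeps the VM-wide address
space while using the vcpu only for context (a minimal sketch; the
call site is illustrative):

	/* OK: the vcpu identifies the dirty ring, but the write still
	 * resolves the gpa in the VM-wide (non-SMM) address space. */
	kvm_write_guest(vcpu->kvm, gpa, data, len);

	/* Not OK here: this resolves the gpa in the vcpu's current
	 * address space, which is SMRAM while the vcpu is in SMM. */
	kvm_vcpu_write_guest(vcpu, gpa, data, len);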
> 
> > init_rmode_tss or init_rmode_identity_map.  But I've marked them as
> > unimportant because they should only happen once at boot.
> 
> We need to check if userspace can add an arbitrary number of entries by
> calling KVM_SET_TSS_ADDR repeatedly.  I think it can; we'd have to
> forbid multiple calls to KVM_SET_TSS_ADDR, which is not a problem in
> general.
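
For reference, KVM_SET_TSS_ADDR is a VM-level ioctl that takes the
address directly, and nothing currently stops userspace from issuing
it repeatedly (minimal sketch; vm_fd is an open VM file descriptor):

	/* Reserve a three-page region of guest physical address space
	 * for the real-mode TSS; callable more than once today. */
	ioctl(vm_fd, KVM_SET_TSS_ADDR, 0xfffbd000UL);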
> 
> >>> If we're still with the rule in userspace that we first do RESET then
> >>> collect and send the pages (just like what we've discussed before),
> >>> then IMHO it's fine to have vcpu2 skip the slow path?  Because
> >>> RESET happens at "treat page as not dirty", then if we are sure that
> >>> we only collect and send pages after that point, then the latest
> >>> "write to page" data from vcpu2 won't be lost even if vcpu2 is not
> >>> blocked by vcpu1's ring full?
> >>
> >> Good point, the race would become
> >>
> >>  	vCPU 1			vCPU 2		host
> >>  	---------------------------------------------------------------
> >>  	mark page dirty
> >>  				write to page
> >> 						reset rings
> >> 						  wait for mmu lock
> >>  	add page to ring
> >> 	release mmu lock
> >> 						  ...do reset...
> >> 						  release mmu lock
> >> 						page is now dirty
> >
> > Hmm, the page will be dirty after the reset, but is that an issue?
> >
> > Or, could you help me to identify what I've missed?
> 
> Nothing: the race is always solved in such a way that there's no issue.
> 
> >> I don't think that's possible: most writes won't come from a page fault
> >> path and cannot retry.
> >
> > Yep, maybe I should say it the other way round: we only wait if
> > kvm_get_running_vcpu() == NULL.  Then, somewhere near
> > vcpu_enter_guest(), we add a check to wait if the per-vcpu ring is
> > full.  Would that work?
> 
> Yes, that should work, especially if we know that kvmgt is the only case
> that can wait.  And since:
> 
> 1) kvmgt doesn't really need dirty page tracking (because VFIO devices
> generally don't track dirty pages, and because kvmgt shouldn't be using
> kvm_write_guest anyway)
> 
> 2) the real mode TSS and identity map shouldn't even be tracked, as they
> are invisible to userspace
> 
> it seems to me that kvm_get_running_vcpu() lets us get rid of the per-VM
> ring altogether.
> 
> Paolo
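
As a concrete illustration of the check Peter proposes near
vcpu_enter_guest() (a rough sketch with hypothetical helper names, not
the actual patch):

	/* Before entering the guest, exit to userspace if this vcpu's
	 * dirty ring has no room left, so userspace can harvest and
	 * reset the ring.  Non-vcpu writers (i.e. kvmgt today) would
	 * wait instead, since they cannot retry a fault path. */
	if (kvm_dirty_ring_full(&vcpu->dirty_ring)) {
		vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
		return 0;
	}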
