Message-ID: <20200115152711.GG233443@xz-x1>
Date: Wed, 15 Jan 2020 10:27:11 -0500
From: Peter Xu <peterx@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Christophe de Dinechin <dinechin@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Yan Zhao <yan.y.zhao@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Kevin Tian <kevin.tian@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Lei Cao <lei.cao@...atus.com>
Subject: Re: [PATCH v3 12/21] KVM: X86: Implement ring-based dirty memory
tracking
On Wed, Jan 15, 2020 at 01:47:15AM -0500, Michael S. Tsirkin wrote:
> > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > new file mode 100644
> > index 000000000000..d6fe9e1b7617
> > --- /dev/null
> > +++ b/include/linux/kvm_dirty_ring.h
> > @@ -0,0 +1,55 @@
> > +#ifndef KVM_DIRTY_RING_H
> > +#define KVM_DIRTY_RING_H
> > +
> > +/**
> > + * kvm_dirty_ring: KVM internal dirty ring structure
> > + *
> > + * @dirty_index: free running counter that points to the next slot in
> > + * dirty_ring->dirty_gfns, where a new dirty page should go
> > + * @reset_index: free running counter that points to the next dirty page
> > + * in dirty_ring->dirty_gfns for which dirty trap needs to
> > + * be reenabled
> > + * @size: size of the compact list, dirty_ring->dirty_gfns
> > + * @soft_limit: when the number of dirty pages in the list reaches this
> > + * limit, vcpu that owns this ring should exit to userspace
> > + * to allow userspace to harvest all the dirty pages
> > + * @dirty_gfns: the array to keep the dirty gfns
> > + * @indices: the pointer to the @kvm_dirty_ring_indices structure
> > + * of this specific ring
> > + * @index: index of this dirty ring
> > + */
> > +struct kvm_dirty_ring {
> > + u32 dirty_index;
> > + u32 reset_index;
> > + u32 size;
> > + u32 soft_limit;
> > + struct kvm_dirty_gfn *dirty_gfns;
>
> Here would be a good place to document that accessing a
> shared page like this is only safe if the architecture is physically
> tagged.
Right.  More important, though, is where to document this for kvm_run,
and for any other shared mappings across KVM that I'm not yet aware of.
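(Side note, to make the counter semantics in the comment above concrete:
a minimal sketch of the "soft full" check implied by the @soft_limit
description.  The helper name and exact placement are mine, not
necessarily what this patch ends up using.)

static inline bool kvm_dirty_ring_soft_full(struct kvm_dirty_ring *ring)
{
        /* Entries published by KVM but not yet reset by userspace */
        u32 used = ring->dirty_index - ring->reset_index;

        /*
         * Once past the soft limit, the vcpu owning this ring should
         * exit to userspace so the dirty pages can be harvested.
         */
        return used >= ring->soft_limit;
}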
[...]
> > +/*
> > + * The following are the requirements for supporting dirty log ring
> > + * (by enabling KVM_DIRTY_LOG_PAGE_OFFSET).
> > + *
> > + * 1. Memory accesses by KVM should call kvm_vcpu_write_* instead
> > + * of kvm_write_* so that the global dirty ring is not filled up
> > + * too quickly.
> > + * 2. kvm_arch_mmu_enable_log_dirty_pt_masked should be defined for
> > + * enabling dirty logging.
> > + * 3. There should not be a separate step to synchronize hardware
> > + * dirty bitmap with KVM's.
>
>
> Are these requirements on the architecture? Then you want to move
> this out of UAPI and keep only things relevant to userspace there.
Good point.  I removed it, and instead of that...
>
> > + */
> > +
> > +struct kvm_dirty_gfn {
> > + __u32 pad;
> > + __u32 slot;
> > + __u64 offset;
> > +};
> > +
>
> Pls add comments about how kvm_dirty_gfn must be mmapped.
... I added this:
/*
* KVM dirty rings should be mapped at KVM_DIRTY_LOG_PAGE_OFFSET of
* per-vcpu mmaped regions as an array of struct kvm_dirty_gfn. The
* size of the gfn buffer is decided by the first argument when
* enabling KVM_CAP_DIRTY_LOG_RING.
*/
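(And purely for illustration, roughly how I'd expect userspace to set this
up -- untested sketch; the exact unit of the first argument and the later
harvest/reset step are my assumptions, not something this comment defines:)

/* Assumes <linux/kvm.h>, <sys/ioctl.h>, <sys/mman.h>; vm_fd, vcpu_fd,
 * ring_size and page_size come from the usual VM/vcpu setup. */
struct kvm_enable_cap cap = {
        .cap = KVM_CAP_DIRTY_LOG_RING,
        .args[0] = ring_size,      /* size of the per-vcpu gfn buffer */
};
ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

/* After creating the vcpu, map the ring from the vcpu fd at the
 * advertised page offset, alongside the existing kvm_run mapping: */
struct kvm_dirty_gfn *gfns =
        mmap(NULL, ring_size, PROT_READ | PROT_WRITE, MAP_SHARED,
             vcpu_fd, KVM_DIRTY_LOG_PAGE_OFFSET * page_size);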
Thanks,
--
Peter Xu