Message-ID: <20200109192116.GE36997@xz-x1>
Date: Thu, 9 Jan 2020 14:21:16 -0500
From: Peter Xu <peterx@...hat.com>
To: Alex Williamson <alex.williamson@...hat.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org,
Christophe de Dinechin <dinechin@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
Yan Zhao <yan.y.zhao@...el.com>,
Jason Wang <jasowang@...hat.com>,
Kevin Tian <kevin.tian@...el.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Lei Cao <lei.cao@...atus.com>
Subject: Re: [PATCH v3 12/21] KVM: X86: Implement ring-based dirty memory
tracking
On Thu, Jan 09, 2020 at 09:56:10AM -0700, Alex Williamson wrote:
[...]
> > > +Dirty GFNs (Guest Frame Numbers) are stored in the dirty_gfns array.
> > > +Each dirty entry is defined as:
> > > +
> > > +struct kvm_dirty_gfn {
> > > + __u32 pad;
> >
> > How about sticking a length here?
> > This way huge pages can be dirtied in one go.
>
> Not just huge pages, but any contiguous range of dirty pages could be
> reported far more concisely. Thanks,
I replied in the other thread on why I thought KVM might not suit
that (while vfio may).
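(For reference, my understanding of the suggestion is that each entry
would carry a run length, roughly as below.  This is a hypothetical
sketch only, reusing the pad field, and assuming the rest of the
entry keeps the slot/offset fields from the patch:

struct kvm_dirty_gfn {
	__u32 nr_pages;	/* length of the dirty run, in pages */
	__u32 slot;	/* the memslot ID */
	__u64 offset;	/* offset of the first dirty page in the slot */
};

Then a single entry could cover a whole huge page, or any contiguous
run of dirty pages.)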
Actually we could even do that for KVM, as long as we keep a per-vcpu
last-dirtied GFN range cache (so we don't publish a dirty GFN right
after it's dirtied).  We'd grow that cached range as long as each new
dirty page is contiguous with it on either side; once the current
dirty GFN is not contiguous with the cached range, we'd publish the
cached range and let the new GFN start a fresh last-dirtied range.
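A rough sketch of the idea (the struct and the
kvm_dirty_ring_push_range() helper below are made-up names for
illustration, not from the patch):

struct dirty_range_cache {
	__u64 start;	/* first GFN of the cached dirty run */
	__u64 npages;	/* length of the run; 0 if cache is empty */
};

static void mark_page_dirty_coalesced(struct dirty_range_cache *c,
				      __u64 gfn)
{
	if (c->npages && gfn == c->start + c->npages) {
		/* Contiguous on the right: grow the cached run */
		c->npages++;
	} else if (c->npages && c->start && gfn == c->start - 1) {
		/* Contiguous on the left: extend the run backwards */
		c->start--;
		c->npages++;
	} else {
		/* Not contiguous: publish the old run, cache the new GFN */
		if (c->npages)
			kvm_dirty_ring_push_range(c->start, c->npages);
		c->start = gfn;
		c->npages = 1;
	}
}

The vcpu would also need to flush the cache when the ring is
collected, otherwise the last run could be published arbitrarily
late.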
However, I am not sure how much we'd gain from it.  Maybe we can do
that when we have a real use case for it; for now I'm not sure it
would be worth the effort.
Thanks,
--
Peter Xu