Message-ID: <20191211171549.GF48697@xz-x1>
Date: Wed, 11 Dec 2019 12:15:49 -0500
From: Peter Xu <peterx@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Christophe de Dinechin <dinechin@...hat.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Sean Christopherson <sean.j.christopherson@...el.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 00/15] KVM: Dirty ring interface
On Wed, Dec 11, 2019 at 03:16:30PM +0100, Paolo Bonzini wrote:
> On 11/12/19 14:41, Christophe de Dinechin wrote:
> >
> > Peter Xu writes:
> >
> >> Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring
> >>
> >> Overview
> >> ============
> >>
> >> This is continued work from Lei Cao <lei.cao@...atus.com> and Paolo
> >> on the KVM dirty ring interface. To keep it simple, I'll still start
> >> with version 1 as an RFC.
> >>
> >> The new dirty ring interface is another way to collect dirty pages for
> >> the virtual machine, but it differs from the existing dirty logging
> >> interface in a few ways, mainly:
> >>
> >> - Data format: The dirty data is kept in a ring format rather than a
> >>   bitmap format, so the size of the data to sync for dirty logging no
> >>   longer depends on the size of guest memory, but on the speed of
> >>   dirtying. Also, the dirty ring is per-vcpu (currently plus another
> >>   per-vm ring, so the total number of rings is N+1), while the dirty
> >>   bitmap is per-vm.
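
For illustration, each slot in such a ring could be a small fixed-size
record, something like the below (the field names are a sketch, not
necessarily the exact layout in the series):

    struct kvm_dirty_gfn {
            __u32 flags;
            __u32 slot;    /* which memslot the dirty page is in */
            __u64 offset;  /* page offset within that memslot */
    };

So harvesting N dirty pages costs on the order of N * 16 bytes, no
matter how large the guest is.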
> >
> > I like Sean's suggestion to fetch rings when dirtying. That could reduce
> > the number of dirty rings to examine.
>
> What do you mean by "fetch rings"?

I'd wildly guess Christophe means something like creating a ring pool,
where we look for a ring with a free slot to push the dirty gfn onto
when it arrives.
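
Something like the below, purely to illustrate that guess (none of
these types or helpers exist in the series):

    /* Push a dirty gfn onto any ring in the pool with a free slot. */
    static void pool_push_dirty_gfn(struct dirty_ring *pool, int nrings,
                                    __u32 slot, __u64 offset)
    {
            int i;

            for (i = 0; i < nrings; i++) {
                    if (!dirty_ring_full(&pool[i])) {
                            dirty_ring_push(&pool[i], slot, offset);
                            return;
                    }
            }
            /* All rings full: kick vcpus out so userspace can harvest. */
    }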

OK, should I count it as another vote for Sean's idea? :)

I agree, but imho a larger number of rings won't really be a problem
as long as they're still per-vcpu (after all, we have a vcpu number
limitation which is harder to break...). What attracts me most about
Sean's suggestion is that the interface is cleaner: we don't need to
expose the ring in two places any more. Meanwhile, I won't worry too
much about the perf issue here because, after all, it's dirty logging.
If perf were critical, then I'd certainly choose a per-vcpu ring
without a doubt even if it complicates the interface, because it would
certainly help with going lockless under some conditions.
>
> > Also, as is, this means that the same gfn may be present in multiple
> > rings, right?
>
> I think the actual marking of a page as dirty is protected by a spinlock
> but I will defer to Peter on this.

In most cases imho we should be under the mmu lock, iiuc, because the
general mmu page fault path takes it. However I think there are
special cases:

- when the spte is already populated and just write-protected, it's
  very possible we go via the quick page fault path
  (fast_page_fault()), which is lockless (no mmu lock taken).

- when there's no vcpu context, we'll use the per-vm ring. Though the
  per-vm ring is locked (the per-vcpu ring is not!), I don't see how
  that would prevent two callers from inserting two identical gfns
  sequentially.. It can also happen between the per-vm and per-vcpu
  rings.

So I think gfn duplication could happen, but it should be rare. Even
if it happens, it won't hurt much, because the 2nd/3rd/... occurrences
of the same gfn will simply be skipped by userspace when harvesting.
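
E.g., a userspace harvest loop only needs a local "seen" bitmap to
make duplicates harmless; a rough sketch (the ring indexing and the
helper names here are made up, not code from the series):

    /* Drain one ring; duplicated gfns collapse into one dirty page. */
    while (ring->head != READ_ONCE(ring->tail)) {
            struct kvm_dirty_gfn *e =
                    &ring->gfns[ring->head++ % ring->size];

            if (!test_and_set_bit(e->offset, seen_bitmap[e->slot]))
                    queue_page_for_migration(e->slot, e->offset);
            /* else: 2nd/3rd/... occurrence, simply skipped */
    }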
>
> Paolo
>
> >>
> >> - Data copy: The sync of dirty pages no longer needs a data copy;
> >>   instead the ring is shared between userspace and the kernel via
> >>   page sharing (mmap() on either the vm fd or vcpu fd)
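
For instance, userspace could map a vcpu's ring once at setup time,
roughly like the below (the page offset constant is a placeholder for
whatever the series actually defines):

    struct kvm_dirty_gfn *gfns;

    /* The ring lives in vcpu-fd pages at a fixed mmap offset. */
    gfns = mmap(NULL, ring_size_bytes, PROT_READ | PROT_WRITE,
                MAP_SHARED, vcpu_fd,
                DIRTY_RING_PAGE_OFFSET * getpagesize());
    if (gfns == MAP_FAILED)
            err(1, "mmap dirty ring");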
> >>
> >> - Interface: Instead of using the old KVM_GET_DIRTY_LOG and
> >>   KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
> >>   called KVM_RESET_DIRTY_RINGS when we want to reset the collected
> >>   dirty pages to protected mode again (works like
> >>   KVM_CLEAR_DIRTY_LOG, but ring-based)
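
So after userspace has harvested the rings, re-protecting the pages is
a single vm-wide call, roughly (a sketch of the usage, error handling
aside):

    /* Tell KVM the harvested entries can be write-protected again. */
    if (ioctl(vm_fd, KVM_RESET_DIRTY_RINGS) < 0)
            err(1, "KVM_RESET_DIRTY_RINGS");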
> >>
> >> And more.
> >>
> >> I would appreciate it if the reviewers could start with the patch
> >> "KVM: Implement ring-based dirty memory tracking", especially the
> >> document update part, for the big picture. That way I'll avoid
> >> copying most of it into the cover letter again.
> >>
> >> I marked this series as RFC because I'm at least uncertain about
> >> this change to vcpu_enter_guest():
> >>
> >>     if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
> >>             vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
> >>             /*
> >>              * If this is requested, it means that we've
> >>              * marked the dirty bit in the dirty ring BUT
> >>              * we've not written the date. Do it now.
> >
> > not written the "data" ?
Yep, though I'll drop these lines altogether so we'll be fine.. :)
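
For completeness, on the userspace side that exit would be handled
like any other exit reason inside the run-loop switch, roughly (the
helper name here is made up):

    case KVM_EXIT_DIRTY_RING_FULL:
            /* Drain every ring, then let KVM re-protect the pages. */
            harvest_all_dirty_rings();
            ioctl(vm_fd, KVM_RESET_DIRTY_RINGS);
            break;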
Thanks,
--
Peter Xu