Message-ID: <20200108190639.GE7096@xz-x1>
Date: Wed, 8 Jan 2020 14:06:39 -0500
From: Peter Xu <peterx@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Christophe de Dinechin <dinechin@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
"Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Lei Cao <lei.cao@...atus.com>
Subject: Re: [PATCH RESEND v2 08/17] KVM: X86: Implement ring-based dirty
memory tracking
On Wed, Jan 08, 2020 at 06:41:06PM +0100, Paolo Bonzini wrote:
> On 08/01/20 16:52, Peter Xu wrote:
> > here, which is still a bit tricky as a way to work around the kvmgt issue.
> >
> > Now we still have the waitqueue but it'll only be used for
> > no-vcpu-context dirtyings, so:
> >
> > - For no-vcpu-context: the thread could wait in the waitqueue if it
> >   makes vcpu0's ring soft-full (note, previously it was hard-full, so
> >   here we make it easier to wait, which makes sure the ring never
> >   reaches hard-full)
> >
> > - For with-vcpu-context: we should never wait, guaranteed by the fact
> >   that KVM_RUN will now return if that vcpu's ring is soft-full, and
> >   the above waitqueue will make sure even vcpu0's ring won't be
> >   filled up by kvmgt
> >
> > Again, this is still a workaround for kvmgt, and I think it should not
> > be needed after the refactoring.  It's just a way to avoid depending
> > on that work, so this should work even with the current kvmgt.
>
> The kvmgt patches have been posted; you could just include them in your
> next series and clean everything up.  You can get them at
> https://patchwork.kernel.org/cover/11316219/.
Good to know!

Maybe I'll simply drop all the redundant parts in the dirty ring series,
assuming that series is there?  These patchsets should not overlap with
each other, so it looks more like an ordering constraint for merging.
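
To make the waitqueue idea above a bit more concrete, here is a rough
sketch (the helper names kvm_dirty_ring_soft_full(),
kvm_dirty_ring_push(), the kvm->dirty_ring_waitq field and
KVM_EXIT_DIRTY_RING_FULL are only illustrative here, not necessarily
what the final code will look like):

  /*
   * Sketch only: route no-vcpu-context dirtyings (e.g. kvmgt) to
   * vcpu0's ring, and wait while that ring is soft-full so that it
   * can never reach hard-full.
   */
  static void mark_page_dirty_without_vcpu(struct kvm *kvm, gfn_t gfn)
  {
          struct kvm_vcpu *vcpu0 = kvm_get_vcpu(kvm, 0);

          wait_event(kvm->dirty_ring_waitq,
                     !kvm_dirty_ring_soft_full(&vcpu0->dirty_ring));
          kvm_dirty_ring_push(&vcpu0->dirty_ring, gfn);
  }

  /*
   * Sketch only: with a vcpu context we never wait; KVM_RUN checks
   * the soft-full condition and returns to userspace instead, so the
   * ring can be harvested before re-entering the guest.
   */
  static int vcpu_check_dirty_ring(struct kvm_vcpu *vcpu)
  {
          if (kvm_dirty_ring_soft_full(&vcpu->dirty_ring)) {
                  vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
                  return 0;       /* exit to userspace */
          }
          return 1;               /* continue into the guest */
  }

(In the real series this would of course be wired into the
mark_page_dirty path and the KVM_RUN loop; the sketch is just to show
that kvmgt can only ever block on vcpu0's ring and a vcpu thread never
blocks.)
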
Thanks,
--
Peter Xu