Message-ID: <9f7582b1-cfba-d096-2216-c5b06edc6ca9@redhat.com>
Date: Wed, 8 Jan 2020 18:41:06 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Peter Xu <peterx@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Cc: "Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Christophe de Dinechin <dinechin@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
"Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Lei Cao <lei.cao@...atus.com>
Subject: Re: [PATCH RESEND v2 08/17] KVM: X86: Implement ring-based dirty
memory tracking
On 08/01/20 16:52, Peter Xu wrote:
> here, which is still a bit tricky as a way to work around the kvmgt issue.
>
> Now we still have the waitqueue but it'll only be used for
> no-vcpu-context dirtyings, so:
>
> - For no-vcpu-context: the thread can wait in the waitqueue if it makes
> vcpu0's ring soft-full (note, previously it was hard-full; here we make
> it easier to wait, so we make sure the ring never actually fills up)
>
> - For with-vcpu-context: we should never wait, guaranteed by the fact
> that KVM_RUN will now return if that vcpu's ring is soft-full, and the
> waitqueue above will make sure that even vcpu0's ring won't be filled
> up by kvmgt
>
> Again, this is still a workaround for kvmgt, and I think it should not
> be needed after the refactoring. It's just a way to avoid depending on
> that work, so this should work even with the current kvmgt.
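
(For readers following along, here is a minimal, purely illustrative
sketch of the flow described above. The types and helpers in it, such as
struct dirty_ring, ring_soft_full() and harvest(), are simplified
stand-ins for the example only; they are not the actual KVM structures
or the code from the patch series.)

    /*
     * Illustrative-only sketch of the "soft full" rule: a vcpu-context
     * dirtying never sleeps and instead forces a KVM_RUN exit, while a
     * no-vcpu-context dirtying (e.g. kvmgt) sleeps on vcpu0's ring
     * until userspace harvests entries.
     */
    #include <pthread.h>
    #include <stdbool.h>

    #define RING_SIZE  4096
    #define SOFT_FULL  (RING_SIZE - 64)   /* slack before hard full */

    struct dirty_ring {
            unsigned long   nr_entries;   /* entries not yet harvested */
            pthread_mutex_t lock;
            pthread_cond_t  waitq;        /* only no-vcpu-context waits */
    };

    static bool ring_soft_full(struct dirty_ring *ring)
    {
            return ring->nr_entries >= SOFT_FULL;
    }

    /* With vcpu context: never sleep; tell the caller to exit KVM_RUN. */
    static bool vcpu_push_dirty(struct dirty_ring *ring)
    {
            bool need_exit;

            pthread_mutex_lock(&ring->lock);
            ring->nr_entries++;
            need_exit = ring_soft_full(ring);
            pthread_mutex_unlock(&ring->lock);
            return need_exit;             /* caller exits to userspace */
    }

    /* No vcpu context (e.g. kvmgt): wait until vcpu0's ring has room. */
    static void novcpu_push_dirty(struct dirty_ring *vcpu0_ring)
    {
            pthread_mutex_lock(&vcpu0_ring->lock);
            while (ring_soft_full(vcpu0_ring))
                    pthread_cond_wait(&vcpu0_ring->waitq, &vcpu0_ring->lock);
            vcpu0_ring->nr_entries++;
            pthread_mutex_unlock(&vcpu0_ring->lock);
    }

    /* Userspace harvest path: free entries and wake any waiter. */
    static void harvest(struct dirty_ring *ring, unsigned long n)
    {
            pthread_mutex_lock(&ring->lock);
            ring->nr_entries = n <= ring->nr_entries ? ring->nr_entries - n : 0;
            if (!ring_soft_full(ring))
                    pthread_cond_broadcast(&ring->waitq);
            pthread_mutex_unlock(&ring->lock);
    }
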
The kvmgt patches have been posted, so you could just include them in
your next series and clean everything up. You can get them at
https://patchwork.kernel.org/cover/11316219/.
Paolo