Message-ID: <1597a424-9f62-824b-5308-c9622127d658@redhat.com>
Date: Wed, 11 Dec 2019 10:05:28 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>, Peter Xu <peterx@...hat.com>
Cc: Jason Wang <jasowang@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org,
Sean Christopherson <sean.j.christopherson@...el.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking

On 10/12/19 22:53, Michael S. Tsirkin wrote:
> On Tue, Dec 10, 2019 at 11:02:11AM -0500, Peter Xu wrote:
>> On Tue, Dec 10, 2019 at 02:31:54PM +0100, Paolo Bonzini wrote:
>>> On 10/12/19 14:25, Michael S. Tsirkin wrote:
>>>>> There is no new infrastructure to track the dirty pages---it's just a
>>>>> different way to pass them to userspace.
>>>> Did you guys consider using one of the virtio ring formats?
>>>> Maybe reusing vhost code?
>>>
>>> There are no used/available entries here, it's unidirectional
>>> (kernel->user).
>>
>> Agreed.  Vring could be overkill IMHO (the whole dirty_ring.c is
>> only 100+ LOC).
>
> I guess you don't do polling/event suppression and the other tricks
> that virtio came up with for speed, then?

There are no interrupts either, so no need for event suppression.  You
have vmexits when the ring gets full (and that needs to be synchronous),
but apart from that the migration thread will poll the rings once when
it needs to send more pages.
Paolo
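
The protocol sketched in this thread, a unidirectional kernel->user ring
with no used/available split, a synchronous "full" condition instead of
interrupts, and a polling consumer, can be modeled in a few lines.  The
sketch below is illustrative only: the names (DirtyRing, RingFull, push,
harvest) are hypothetical and do not reflect KVM's actual uAPI or the
layout proposed in the patch series.

```python
# Minimal model of the unidirectional dirty-ring protocol discussed above.
# One producer (the "kernel") appends dirtied guest frame numbers; one
# consumer (the "migration thread") polls and drains them in a batch.
# All names here are illustrative, not KVM's real interface.

class RingFull(Exception):
    """Models the synchronous vmexit taken when the ring fills up."""

class DirtyRing:
    def __init__(self, size):
        # Power-of-two size keeps the modular indexing cheap.
        assert size > 0 and (size & (size - 1)) == 0
        self.size = size
        self.slots = [None] * size
        self.head = 0   # consumer (userspace) index
        self.tail = 0   # producer (kernel) index

    def push(self, gfn):
        """Producer side: record a dirtied guest frame number."""
        if self.tail - self.head == self.size:
            # No interrupt/event machinery: overflow is a hard,
            # synchronous stop until the consumer drains the ring.
            raise RingFull(gfn)
        self.slots[self.tail % self.size] = gfn
        self.tail += 1

    def harvest(self):
        """Consumer side: poll once and collect every pending entry."""
        out = []
        while self.head < self.tail:
            out.append(self.slots[self.head % self.size])
            self.head += 1
        return out

ring = DirtyRing(4)
for gfn in (0x10, 0x11, 0x12):
    ring.push(gfn)
print(ring.harvest())   # the poll drains all three entries at once
```

Because the ring is strictly kernel->user and the consumer simply polls,
there is nothing resembling virtio's avail/used rings or event-index
suppression to model, which matches the point made in the reply above.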