Message-ID: <878cbc47-316c-d508-a5a3-22029dee2203@redhat.com>
Date: Wed, 3 May 2017 16:57:19 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Xiao Guangrong <guangrong.xiao@...il.com>, mtosatti@...hat.com,
avi.kivity@...il.com, rkrcmar@...hat.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
qemu-devel@...gnu.org, Xiao Guangrong <xiaoguangrong@...cent.com>
Subject: Re: [PATCH 0/7] KVM: MMU: fast write protect
On 03/05/2017 16:50, Xiao Guangrong wrote:
> Furthermore, userspace has no knowledge about whether PML is enabled (it
> can be queried from sysfs, but that is not a good way for QEMU), so it is
> difficult for userspace to know when to use write-protect-all.
> Maybe we can make KVM_CAP_X86_WRITE_PROTECT_ALL_MEM return false if
> PML is enabled?
Yes, that's a good idea. Though it's a pity that, with PML, setting the
dirty bit will still do the massive walk of the rmap. At least with
reset_dirty_pages it's done a little bit at a time.
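For reference, probing such a capability from userspace would just be a
KVM_CHECK_EXTENSION call on /dev/kvm. A minimal sketch follows;
KVM_CAP_X86_WRITE_PROTECT_ALL_MEM is the capability proposed in this series
and has no allocated number yet, so the value used here is only a placeholder.

/*
 * Minimal sketch, not QEMU code: probe the proposed capability with
 * KVM_CHECK_EXTENSION and fall back when the kernel does not report it
 * (e.g. because PML is enabled).  KVM_CAP_X86_WRITE_PROTECT_ALL_MEM is
 * the capability proposed in this series; the number below is only a
 * placeholder since no value has been allocated yet.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#ifndef KVM_CAP_X86_WRITE_PROTECT_ALL_MEM
#define KVM_CAP_X86_WRITE_PROTECT_ALL_MEM 999	/* placeholder */
#endif

static int has_write_protect_all(int kvm_fd)
{
	/* KVM_CHECK_EXTENSION returns > 0 when the capability is present. */
	return ioctl(kvm_fd, KVM_CHECK_EXTENSION,
		     KVM_CAP_X86_WRITE_PROTECT_ALL_MEM) > 0;
}

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);

	if (kvm_fd < 0) {
		perror("open /dev/kvm");
		return 1;
	}
	if (has_write_protect_all(kvm_fd))
		printf("write-protect-all available\n");
	else
		printf("fall back to per-slot write protection\n");
	return 0;
}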
>> Also, I wonder how the alternative write protection mechanism would
>> affect performance of the dirty page ring buffer patches. You would do
>> the write protection of all memory at the end of
>> kvm_vm_ioctl_reset_dirty_pages. You wouldn't even need a separate
>> ioctl, which is nice. On the other hand, checkpoints would be more
>> frequent and most pages would be write-protected, so it would be more
>> expensive to rebuild the shadow page tables...
>
> Yup, write-protect-all can indeed improve reset_dirty_pages; I will
> apply your idea after reset_dirty_pages is merged.
>
> However, we still prefer to have a separate ioctl for write-protect-all,
> which cooperates with KVM_GET_DIRTY_LOG to improve live migration, since
> live migration should not always depend on checkpoints.
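For illustration, a rough sketch of how userspace could drive that
combination; KVM_WRITE_PROTECT_ALL_MEM is a hypothetical ioctl number
standing in for the one proposed in this series, while KVM_GET_DIRTY_LOG
and struct kvm_dirty_log are the existing API.

/*
 * Rough sketch of the intended userspace flow, not QEMU code: one ioctl
 * write-protects all guest memory, the guest runs for a while, then the
 * dirty bitmap is harvested per memslot with KVM_GET_DIRTY_LOG.
 * KVM_WRITE_PROTECT_ALL_MEM is a hypothetical ioctl number standing in
 * for the one proposed in this series.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define KVM_WRITE_PROTECT_ALL_MEM _IO(KVMIO, 0xff)	/* hypothetical */

int sync_dirty_log(int vm_fd, __u32 slot, void *bitmap)
{
	struct kvm_dirty_log log;

	memset(&log, 0, sizeof(log));
	log.slot = slot;
	log.dirty_bitmap = bitmap;	/* one bit per page, caller-sized */

	return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}

void migration_round(int vm_fd, __u32 slot, void *bitmap)
{
	/* Proposed: a single call instead of write-protecting each slot
	 * from the dirty-log path. */
	ioctl(vm_fd, KVM_WRITE_PROTECT_ALL_MEM, 0);

	/* ... let the guest run and send already-clean pages ... */

	sync_dirty_log(vm_fd, slot, bitmap);

	/* ... retransmit pages whose bits are set in the bitmap ... */
}

The point of the separate ioctl in this sketch is that write protection
happens once for all memory, rather than slot by slot while the dirty log
is being fetched.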
Ok, I plan to merge the dirty page ring buffer patches early in 4.13
development.
Paolo