Date:   Wed, 3 May 2017 14:28:16 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     guangrong.xiao@...il.com, mtosatti@...hat.com,
        avi.kivity@...il.com, rkrcmar@...hat.com
Cc:     kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        qemu-devel@...gnu.org, Xiao Guangrong <xiaoguangrong@...cent.com>
Subject: Re: [PATCH 0/7] KVM: MMU: fast write protect

So if I understand correctly, this relies on userspace doing:

	1) KVM_GET_DIRTY_LOG without write protect
	2) KVM_WRITE_PROTECT_ALL_MEM
	<only look now at the dirty log snapshot>

Writes may happen between 1 and 2; they are not represented in the live
dirty bitmap, but that is fine because the pages they touch are already
marked in the snapshot, and the snapshot is only consumed after 2.  This
is similar to what the dirty page ring buffer patches do; in fact, the
KVM_WRITE_PROTECT_ALL_MEM ioctl is very similar to KVM_RESET_DIRTY_PAGES
in those patches.
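
For reference, a minimal userspace sketch of that ordering, against the
uapi headers patched by this series (KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT,
the flags field and KVM_WRITE_PROTECT_ALL_MEM come from the series;
whether the latter takes an argument, and the error handling, are my
assumptions):

#include <sys/ioctl.h>
#include <errno.h>
#include <linux/kvm.h>

/* Sketch only, not the actual userspace code from the series. */
static int sync_dirty_log_deferred_wp(int vm_fd, __u32 slot, void *bitmap)
{
        struct kvm_dirty_log d = { 0 };

        d.slot = slot;
        d.dirty_bitmap = bitmap;
        /* 1) fetch the dirty log, but leave the sptes writable */
        d.flags = KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT;
        if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &d) < 0)
                return -errno;

        /* 2) write protect all memory in one shot */
        if (ioctl(vm_fd, KVM_WRITE_PROTECT_ALL_MEM, 0) < 0)
                return -errno;

        /* only now is it safe to consume the snapshot in bitmap */
        return 0;
}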

On 03/05/2017 12:52, guangrong.xiao@...il.com wrote:
> Compared with the ordinary algorithm, which write protects the last
> level sptes one by one based on the rmap, it simply updates the
> generation number to ask all vCPUs to reload their root page tables;
> in particular, this can be done outside of mmu-lock, so it does not
> hurt the vMMU's parallelism.

This is clever.
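
Roughly, I read the fast path as something like the following (a sketch
with illustrative names only: wp_all_generation is an invented field,
and whether the patches use a reload request or a check on vcpu entry
is an assumption on my part):

#include <linux/kvm_host.h>

/* Sketch of the idea, not the actual patch: bump a global
 * write-protect generation outside mmu_lock and ask every vCPU to
 * reload its root; sptes are then write protected lazily as the
 * roots are rebuilt. */
static void write_protect_all_fast(struct kvm *kvm)
{
        atomic_inc(&kvm->wp_all_generation);      /* no mmu_lock needed */
        kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
}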

For processors that have PML, write protection is only done on large
pages, and only to split them; it is not used for dirty page tracking
at 4K granularity.  In that case, I think the new write-protect-all
ioctl should do nothing?
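
i.e. something along these lines in the ioctl handler (sketch only; the
check on kvm_x86_ops->slot_enable_log_dirty mirrors what
kvm_mmu_slot_apply_flags does today, but treat the placement as an
assumption, and write_protect_all_fast is just the sketch above):

#include <linux/kvm_host.h>

/* Sketch: with PML, 4K-granularity dirty tracking does not need write
 * protection at all, so the new ioctl could simply bail out and leave
 * huge-page splitting to the existing paths. */
static int kvm_vm_ioctl_write_protect_all(struct kvm *kvm)
{
        if (kvm_x86_ops->slot_enable_log_dirty)    /* PML in use */
                return 0;

        write_protect_all_fast(kvm);               /* sketch above */
        return 0;
}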

Also, I wonder how the alternative write protection mechanism would
affect performance of the dirty page ring buffer patches.  You would
write protect all memory at the end of kvm_vm_ioctl_reset_dirty_pages;
you wouldn't even need a separate ioctl, which is nice.  On the other
hand, checkpoints would be more frequent and most pages would be
write-protected, so rebuilding the shadow page tables would be more
expensive...
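
Concretely, something like this at the end of the reset handler
(kvm_vm_ioctl_reset_dirty_pages is from the ring buffer series and not
upstream; kvm_reset_all_dirty_rings is an invented helper, and
write_protect_all_fast is the sketch above, so this is only an
illustration):

/* Sketch: after the dirty rings have been harvested and reset, write
 * protect everything in one pass instead of page by page.  The body
 * of the reset itself is elided. */
static int kvm_vm_ioctl_reset_dirty_pages(struct kvm *kvm)
{
        int cleared = kvm_reset_all_dirty_rings(kvm);   /* invented helper */

        if (cleared)
                write_protect_all_fast(kvm);            /* sketch above */

        return cleared;
}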

Thanks,

Paolo

> @@ -490,6 +511,7 @@ static int kvm_physical_sync_dirty_bitmap(KVMMemoryListener *kml,
>          memset(d.dirty_bitmap, 0, allocated_size);
>  
>          d.slot = mem->slot | (kml->as_id << 16);
> +        d.flags = kvm_write_protect_all ? KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT : 0;
>          if (kvm_vm_ioctl(s, KVM_GET_DIRTY_LOG, &d) == -1) {
>              DPRINTF("ioctl failed %d\n", errno);
>              ret = -1;

How would this work when kvm_physical_sync_dirty_bitmap is called from
memory_region_sync_dirty_bitmap rather than
memory_region_global_dirty_log_sync?
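
Just to illustrate the concern: both callers end up in
kvm_physical_sync_dirty_bitmap, but only the global sync is followed by
a write-protect-all pass.  One hypothetical way to keep them apart
(the global_sync parameter is invented and does not exist in QEMU;
kvm_write_protect_all and KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT are the
ones this series adds to kvm-all.c and the headers):

/* Sketch: only the global dirty log sync asks for the "no write
 * protect" behaviour; the per-region sync keeps the old semantics. */
static uint32_t kvm_dirty_log_flags(bool global_sync)
{
    return (global_sync && kvm_write_protect_all)
           ? KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT : 0;
}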

Thanks,

Paolo
