Message-ID: <f386bca9-b967-2e76-c580-465463843aa4@redhat.com>
Date: Wed, 11 Dec 2019 15:16:30 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Christophe de Dinechin <dinechin@...hat.com>,
Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Sean Christopherson <sean.j.christopherson@...el.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 00/15] KVM: Dirty ring interface
On 11/12/19 14:41, Christophe de Dinechin wrote:
>
> Peter Xu writes:
>
>> Branch is here: https://github.com/xzpeter/linux/tree/kvm-dirty-ring
>>
>> Overview
>> ============
>>
>> This is a continuation of the work from Lei Cao <lei.cao@...atus.com>
>> and Paolo on the KVM dirty ring interface. To keep it simple, I'll
>> still start with version 1 as an RFC.
>>
>> The new dirty ring interface is another way to collect dirty pages
>> for the virtual machine, but it differs from the existing dirty
>> logging interface in a few major ways:
>>
>> - Data format: The dirty data is kept in a ring rather than in a
>> bitmap, so the amount of data to sync for dirty logging no longer
>> depends on the size of guest memory, but on the speed of dirtying.
>> Also, the dirty ring is per-vcpu (currently plus another per-vm
>> ring, so the total number of rings is N+1), while the dirty bitmap
>> is per-vm.
>
> I like Sean's suggestion to fetch rings when dirtying. That could reduce
> the number of dirty rings to examine.
What do you mean by "fetch rings"?
> Also, as is, this means that the same gfn may be present in multiple
> rings, right?
I think the actual marking of a page as dirty is protected by a spinlock
but I will defer to Peter on this.
Paolo
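
(For illustration of the per-vcpu layout being discussed: a minimal
sketch of what a ring of dirty GFNs could look like. The type and
field names below are hypothetical, not the uapi the series defines.)

    /* Hypothetical layout, for discussion only -- not the series' uapi. */
    #include <stdint.h>

    struct dirty_entry {
            uint32_t slot;          /* memslot the write hit             */
            uint64_t offset;        /* page offset (gfn) within the slot */
    };

    struct dirty_ring {
            uint32_t size;          /* number of entries                   */
            uint32_t head, tail;    /* producer (kernel) / consumer (user) */
            struct dirty_entry *entries;
    };

    /*
     * Since every vcpu owns its own ring (plus one per-vm ring for
     * non-vcpu writers), two vcpus writing the same gfn naturally
     * produce two entries, one in each ring -- the duplication
     * Christophe asks about above.
     */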
>>
>> - Data copy: Syncing dirty pages no longer needs a data copy;
>> instead the ring is shared between userspace and the kernel via
>> shared pages (mmap() on either the vm fd or the vcpu fd)
>>
>> - Interface: Instead of using the old KVM_GET_DIRTY_LOG and
>> KVM_CLEAR_DIRTY_LOG interfaces, the new ring uses a new interface
>> called KVM_RESET_DIRTY_RINGS when we want to reset the collected
>> dirty pages to protected mode again (it works like
>> KVM_CLEAR_DIRTY_LOG, but is ring based)
>>
>> And more.
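
(To make the two points above concrete, a rough userspace sketch of
collecting one vcpu's ring and then re-protecting the pages. The entry
layout, the mmap offset and collect_ring_entry() are placeholders;
only KVM_RESET_DIRTY_RINGS is the ioctl introduced by this series, so
the sketch assumes the kvm.h from the patched tree.)

    /* Sketch only -- layout and offsets are placeholders. */
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/kvm.h>      /* from this series, for KVM_RESET_DIRTY_RINGS */

    #define RING_MMAP_OFFSET 0  /* hypothetical; the series defines the real one */

    struct dirty_entry {        /* hypothetical, see the earlier sketch */
            uint32_t slot;
            uint64_t offset;
    };

    extern void collect_ring_entry(uint32_t slot, uint64_t offset); /* hypothetical */

    static int collect_one_ring(int vm_fd, int vcpu_fd, uint32_t ring_size)
    {
            size_t len = ring_size * sizeof(struct dirty_entry);
            struct dirty_entry *ring;
            uint32_t i;

            /* The ring is shared with the kernel by mmap()ing the vcpu fd. */
            ring = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                        vcpu_fd, RING_MMAP_OFFSET);
            if (ring == MAP_FAILED)
                    return -1;

            /*
             * A real collector would only walk the entries the kernel has
             * produced since the last reset (tracked via indices that are
             * also in the shared mapping); walking the whole ring keeps
             * this sketch short.
             */
            for (i = 0; i < ring_size; i++)
                    collect_ring_entry(ring[i].slot, ring[i].offset);

            /*
             * Put the collected pages back into protected mode, like
             * KVM_CLEAR_DIRTY_LOG but ring based (the new vm ioctl).
             */
            ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);

            munmap(ring, len);
            return 0;
    }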
>>
>> I would appreciate it if the reviewers could start with the patch
>> "KVM: Implement ring-based dirty memory tracking", especially its
>> documentation update, for the big picture. That way I can avoid
>> copying most of it into the cover letter again.
>>
>> I marked this series as RFC because I'm at least uncertain about
>> this change to vcpu_enter_guest():
>>
>> if (kvm_check_request(KVM_REQ_DIRTY_RING_FULL, vcpu)) {
>> vcpu->run->exit_reason = KVM_EXIT_DIRTY_RING_FULL;
>> /*
>> * If this is requested, it means that we've
>> * marked the dirty bit in the dirty ring BUT
>> * we've not written the date. Do it now.
>
> not written the "data" ?
>
>> */
>> r = kvm_emulate_instruction(vcpu, 0);
>> r = r >= 0 ? 0 : r;
>> goto out;
>> }
>>
>> I do a kvm_emulate_instruction() when the dirty ring reaches the
>> soft limit and we want to exit to userspace, however I'm not really
>> sure whether that could have any side effects. I'd appreciate any
>> comments on the above, or anything else.
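
(For context on the soft-full exit being discussed: a minimal sketch
of how a userspace run loop might react to the new exit reason.
KVM_RUN and struct kvm_run are the usual KVM interfaces;
KVM_EXIT_DIRTY_RING_FULL and KVM_RESET_DIRTY_RINGS come from this
series, and collect_dirty_rings() is a hypothetical helper.)

    /* Sketch; assumes the kvm.h from this series. */
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    extern void collect_dirty_rings(int vm_fd);     /* hypothetical helper */

    static void vcpu_run_loop(int vm_fd, int vcpu_fd, struct kvm_run *run)
    {
            for (;;) {
                    ioctl(vcpu_fd, KVM_RUN, 0);

                    switch (run->exit_reason) {
                    case KVM_EXIT_DIRTY_RING_FULL:
                            /*
                             * The ring hit the soft limit: harvest the
                             * entries and reset the rings before
                             * re-entering the guest, otherwise the ring
                             * would eventually go hard-full.
                             */
                            collect_dirty_rings(vm_fd);
                            ioctl(vm_fd, KVM_RESET_DIRTY_RINGS, 0);
                            break;
                    default:
                            /* ... other exit reasons ... */
                            break;
                    }
            }
    }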
>>
>> Tests
>> ===========
>>
>> I wanted to continue working on the QEMU part, but since I noticed
>> that the interface might still be prone to change, I posted this
>> series first. However, to make sure it's at least working, I've
>> provided unit tests together with the series. The unit tests should
>> exercise the series along at least three major paths:
>>
>> (1) ./dirty_log_test -M dirty-ring
>>
>> This tests async ring operations: this should be the major
>> working mode for the dirty ring interface, i.e. while the kernel
>> is queuing more data, userspace is collecting at the same time.
>> The ring can hardly become full when working like this, because
>> in most cases the collection should be fast.
>>
>> (2) ./dirty_log_test -M dirty-ring -c 1024
>>
>> This sets the ring size to be very small so that the soft-full
>> condition always triggers (soft-full is a soft limit on the ring
>> state; when the dirty ring reaches the soft limit it does a
>> userspace exit and lets userspace collect the data).
>>
>> (3) ./dirty_log_test -M dirty-ring-wait-queue
>>
>> This solely tests the extreme case where the ring is full. When
>> the ring is completely full, the thread (whether a vcpu thread or
>> not) will be put onto a per-vm waitqueue, and KVM_RESET_DIRTY_RINGS
>> will wake the threads up (by which point the ring is assumed to no
>> longer be full).
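
(Conceptually, the hard-full case above reduces to the usual kernel
wait/wake pattern; a sketch under that assumption only, not the
series' actual code -- the real per-vm waitqueue lives in the
"KVM: Introduce dirty ring wait queue" patch.)

    /* Conceptual sketch, not the series' code. */
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(dirty_ring_waitq);   /* per-vm in reality */

    static bool ring_has_space(void)
    {
            return true;    /* hypothetical: would check the ring indices */
    }

    /* Producer side: a thread that finds the ring completely full sleeps. */
    static int wait_for_ring_space(void)
    {
            return wait_event_interruptible(dirty_ring_waitq, ring_has_space());
    }

    /* KVM_RESET_DIRTY_RINGS side: once userspace has collected and reset
     * the rings, wake every sleeper, assuming the rings have room again. */
    static void dirty_ring_reset_done(void)
    {
            wake_up_all(&dirty_ring_waitq);
    }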
>
> Am I correct in assuming that guest memory can be dirtied by DMA operations?
> Should
>
> Not being that familiar with the current implementation of dirty page
> tracking, I wonder who marks the pages dirty in that case, and when?
> If the VM ring is used for I/O threads, isn't it possible that a large
> DMA could dirty a sufficiently large number of GFNs to overflow the
> associated ring? Does this case need a separate way to queue the
> dirtying I/O thread?
>
>>
>> Thanks,
>>
>> Cao, Lei (2):
>> KVM: Add kvm/vcpu argument to mark_dirty_page_in_slot
>> KVM: X86: Implement ring-based dirty memory tracking
>>
>> Paolo Bonzini (1):
>> KVM: Move running VCPU from ARM to common code
>>
>> Peter Xu (12):
>> KVM: Add build-time error check on kvm_run size
>> KVM: Implement ring-based dirty memory tracking
>> KVM: Make dirty ring exclusive to dirty bitmap log
>> KVM: Introduce dirty ring wait queue
>> KVM: selftests: Always clear dirty bitmap after iteration
>> KVM: selftests: Sync uapi/linux/kvm.h to tools/
>> KVM: selftests: Use a single binary for dirty/clear log test
>> KVM: selftests: Introduce after_vcpu_run hook for dirty log test
>> KVM: selftests: Add dirty ring buffer test
>> KVM: selftests: Let dirty_log_test async for dirty ring test
>> KVM: selftests: Add "-c" parameter to dirty log test
>> KVM: selftests: Test dirty ring waitqueue
>>
>> Documentation/virt/kvm/api.txt | 116 +++++
>> arch/arm/include/asm/kvm_host.h | 2 -
>> arch/arm64/include/asm/kvm_host.h | 2 -
>> arch/x86/include/asm/kvm_host.h | 5 +
>> arch/x86/include/uapi/asm/kvm.h | 1 +
>> arch/x86/kvm/Makefile | 3 +-
>> arch/x86/kvm/mmu/mmu.c | 6 +
>> arch/x86/kvm/vmx/vmx.c | 7 +
>> arch/x86/kvm/x86.c | 12 +
>> include/linux/kvm_dirty_ring.h | 67 +++
>> include/linux/kvm_host.h | 37 ++
>> include/linux/kvm_types.h | 1 +
>> include/uapi/linux/kvm.h | 36 ++
>> tools/include/uapi/linux/kvm.h | 47 ++
>> tools/testing/selftests/kvm/Makefile | 2 -
>> .../selftests/kvm/clear_dirty_log_test.c | 2 -
>> tools/testing/selftests/kvm/dirty_log_test.c | 452 ++++++++++++++++--
>> .../testing/selftests/kvm/include/kvm_util.h | 6 +
>> tools/testing/selftests/kvm/lib/kvm_util.c | 103 ++++
>> .../selftests/kvm/lib/kvm_util_internal.h | 5 +
>> virt/kvm/arm/arm.c | 29 --
>> virt/kvm/arm/perf.c | 6 +-
>> virt/kvm/arm/vgic/vgic-mmio.c | 15 +-
>> virt/kvm/dirty_ring.c | 156 ++++++
>> virt/kvm/kvm_main.c | 315 +++++++++++-
>> 25 files changed, 1329 insertions(+), 104 deletions(-)
>> create mode 100644 include/linux/kvm_dirty_ring.h
>> delete mode 100644 tools/testing/selftests/kvm/clear_dirty_log_test.c
>> create mode 100644 virt/kvm/dirty_ring.c
>
>
> --
> Cheers,
> Christophe de Dinechin (IRC c3d)
>