Message-ID: <20191203191328.GD19877@linux.intel.com>
Date: Tue, 3 Dec 2019 11:13:28 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
Paolo Bonzini <pbonzini@...hat.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
On Fri, Nov 29, 2019 at 04:34:54PM -0500, Peter Xu wrote:
> +static void mark_page_dirty_in_ring(struct kvm *kvm,
> +                                    struct kvm_vcpu *vcpu,
> +                                    struct kvm_memory_slot *slot,
> +                                    gfn_t gfn)
> +{
> +        u32 as_id = 0;
Redundant initialization of as_id.
> +        u64 offset;
> +        int ret;
> +        struct kvm_dirty_ring *ring;
> +        struct kvm_dirty_ring_indexes *indexes;
> +        bool is_vm_ring;
> +
> +        if (!kvm->dirty_ring_size)
> +                return;
> +
> +        offset = gfn - slot->base_gfn;
> +
> +        if (vcpu) {
> +                as_id = kvm_arch_vcpu_memslots_id(vcpu);
> +        } else {
> +                as_id = 0;
The setting of as_id is wrong, both with and without a vCPU.  as_id should
come from slot->as_id.  It may not actually be broken in the current code
base, but at best it's fragile, e.g. Ben's TDP MMU rewrite[*] adds a call
to mark_page_dirty_in_slot() with a potentially non-zero as_id.
[*] https://lkml.kernel.org/r/20190926231824.149014-25-bgardon@google.com
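
With slot->as_id in hand (sketch only; this assumes the memslot gains an
as_id field, as it does in Ben's series), the if/else juggling of as_id,
including the redundant initialization flagged above, collapses to a
single line:

        u32 as_id = slot->as_id;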
> +                vcpu = kvm_get_running_vcpu();
> +        }
> +
> +        if (vcpu) {
> +                ring = &vcpu->dirty_ring;
> +                indexes = &vcpu->run->vcpu_ring_indexes;
> +                is_vm_ring = false;
> +        } else {
> +                /*
> +                 * Put onto per vm ring because no vcpu context.  Kick
> +                 * vcpu0 if ring is full.
> +                 */
> +                vcpu = kvm->vcpus[0];
Is this a rare event?
> +                ring = &kvm->vm_dirty_ring;
> +                indexes = &kvm->vm_run->vm_ring_indexes;
> +                is_vm_ring = true;
> +        }
> +
> +        ret = kvm_dirty_ring_push(ring, indexes,
> +                                  (as_id << 16) | slot->id, offset,
> +                                  is_vm_ring);
> +        if (ret < 0) {
> +                if (is_vm_ring)
> +                        pr_warn_once("per-vm dirty log overflow\n");
> +                else
> +                        pr_warn_once("vcpu %d dirty log overflow\n",
> +                                     vcpu->vcpu_id);
> +                return;
> +        }
> +
> +        if (ret)
> +                kvm_make_request(KVM_REQ_DIRTY_RING_FULL, vcpu);
> +}
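
For context, here is a toy model of the ring-push contract the code above
relies on: a negative return means the ring hard-overflowed and the entry
was dropped, a positive return means the entry was pushed but the ring is
now soft-full and the vcpu should be kicked out to userspace, and zero
means a plain push.  The names, sizes, and thresholds below are made up
for illustration; this is not the actual kvm_dirty_ring_push() from this
series.

/* Toy userspace model of the dirty-ring push contract; illustrative only. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE     8  /* entries; a power of two */
#define SOFT_FULL_GAP 2  /* headroom left once "soft full" is reported */

struct toy_ring {
        uint32_t entries[RING_SIZE];
        uint32_t avail;  /* producer index (kernel side) */
        uint32_t fetch;  /* consumer index (userspace side) */
};

/* < 0: hard overflow, entry dropped.  > 0: pushed, but now soft-full. */
static int toy_ring_push(struct toy_ring *r, uint32_t entry)
{
        uint32_t used = r->avail - r->fetch;

        if (used >= RING_SIZE)
                return -1;

        r->entries[r->avail % RING_SIZE] = entry;
        r->avail++;

        return (used + 1 >= RING_SIZE - SOFT_FULL_GAP) ? 1 : 0;
}

int main(void)
{
        struct toy_ring r = { 0 };
        int i;

        /* With no consumer, pushes go 0, 0, ..., 1 (soft-full), then -1. */
        for (i = 0; i < 10; i++)
                printf("push %d -> %d\n", i, toy_ring_push(&r, i));
        return 0;
}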