Message-ID: <24cf519e-5efa-85a7-9bc0-9be15957eb0a@redhat.com>
Date: Wed, 4 Dec 2019 11:14:19 +0100
From: Paolo Bonzini <pbonzini@...hat.com>
To: Sean Christopherson <sean.j.christopherson@...el.com>,
Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
On 03/12/19 20:13, Sean Christopherson wrote:
> The setting of as_id is wrong, both with and without a vCPU. as_id should
> come from slot->as_id.
Which doesn't exist, but is an excellent suggestion nevertheless.
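Something along these lines would do it, I think (sketch only, untested;
the new field and the place where it gets filled in are my assumptions,
not part of the patch):

	/* Record the address space id in the memslot itself, so the
	 * dirty-ring push can use slot->as_id instead of deriving it
	 * from the vCPU (or defaulting to 0 without a vCPU context). */
	struct kvm_memory_slot {
		/* ... existing fields ... */
		u16 as_id;	/* new: address space of this slot */
	};

	/* in __kvm_set_memory_region(), while setting up the new slot: */
	new.as_id = mem->slot >> 16;	/* as_id is in the high bits */

	/* ...and the push side then becomes: */
	ret = kvm_dirty_ring_push(ring, indexes,
				  ((u32)slot->as_id << 16) | slot->id,
				  offset, is_vm_ring);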
>> +		/*
>> +		 * Put onto per vm ring because no vcpu context. Kick
>> +		 * vcpu0 if ring is full.
>> +		 */
>> +		vcpu = kvm->vcpus[0];
>
> Is this a rare event?
Yes: on every vCPU exit, the vCPU is supposed to reap the VM ring as well.
(Most of the time it will be empty; and while reaping VM ring entries
needs locking, the emptiness check doesn't.)
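Roughly like this (sketch only, untested, and the field/lock names are
simplified assumptions rather than what the patch actually uses):

	/*
	 * Sketch: reap the per-VM ring on vCPU exit, taking the lock
	 * only when the emptiness check says there is something to
	 * collect.  The lockless check is racy but safe: a producer
	 * that races with it and fills the ring will kick us again.
	 */
	static void kvm_try_reap_vm_dirty_ring(struct kvm *kvm)
	{
		struct kvm_dirty_ring *ring = &kvm->vm_dirty_ring;

		if (READ_ONCE(ring->dirty_index) == READ_ONCE(ring->reset_index))
			return;		/* empty, no locking needed */

		spin_lock(&kvm->vm_dirty_ring_lock);	/* assumed lock */
		/* ... collect entries between reset_index and dirty_index ... */
		spin_unlock(&kvm->vm_dirty_ring_lock);
	}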
Paolo
>> +		ring = &kvm->vm_dirty_ring;
>> +		indexes = &kvm->vm_run->vm_ring_indexes;
>> +		is_vm_ring = true;
>> +	}
>> +
>> +	ret = kvm_dirty_ring_push(ring, indexes,
>> +				  (as_id << 16)|slot->id, offset,
>> +				  is_vm_ring);
>> +	if (ret < 0) {
>> +		if (is_vm_ring)
>> +			pr_warn_once("per-vm dirty log overflow\n");
>> +		else
>> +			pr_warn_once("vcpu %d dirty log overflow\n",
>> +				     vcpu->vcpu_id);
>> +		return;
>> +	}
>> +
>> +	if (ret)
>> +		kvm_make_request(KVM_REQ_DIRTY_RING_FULL, vcpu);
>> +}
>
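For completeness, the value pushed above packs the address space id into
the high 16 bits of the slot argument; the harvesting side would split it
back out roughly like this (illustrative only, the actual ring entry
layout is whatever the patch defines):

	u32 packed  = (as_id << 16) | slot->id;
	u16 as      = packed >> 16;		/* address space id */
	u16 slot_id = packed & 0xffff;		/* memslot id within it */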