Message-ID: <1355422f-ab62-9dc3-2b48-71a6e221786b@redhat.com>
Date: Wed, 4 Dec 2019 18:38:35 +0800
From: Jason Wang <jasowang@...hat.com>
To: Peter Xu <peterx@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org
Cc: Sean Christopherson <sean.j.christopherson@...el.com>,
Paolo Bonzini <pbonzini@...hat.com>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>
Subject: Re: [PATCH RFC 04/15] KVM: Implement ring-based dirty memory tracking
On 2019/11/30 5:34 AM, Peter Xu wrote:
> +int kvm_dirty_ring_push(struct kvm_dirty_ring *ring,
> +			struct kvm_dirty_ring_indexes *indexes,
> +			u32 slot, u64 offset, bool lock)
> +{
> +	int ret;
> +	struct kvm_dirty_gfn *entry;
> +
> +	if (lock)
> +		spin_lock(&ring->lock);
> +
> +	if (kvm_dirty_ring_full(ring)) {
> +		ret = -EBUSY;
> +		goto out;
> +	}
> +
> +	entry = &ring->dirty_gfns[ring->dirty_index & (ring->size - 1)];
> +	entry->slot = slot;
> +	entry->offset = offset;
I haven't gone through the whole series, so sorry if this is a silly
question, but I wonder whether things like this will suffer from an
issue on virtually tagged archs similar to the one mentioned in [1].
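
If it does, IIUC every kernel-side store would need an explicit flush
before userspace reads the entry. A sketch only (the helper name is
made up, and it assumes dirty_gfns sits in page-backed kmalloc memory;
vmalloc_to_page() would be needed for vmalloc memory):

#include <linux/mm.h>		/* virt_to_page() */
#include <asm/cacheflush.h>	/* flush_dcache_page() */

/* Illustrative only: make the kernel store visible via the user alias. */
static void kvm_dirty_gfn_flush(struct kvm_dirty_gfn *entry)
{
	/*
	 * On virtually tagged/aliasing caches the userspace mapping of
	 * this page may not observe the kernel store without a flush.
	 */
	flush_dcache_page(virt_to_page(entry));
}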
Would it be better to allocate the ring from userspace and pass it to
KVM instead? Then we could use the copy_to/from_user() friends (a
little bit slow on recent CPUs).
[1] https://lkml.org/lkml/2019/4/9/5
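
To make the idea concrete, a rough and untested sketch of the push
path in that scheme; the __user pointer and the function name are
made up for illustration (needs <linux/uaccess.h> for copy_to_user()):

static int kvm_dirty_ring_push_user(struct kvm_dirty_ring *ring,
				    struct kvm_dirty_gfn __user *dirty_gfns,
				    u32 slot, u64 offset)
{
	struct kvm_dirty_gfn entry = {
		.slot	= slot,
		.offset	= offset,
	};
	u32 index = ring->dirty_index & (ring->size - 1);

	/*
	 * copy_to_user() goes through the userspace mapping, so there
	 * is no kernel alias of the page to keep coherent.
	 */
	if (copy_to_user(&dirty_gfns[index], &entry, sizeof(entry)))
		return -EFAULT;

	return 0;
}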
Thanks
> +	smp_wmb();
> +	ring->dirty_index++;
> +	WRITE_ONCE(indexes->avail_index, ring->dirty_index);
> +	ret = kvm_dirty_ring_used(ring) >= ring->soft_limit;
> +	pr_info("%s: slot %u offset %llu used %u\n",
> +		__func__, slot, offset, kvm_dirty_ring_used(ring));
> +
> +out: