Message-ID: <71228787-1cbc-4287-87a3-cda9aabcca3f@linux.intel.com>
Date: Tue, 13 May 2025 09:25:39 +0800
From: Binbin Wu <binbin.wu@...ux.intel.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Peter Xu <peterx@...hat.com>,
Yan Zhao <yan.y.zhao@...el.com>, Maxim Levitsky <mlevitsk@...hat.com>
Subject: Re: [PATCH v2 1/5] KVM: Bound the number of dirty ring entries in a
single reset at INT_MAX
On 5/8/2025 10:10 PM, Sean Christopherson wrote:
> Cap the number of ring entries that are reset in a single ioctl to INT_MAX
> to ensure userspace isn't confused by a wrap into negative space, and so
> that, in a truly pathological scenario, KVM doesn't miss a TLB flush due
> to the count wrapping to zero. While the size of the ring is fixed at
> 0x10000 entries and KVM (currently) supports at most 4096, userspace is
> allowed to harvest entries from the ring while the reset is in-progress,
> i.e. it's possible for the ring to always have harvested entries.
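Just to spell it out, the capping presumably boils down to something like
the sketch below (names and details are illustrative only, the actual
dirty_ring.c hunk is elided at the end of this mail):

	int count = 0;

	while (count < INT_MAX) {
		struct kvm_dirty_gfn *entry =
			&ring->dirty_gfns[ring->reset_index & (ring->size - 1)];

		if (!kvm_dirty_gfn_harvested(entry))
			break;

		/* ... clear the harvested flag, advance reset_index ... */
		count++;
	}
	*nr_entries_reset = count;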
>
> Opportunistically return an actual error code from the helper so that a
> future fix to handle pending signals can gracefully return -EINTR.
>
> Cc: Peter Xu <peterx@...hat.com>
> Cc: Yan Zhao <yan.y.zhao@...el.com>
> Cc: Maxim Levitsky <mlevitsk@...hat.com>
> Fixes: fb04a1eddb1a ("KVM: X86: Implement ring-based dirty memory tracking")
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
> include/linux/kvm_dirty_ring.h | 8 +++++---
> virt/kvm/dirty_ring.c | 10 +++++-----
> virt/kvm/kvm_main.c | 9 ++++++---
> 3 files changed, 16 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> index da4d9b5f58f1..ee61ff6c3fe4 100644
> --- a/include/linux/kvm_dirty_ring.h
> +++ b/include/linux/kvm_dirty_ring.h
> @@ -49,9 +49,10 @@ static inline int kvm_dirty_ring_alloc(struct kvm *kvm, struct kvm_dirty_ring *r
> }
>
> static inline int kvm_dirty_ring_reset(struct kvm *kvm,
> - struct kvm_dirty_ring *ring)
> + struct kvm_dirty_ring *ring,
> + int *nr_entries_reset)
> {
> - return 0;
> + return -ENOENT;
> }
>
> static inline void kvm_dirty_ring_push(struct kvm_vcpu *vcpu,
> @@ -82,7 +83,8 @@ int kvm_dirty_ring_alloc(struct kvm *kvm, struct kvm_dirty_ring *ring,
> * called with kvm->slots_lock held, returns the number of
> * processed pages.
> */
The comment should be updated as well, since the return value is no longer
the number of processed pages.
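Maybe something along these lines (just a suggestion):

	/*
	 * called with kvm->slots_lock held, returns 0 on success or a
	 * negative errno on failure, and fills @nr_entries_reset with
	 * the number of entries reset.
	 */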
> -int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring);
> +int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
> + int *nr_entries_reset);
>
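On the caller side I'd expect the shape to be roughly as below (sketch
only, the kvm_main.c hunk is elided):

	unsigned long i;
	struct kvm_vcpu *vcpu;
	int cleared = 0, r = 0;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		r = kvm_dirty_ring_reset(kvm, &vcpu->dirty_ring, &cleared);
		if (r)
			break;
	}

	if (cleared)
		kvm_flush_remote_tlbs(kvm);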
[...]