Message-ID: <aCzUIsn1ZF2lEOJ-@x1.local>
Date: Tue, 20 May 2025 15:12:34 -0400
From: Peter Xu <peterx@...hat.com>
To: Sean Christopherson <seanjc@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, Yan Zhao <yan.y.zhao@...el.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
Binbin Wu <binbin.wu@...ux.intel.com>,
James Houghton <jthoughton@...gle.com>,
Pankaj Gupta <pankaj.gupta@....com>
Subject: Re: [PATCH v3 0/6] KVM: Dirty ring fixes and cleanups
On Fri, May 16, 2025 at 02:35:34PM -0700, Sean Christopherson wrote:
> Sean Christopherson (6):
> KVM: Bound the number of dirty ring entries in a single reset at
> INT_MAX
> KVM: Bail from the dirty ring reset flow if a signal is pending
> KVM: Conditionally reschedule when resetting the dirty ring
> KVM: Check for empty mask of harvested dirty ring entries in caller
> KVM: Use mask of harvested dirty ring entries to coalesce dirty ring
> resets
> KVM: Assert that slots_lock is held when resetting per-vCPU dirty
> rings
For the last one, I'd think it's mainly because of the memslot accesses
(otherwise CONFIG_LOCKDEP=y should already yell on resets?).  The
"serialization of concurrent RESETs" part could be a good side effect.
After all, the dirty rings rely a lot on userspace doing the right
thing.. for example, userspace had better also remember to reset the
rings before any slot change, or it could collect a dirty pfn with a
slot index that was already removed and reused for a new slot (a rough
sketch of the expected ordering below)..
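
To illustrate that ordering, a rough sketch only, not taken from any real
VMM: collect_dirty_gfn(), change_memslot(), vcpu_ring, fetch_index and
ring_entries are made-up placeholders; the struct, flags and ioctls are
the KVM dirty ring API.  Only one vCPU's ring is shown, a real VMM would
walk every vCPU's ring first.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Placeholder for wherever the VMM records a dirty gfn. */
extern void collect_dirty_gfn(uint32_t slot, uint64_t offset);

/* Harvest one vCPU's dirty ring: record each published entry and flag it
 * for recycling, stopping at the first entry KVM has not published yet. */
static void harvest_ring(struct kvm_dirty_gfn *ring, uint32_t *fetch_index,
                         uint32_t ring_entries)
{
        for (;;) {
                struct kvm_dirty_gfn *gfn = &ring[*fetch_index % ring_entries];

                if (!(__atomic_load_n(&gfn->flags, __ATOMIC_ACQUIRE) &
                      KVM_DIRTY_GFN_F_DIRTY))
                        break;

                /* gfn->slot is (as_id << 16) | slot_id, gfn->offset is the
                 * gfn within that slot. */
                collect_dirty_gfn(gfn->slot, gfn->offset);

                /* Mark as harvested so KVM_RESET_DIRTY_RINGS can recycle it. */
                __atomic_store_n(&gfn->flags, KVM_DIRTY_GFN_F_RESET,
                                 __ATOMIC_RELEASE);
                (*fetch_index)++;
        }
}

static void change_memslot(int vm_fd, struct kvm_userspace_memory_region *region,
                           struct kvm_dirty_gfn *vcpu_ring, uint32_t *fetch_index,
                           uint32_t ring_entries)
{
        /* Harvest + reset *before* deleting/moving the slot... */
        harvest_ring(vcpu_ring, fetch_index, ring_entries);
        ioctl(vm_fd, KVM_RESET_DIRTY_RINGS);
        /* ...only then change the memslot layout. */
        ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, region);
}

If the slot is changed first, any slot index still sitting in the ring no
longer matches anything, or worse, matches a new slot reusing that index.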
Maybe we could swap those two sentences in the comment of the last patch,
but it's not a huge deal.
Reviewed-by: Peter Xu <peterx@...hat.com>
Thanks!
--
Peter Xu