Message-ID: <aC2Z1X0tcJiAMWSV@yzhao56-desk.sh.intel.com>
Date: Wed, 21 May 2025 17:16:05 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: Sean Christopherson <seanjc@...gle.com>
CC: Paolo Bonzini <pbonzini@...hat.com>, <kvm@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, Peter Xu <peterx@...hat.com>, Maxim Levitsky
	<mlevitsk@...hat.com>, Binbin Wu <binbin.wu@...ux.intel.com>, James Houghton
	<jthoughton@...gle.com>, Pankaj Gupta <pankaj.gupta@....com>
Subject: Re: [PATCH v3 5/6] KVM: Use mask of harvested dirty ring entries to
 coalesce dirty ring resets

On Fri, May 16, 2025 at 02:35:39PM -0700, Sean Christopherson wrote:
> Use "mask" instead of a dedicated boolean to track whether or not there
> is at least one to-be-reset entry for the current slot+offset.  In the
> body of the loop, mask is zero only on the first iteration, i.e. !mask is
> equivalent to first_round.
> 
> Opportunistically combine the adjacent "if (mask)" statements into a single
> if-statement.
> 
> No functional change intended.
> 
> Cc: Peter Xu <peterx@...hat.com>
> Cc: Yan Zhao <yan.y.zhao@...el.com>
> Cc: Maxim Levitsky <mlevitsk@...hat.com>
> Reviewed-by: Pankaj Gupta <pankaj.gupta@....com>
> Reviewed-by: James Houghton <jthoughton@...gle.com>
> Signed-off-by: Sean Christopherson <seanjc@...gle.com>
> ---
>  virt/kvm/dirty_ring.c | 60 +++++++++++++++++++++----------------------
>  1 file changed, 29 insertions(+), 31 deletions(-)
> 
> diff --git a/virt/kvm/dirty_ring.c b/virt/kvm/dirty_ring.c
> index 84c75483a089..54734025658a 100644
> --- a/virt/kvm/dirty_ring.c
> +++ b/virt/kvm/dirty_ring.c
> @@ -121,7 +121,6 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>  	u64 cur_offset, next_offset;
>  	unsigned long mask = 0;
>  	struct kvm_dirty_gfn *entry;
> -	bool first_round = true;
>  
>  	while (likely((*nr_entries_reset) < INT_MAX)) {
>  		if (signal_pending(current))
> @@ -141,42 +140,42 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>  		ring->reset_index++;
>  		(*nr_entries_reset)++;
>  
> -		/*
> -		 * While the size of each ring is fixed, it's possible for the
> -		 * ring to be constantly re-dirtied/harvested while the reset
> -		 * is in-progress (the hard limit exists only to guard against
> -		 * wrapping the count into negative space).
> -		 */
> -		if (!first_round)
> +		if (mask) {
> +			/*
> +			 * While the size of each ring is fixed, it's possible
> +			 * for the ring to be constantly re-dirtied/harvested
> +			 * while the reset is in-progress (the hard limit exists
> +			 * only to guard against the count becoming negative).
> +			 */
>  			cond_resched();
>  
> -		/*
> -		 * Try to coalesce the reset operations when the guest is
> -		 * scanning pages in the same slot.
> -		 */
> -		if (!first_round && next_slot == cur_slot) {
> -			s64 delta = next_offset - cur_offset;
> +			/*
> +			 * Try to coalesce the reset operations when the guest
> +			 * is scanning pages in the same slot.
> +			 */
> +			if (next_slot == cur_slot) {
> +				s64 delta = next_offset - cur_offset;
>  
> -			if (delta >= 0 && delta < BITS_PER_LONG) {
> -				mask |= 1ull << delta;
> -				continue;
> -			}
> +				if (delta >= 0 && delta < BITS_PER_LONG) {
> +					mask |= 1ull << delta;
> +					continue;
> +				}
>  
> -			/* Backwards visit, careful about overflows!  */
> -			if (delta > -BITS_PER_LONG && delta < 0 &&
> -			    (mask << -delta >> -delta) == mask) {
> -				cur_offset = next_offset;
> -				mask = (mask << -delta) | 1;
> -				continue;
> +				/* Backwards visit, careful about overflows! */
> +				if (delta > -BITS_PER_LONG && delta < 0 &&
> +				    (mask << -delta >> -delta) == mask) {
> +					cur_offset = next_offset;
> +					mask = (mask << -delta) | 1;
> +					continue;
> +				}
>  			}
> -		}
>  
> -		/*
> -		 * Reset the slot for all the harvested entries that have been
> -		 * gathered, but not yet fully processed.
> -		 */
> -		if (mask)
> +			/*
> +			 * Reset the slot for all the harvested entries that
> +			 * have been gathered, but not yet fully processed.
> +			 */
>  			kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);
Nit, and feel free to ignore it :)

Would it be better to move the cond_resched() to here, i.e., so that it executes
at most once every 64 entries?
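
Something like this (untested, only to illustrate the idea; coalesced entries
would skip the resched via the "continue" paths, so it would only run when a
batch is actually flushed):

		if (mask) {
			/*
			 * Try to coalesce the reset operations when the guest
			 * is scanning pages in the same slot.
			 */
			if (next_slot == cur_slot) {
				s64 delta = next_offset - cur_offset;

				if (delta >= 0 && delta < BITS_PER_LONG) {
					mask |= 1ull << delta;
					continue;
				}

				/* Backwards visit, careful about overflows! */
				if (delta > -BITS_PER_LONG && delta < 0 &&
				    (mask << -delta >> -delta) == mask) {
					cur_offset = next_offset;
					mask = (mask << -delta) | 1;
					continue;
				}
			}

			/*
			 * Reset the slot for all the harvested entries that
			 * have been gathered, but not yet fully processed.
			 */
			kvm_reset_dirty_gfn(kvm, cur_slot, cur_offset, mask);

			/*
			 * Resched only after flushing a coalesced batch, i.e.
			 * at most once per BITS_PER_LONG harvested entries.
			 */
			cond_resched();
		}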

> +		}
>  
>  		/*
>  		 * The current slot was reset or this is the first harvested
> @@ -185,7 +184,6 @@ int kvm_dirty_ring_reset(struct kvm *kvm, struct kvm_dirty_ring *ring,
>  		cur_slot = next_slot;
>  		cur_offset = next_offset;
>  		mask = 1;
> -		first_round = false;
>  	}
>  
>  	/*
> -- 
> 2.49.0.1112.g889b7c5bd8-goog
> 
