Message-ID: <YXCqo6XXIkyOb4IE@google.com>
Date:   Wed, 20 Oct 2021 23:47:47 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     "Maciej S. Szmigiero" <mail@...iej.szmigiero.name>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Igor Mammedov <imammedo@...hat.com>,
        Marc Zyngier <maz@...nel.org>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Huacai Chen <chenhuacai@...nel.org>,
        Aleksandar Markovic <aleksandar.qemu.devel@...il.com>,
        Paul Mackerras <paulus@...abs.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Janosch Frank <frankja@...ux.ibm.com>,
        David Hildenbrand <david@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>,
        Claudio Imbrenda <imbrenda@...ux.ibm.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 12/13] KVM: Optimize gfn lookup in kvm_zap_gfn_range()

On Mon, Sep 20, 2021, Maciej S. Szmigiero wrote:

Some mechanical comments while they're on my mind; I'll get back to a full review
tomorrow.

> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6433efff447a..9ae5f7341cf5 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -833,6 +833,75 @@ struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
>  	return NULL;
>  }
>  
> +static inline
> +struct rb_node *kvm_memslots_gfn_upper_bound(struct kvm_memslots *slots, gfn_t gfn)

Function attributes should go on the same line as the function unless there's a
really good reason to do otherwise.

In this case, I would honestly just drop the helper.  It's really hard to express
what this function does in a name that isn't absurdly long, and there's exactly
one user at the end of the series.

https://lkml.kernel.org/r/20210930192417.1332877-1-keescook@chromium.org

> +{
> +	int idx = slots->node_idx;
> +	struct rb_node *node, *result = NULL;
> +
> +	for (node = slots->gfn_tree.rb_node; node; ) {
> +		struct kvm_memory_slot *slot;

My personal preference is to put declarations outside of the for loop.  I find it
easier to read, and it's harder to have shadowing issues when all variables are
declared at the top, especially with relatively generic names.
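To illustrate, here's a toy upper-bound search over a sorted array instead of an
rb-tree (names and types are made up for the sketch, not from the patch) with the
cursor declared at the top of the function rather than inside the loop:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Find the first element strictly greater than @key in a sorted array.
 * All declarations sit at the top of the function, mirroring the style
 * preference above; nothing inside the loop can shadow them.
 */
static const int *upper_bound(const int *arr, size_t len, int key)
{
	const int *result = NULL;
	size_t i;

	for (i = 0; i < len; i++) {
		if (key < arr[i]) {
			result = &arr[i];
			break;
		}
	}

	return result;
}
```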

> +
> +		slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
> +		if (gfn < slot->base_gfn) {
> +			result = node;
> +			node = node->rb_left;
> +		} else

Needs braces since the "if" has braces.
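For reference, a hypothetical binary-search variant of the same lookup showing the
rule: since the "if" arm has two statements and therefore needs braces, the "else"
arm gets braces as well, per the kernel coding style.  Purely illustrative, not
from the patch:

```c
#include <assert.h>

/* Return the index of the first element strictly greater than @key,
 * or -1 if there is none. */
static int upper_bound_idx(const int *arr, int n, int key)
{
	int lo = 0, hi = n, mid, result = -1;

	while (lo < hi) {
		mid = lo + (hi - lo) / 2;
		if (key < arr[mid]) {
			result = mid;
			hi = mid;
		} else {
			lo = mid + 1;	/* braced to match the "if" arm */
		}
	}

	return result;
}
```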

> +			node = node->rb_right;
> +	}
> +
> +	return result;
> +}
> +
> +static inline
> +struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t start)

The kvm_for_each_in_gfn prefix is _really_ confusing.  I get that these are all
helpers for "kvm_for_each_memslot...", but it's hard not to think these are all
iterators on their own.  I would gladly sacrifice namespacing for readability in
this case.

I also wouldn't worry about capturing the details.  For most folks reading this
code, the important part is understanding the control flow of
kvm_for_each_memslot_in_gfn_range().  Capturing the under-the-hood details in the
name isn't a priority since anyone modifying this code is going to have to do a
lot of staring no matter what :-)

> +static inline
> +bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
> +{
> +	struct kvm_memory_slot *memslot;
> +
> +	memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
> +
> +	/*
> +	 * If this slot starts beyond or at the end of the range so does
> +	 * every next one
> +	 */
> +	return memslot->base_gfn >= end;
> +}
> +
> +/* Iterate over each memslot *possibly* intersecting [start, end) range */
> +#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
> +	for (node = kvm_for_each_in_gfn_first(slots, start);		\
> +	     node && !kvm_for_each_in_gfn_no_more(slots, node, end);	\

I think it makes sense to move the NULL check into the validation helper?  We had
a similar case in KVM's legacy MMU where a "null" check was left to the caller,
and it ended up with a bunch of redundant and confusing code.  I don't see that
happening here, but at the same time it's odd for the validator to not sanity
check @node.

> +	     node = rb_next(node))					\

It's silly, but I'd add a wrapper for this one, just to make it easy to follow
the control flow.

Maybe this as a delta?  I'm definitely not set on the names; I was just trying to
find something that's short and to the point.

---
 include/linux/kvm_host.h | 60 +++++++++++++++++++++-------------------
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 9ae5f7341cf5..a88bd5d9e4aa 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -833,36 +833,29 @@ struct kvm_memory_slot *id_to_memslot(struct kvm_memslots *slots, int id)
 	return NULL;
 }

-static inline
-struct rb_node *kvm_memslots_gfn_upper_bound(struct kvm_memslots *slots, gfn_t gfn)
+static inline struct rb_node *kvm_get_first_node(struct kvm_memslots *slots,
+						 gfn_t start)
 {
+	struct kvm_memory_slot *slot;
+	struct rb_node *node, *tmp;
 	int idx = slots->node_idx;
-	struct rb_node *node, *result = NULL;
-
-	for (node = slots->gfn_tree.rb_node; node; ) {
-		struct kvm_memory_slot *slot;
-
-		slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
-		if (gfn < slot->base_gfn) {
-			result = node;
-			node = node->rb_left;
-		} else
-			node = node->rb_right;
-	}
-
-	return result;
-}
-
-static inline
-struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t start)
-{
-	struct rb_node *node;

 	/*
 	 * Find the slot with the lowest gfn that can possibly intersect with
 	 * the range, so we'll ideally have slot start <= range start
 	 */
-	node = kvm_memslots_gfn_upper_bound(slots, start);
+	node = NULL;
+	for (tmp = slots->gfn_tree.rb_node; tmp; ) {
+
+		slot = container_of(tmp, struct kvm_memory_slot, gfn_node[idx]);
+		if (start < slot->base_gfn) {
+			node = tmp;
+			tmp = tmp->rb_left;
+		} else {
+			tmp = tmp->rb_right;
+		}
+	}
+
 	if (node) {
 		struct rb_node *pnode;

@@ -882,12 +875,16 @@ struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t star
 	return node;
 }

-static inline
-bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
+static inline bool kvm_is_last_node(struct kvm_memslots *slots,
+				    struct rb_node *node, gfn_t end)
 {
 	struct kvm_memory_slot *memslot;

-	memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
+	if (!node)
+		return true;
+
+	memslot = container_of(node, struct kvm_memory_slot,
+			       gfn_node[slots->node_idx]);

 	/*
 	 * If this slot starts beyond or at the end of the range so does
@@ -896,11 +893,16 @@ bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *nod
 	return memslot->base_gfn >= end;
 }

+static inline struct rb_node *kvm_get_next_node(struct rb_node *node)
+{
+	return rb_next(node);
+}
+
 /* Iterate over each memslot *possibly* intersecting [start, end) range */
 #define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
-	for (node = kvm_for_each_in_gfn_first(slots, start);		\
-	     node && !kvm_for_each_in_gfn_no_more(slots, node, end);	\
-	     node = rb_next(node))					\
+	for (node = kvm_get_first_node(slots, start);			\
+	     !kvm_is_last_node(slots, node, end);			\
+	     node = kvm_get_next_node(node))				\

 /*
  * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
--
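In case it helps to see the intended control flow end to end, here's a
self-contained toy model of the iterator: a sorted array of [base, base + npages)
ranges stands in for the gfn tree, and first_node/is_last_node mirror the helpers
above.  All names here are invented for the sketch; this is not kernel code.

```c
#include <assert.h>
#include <stddef.h>

struct slot { int base, npages; };

/* Find the slot with the largest base <= @start, falling back to the
 * first slot (which may still intersect the range), like the
 * step-back-one logic in kvm_get_first_node(). */
static const struct slot *first_node(const struct slot *s, size_t n, int start)
{
	size_t i;

	for (i = n; i > 0; i--) {
		if (s[i - 1].base <= start)
			return &s[i - 1];
	}

	return n ? &s[0] : NULL;
}

/* NULL/end-of-array sanity check folded in, as suggested for
 * kvm_is_last_node(). */
static int is_last_node(const struct slot *s, size_t n,
			const struct slot *node, int end)
{
	if (!node || node == &s[n])
		return 1;

	/* A slot starting at or beyond the end of the range ends the walk. */
	return node->base >= end;
}

/* Iterate over each slot *possibly* intersecting [start, end) */
#define for_each_slot_in_range(node, s, n, start, end)		\
	for (node = first_node(s, n, start);			\
	     !is_last_node(s, n, node, end);			\
	     node++)

static int count_in_range(const struct slot *s, size_t n, int start, int end)
{
	const struct slot *node;
	int cnt = 0;

	for_each_slot_in_range(node, s, n, start, end)
		cnt++;

	return cnt;
}
```

As with the real macro, the first slot visited may start before the range and not
actually intersect it; callers are expected to tolerate that.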
