Message-ID: <YXGVwlNxaibZAnmC@google.com>
Date:   Thu, 21 Oct 2021 16:30:58 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     "Maciej S. Szmigiero" <mail@...iej.szmigiero.name>
Cc:     Paolo Bonzini <pbonzini@...hat.com>,
        Vitaly Kuznetsov <vkuznets@...hat.com>,
        Wanpeng Li <wanpengli@...cent.com>,
        Jim Mattson <jmattson@...gle.com>,
        Igor Mammedov <imammedo@...hat.com>,
        Marc Zyngier <maz@...nel.org>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>,
        Huacai Chen <chenhuacai@...nel.org>,
        Aleksandar Markovic <aleksandar.qemu.devel@...il.com>,
        Paul Mackerras <paulus@...abs.org>,
        Christian Borntraeger <borntraeger@...ibm.com>,
        Janosch Frank <frankja@...ux.ibm.com>,
        David Hildenbrand <david@...hat.com>,
        Cornelia Huck <cohuck@...hat.com>,
        Claudio Imbrenda <imbrenda@...ux.ibm.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 12/13] KVM: Optimize gfn lookup in kvm_zap_gfn_range()

On Thu, Oct 21, 2021, Maciej S. Szmigiero wrote:
> On 21.10.2021 01:47, Sean Christopherson wrote:
> > In this case, I would honestly just drop the helper.  It's really hard to express
> > what this function does in a name that isn't absurdly long, and there's exactly
> > one user at the end of the series.
> 
> The "upper bound" is a common name for a binary search operation that
> finds the first node that has its key strictly greater than the searched key.

Ah, that I did not know (obviously).  But I suspect that detail will be lost on
other readers as well, even if they are familiar with the terminology.
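
For other readers: the operation boils down to something like the sketch
below.  This is not the actual helper from the series, just an illustration
with made-up names that assumes the gfn_node[]/base_gfn layout used here;
it walks the gfn tree and remembers the leftmost node whose base_gfn is
strictly greater than the target gfn.

static struct rb_node *gfn_tree_upper_bound(struct rb_root *gfn_tree,
					    gfn_t gfn, int idx)
{
	struct rb_node *node = gfn_tree->rb_node;
	struct rb_node *result = NULL;

	while (node) {
		struct kvm_memory_slot *slot;

		slot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);
		if (slot->base_gfn > gfn) {
			/* Candidate; keep looking left for an earlier one. */
			result = node;
			node = node->rb_left;
		} else {
			node = node->rb_right;
		}
	}

	return result;
}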

> It can be integrated into its caller but I would leave a comment there
> describing what kind of operation that block of code does to aid in
> understanding the code.

Yeah, completely agree a comment would be wonderful.

> Although, to be honest, I don't quite get the reason for doing this
> considering that you want to put a single "rb_next()" call into its own
> helper for clarity below.

The goal is to make the macro itself easy to understand, even if the reader may
not understand the underlying details.  The bare rb_next() forces the reader to
pause to think about exactly what "node" is, and perhaps even dive into the code
for the other helpers.

With something like this, a reader that doesn't know the memslots details can
get a good idea of the basic gist of the macro without having to even know the
type of "node".  Obviously someone writing code will need to know the type, but
for readers bouncing around code it's a detail they don't need to know.

#define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
	for (node = kvm_get_first_node(slots, start);			\
	     kvm_is_valid_node(slots, node, end);			\
	     node = kvm_get_next_node(node))
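
For reference, a caller of that bare-node flavor ends up looking something
like this (just a sketch, the loop body and local names are assumed):

	struct kvm_memory_slot *memslot;
	struct rb_node *node;
	int idx = slots->node_idx;

	kvm_for_each_memslot_in_gfn_range(node, slots, start, end) {
		memslot = container_of(node, struct kvm_memory_slot,
				       gfn_node[idx]);
		/* ... operate on memslot ... */
	}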

Hmm, on that point, having the caller do

	memslot = container_of(node, struct kvm_memory_slot, gfn_node[idx]);

is more than a bit odd, and as is the case with the bare rb_next(), bleeds
implementation details into code that really doesn't care about implementation
details.  Eww, and looking closer, the caller also needs to grab slots->node_idx.

So while I would love to avoid an opaque iterator, adding one would be a net
positive in this case.  E.g.

/* Iterator used for walking memslots that overlap a gfn range. */
struct kvm_memslot_iter {
	struct rb_node *node;
	struct kvm_memory_slot *memslot;
	struct kvm_memslots *slots;
	gfn_t start;
	gfn_t end;
};

static inline void kvm_memslot_iter_start(struct kvm_memslot_iter *iter,
					  struct kvm_memslots *slots,
					  gfn_t start, gfn_t end)
{
	...
}

static inline bool kvm_memslot_iter_is_valid(struct kvm_memslot_iter *iter)
{
	/*
	 * If this slot starts beyond or at the end of the range, so does
	 * every next one.
	 */
	return iter->node && iter->memslot->base_gfn < iter->end;
}

static inline void kvm_memslot_iter_next(struct kvm_memslot_iter *iter)
{
	iter->node = rb_next(iter->node);

	if (!iter->node)
		return;

	iter->memslot = container_of(iter->node, struct kvm_memory_slot,
				     gfn_node[iter->slots->node_idx]);
}

/* Iterate over each memslot *possibly* intersecting [start, end) range */
#define kvm_for_each_memslot_in_gfn_range(iter, slots, start, end)	 \
	for (kvm_memslot_iter_start(iter, slots, start, end);		 \
	     kvm_memslot_iter_is_valid(iter);				 \
	     kvm_memslot_iter_next(iter))
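
A caller then shrinks to something like this (again just a sketch, and
kvm_zap_slot() is a made-up placeholder for the per-slot work):

	struct kvm_memslot_iter iter;

	kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) {
		/* No container_of() or slots->node_idx in sight. */
		kvm_zap_slot(kvm, iter.memslot, start, end);
	}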


Ugh, this got me looking at kvm_zap_gfn_range(), and that thing is a trainwreck.
There are three calls to kvm_flush_remote_tlbs_with_address(), two of which should
be unnecessary, but become necessary because the last one is broken.  *sigh*

That'd also be a good excuse to extract the rmap loop to a separate helper.  Then
you don't need to constantly juggle the 80 char limit and variable collisions
while you're modifying this mess.  I'll post the attached patches separately
since the first one (two?) should go into 5.15.  They're compile tested only
at this point, but hopefully I've had enough coffee and they're safe to base
this series on top of (note, they're based on kvm/queue, commit 73f122c4f06f ("KVM:
cleanup allocation of rmaps and page tracking data")).

> > The kvm_for_each_in_gfn prefix is _really_ confusing.  I get that these are all
> > helpers for "kvm_for_each_memslot...", but it's hard not to think these are all
> > iterators on their own.  I would gladly sacrifice namespacing for readability in
> > this case.
> 
> "kvm_for_each_memslot_in_gfn_range" was your proposed name here:
> https://lore.kernel.org/kvm/YK6GWUP107i5KAJo@google.com/
> 
> But no problem renaming it.

Oh, I was commenting on the inner helpers.  The macro name itself is great. ;-)

> > @@ -882,12 +875,16 @@ struct rb_node *kvm_for_each_in_gfn_first(struct kvm_memslots *slots, gfn_t star
> >   	return node;
> >   }
> > 
> > -static inline
> > -bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *node, gfn_t end)
> > +static inline bool kvm_is_last_node(struct kvm_memslots *slots,
> > +				    struct rb_node *node, gfn_t end)
> 
> kvm_is_last_node() is a bit misleading since this function is supposed
> to return false even on the last node, only returning true one node past
> the last one (or when the tree runs out of nodes).

Good point.  I didn't love the name when I suggested it, either.  What about
kvm_is_valid_node()?

> >   {
> >   	struct kvm_memory_slot *memslot;
> > 
> > -	memslot = container_of(node, struct kvm_memory_slot, gfn_node[slots->node_idx]);
> > +	if (!node)
> > +		return true;
> > +
> > +	memslot = container_of(node, struct kvm_memory_slot,
> > +			       gfn_node[slots->node_idx]);
> 
> You previously un-wrapped such lines, like for example in
> https://lore.kernel.org/kvm/YK2GjzkWvjBcCFxn@google.com/ :

Heh, yes, the balance between "too long" and "hard to read" is subjective.  The
ones I usually let run over are cases where it's a short word on the end, the
overrun is only a couple chars, the statement is the sole line of an if/else
statement, there's a null/validity check immediately following etc...

All that said, I don't have a strong opinion on this one, I'm a-ok if you want to
let it run over.

> > > +		slot = container_of(node, struct kvm_memory_slot,
> > > +				    gfn_node[idxactive]);
> > 
> > With 'idx', this can go on a single line.  It runs over by two chars, but the 80
> > char limit is a soft limit, and IMO avoiding line breaks for things like this
> > improves readability.
> 
> 
> > 
> >   	/*
> >   	 * If this slot starts beyond or at the end of the range so does
> > @@ -896,11 +893,16 @@ bool kvm_for_each_in_gfn_no_more(struct kvm_memslots *slots, struct rb_node *nod
> >   	return memslot->base_gfn >= end;
> >   }
> > 
> > +static inline struct rb_node *kvm_get_next_node(struct rb_node *node)
> > +{
> > +	return rb_next(node);
> > +}
> > +
> >   /* Iterate over each memslot *possibly* intersecting [start, end) range */
> >   #define kvm_for_each_memslot_in_gfn_range(node, slots, start, end)	\
> > -	for (node = kvm_for_each_in_gfn_first(slots, start);		\
> > -	     node && !kvm_for_each_in_gfn_no_more(slots, node, end);	\
> > -	     node = rb_next(node))					\
> > +	for (node = kvm_get_first_node(slots, start);			\
> > +	     !kvm_is_last_node(slots, node, end);			\
> > +	     node = kvm_get_next_node(node))				\
> > 
> >   /*
> >    * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations:
> > --
> > 
> 
> Thanks,
> Maciej

View attachment "0001-KVM-x86-mmu-Drop-a-redundant-broken-remote-TLB-flush.patch" of type "text/x-diff" (1553 bytes)

View attachment "0002-KVM-x86-mmu-Drop-a-redundant-remote-TLB-flush-in-kvm.patch" of type "text/x-diff" (1425 bytes)

View attachment "0003-KVM-x86-mmu-Extract-zapping-of-rmaps-for-gfn-range-t.patch" of type "text/x-diff" (2947 bytes)
