Message-ID: <CAHVum0dOfJ5HuscNq0tA6BnUJK34v4CPCTkD4piHc7FObZOsng@mail.gmail.com>
Date: Fri, 25 Mar 2022 17:31:19 -0700
From: Vipin Sharma <vipinsh@...gle.com>
To: David Matlack <dmatlack@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Sean Christopherson <seanjc@...gle.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] KVM: x86/mmu: Speed up slot_rmap_walk_next for sparsely
populated rmaps
On Fri, Mar 25, 2022 at 4:53 PM David Matlack <dmatlack@...gle.com> wrote:
>
> On Fri, Mar 25, 2022 at 4:31 PM Vipin Sharma <vipinsh@...gle.com> wrote:
> >
> > Avoid calling handlers on empty rmap entries and skip to the next
> > non-empty rmap entry.
> >
> > Empty rmap entries are a no-op in handlers.
> >
> > Signed-off-by: Vipin Sharma <vipinsh@...gle.com>
> > Suggested-by: Sean Christopherson <seanjc@...gle.com>
> > Change-Id: I8abf0f4d82a2aae4c5d58b80bcc17ffc30785ffc
>
> nit: Omit Change-Id tags from upstream commits.
Thanks for catching it.
>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 9 ++++++---
> > 1 file changed, 6 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 51671cb34fb6..f296340803ba 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -1499,11 +1499,14 @@ static bool slot_rmap_walk_okay(struct slot_rmap_walk_iterator *iterator)
> > return !!iterator->rmap;
> > }
> >
> > -static void slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
> > +static noinline void
>
> What is the reason to add noinline?
My understanding is that since this function is called from
__always_inline functions, noinline keeps gcc from inlining
slot_rmap_walk_next into them, which generates smaller code.
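To illustrate what I mean, here is a reduced, standalone example with
made-up names (not code from mmu.c): the noinline callee keeps a single
out-of-line copy, so every force-inlined caller only emits a call to it
instead of duplicating its body.

/*
 * Reduced, hypothetical example (names invented, not from mmu.c).
 * walk_next() stays out of line because of noinline; handle_range()
 * is force-inlined into each caller, but every inlined copy contains
 * only a call to walk_next() rather than its full body.
 */
#define __always_inline inline __attribute__((always_inline)) /* stand-in for <linux/compiler.h> */
#define noinline        __attribute__((noinline))

static noinline void walk_next(int *pos)
{
	/* Non-trivial body we do not want duplicated at every call site. */
	do {
		(*pos)++;
	} while (*pos % 4);	/* e.g. skip "empty" slots */
}

static __always_inline void handle_range(int start, int end)
{
	int pos = start;

	while (pos < end)
		walk_next(&pos);
}

int main(void)
{
	handle_range(0, 16);
	return 0;
}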
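And for completeness, since the rest of the hunk is trimmed in the quote
above, the shape of the change is roughly the sketch below. Field and
helper names follow mmu.c, but please treat this as an illustration of
the idea, not the exact patch.

/*
 * Rough sketch (not the exact hunk): instead of returning after every
 * ++iterator->rmap, keep advancing until a non-empty rmap head is found,
 * so handlers are never invoked on entries with val == 0.
 */
static noinline void
slot_rmap_walk_next(struct slot_rmap_walk_iterator *iterator)
{
	while (++iterator->rmap <= iterator->end_rmap) {
		iterator->gfn += (1UL << KVM_HPAGE_GFN_SHIFT(iterator->level));

		/* Stop on the first rmap entry that actually has mappings. */
		if (iterator->rmap->val)
			return;
	}

	if (++iterator->level > iterator->end_level) {
		iterator->rmap = NULL;
		return;
	}

	/*
	 * Note: a complete version also has to skip empty entries at the
	 * start of the next level; omitted here for brevity.
	 */
	rmap_walk_init_level(iterator, iterator->level);
}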