Message-ID: <20200417191159.GA14609@linux.intel.com>
Date: Fri, 17 Apr 2020 12:11:59 -0700
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Peter Xu <peterx@...hat.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, Vitaly Kuznetsov <vkuznets@...hat.com>
Subject: Re: [PATCH 0/3] KVM: x86: move nested-related kvm_x86_ops to a
separate struct
On Fri, Apr 17, 2020 at 03:05:53PM -0400, Peter Xu wrote:
> On Fri, Apr 17, 2020 at 12:44:10PM -0400, Paolo Bonzini wrote:
> > While this reintroduces some pointer chasing that was removed in
> > afaf0b2f9b80 ("KVM: x86: Copy kvm_x86_ops by value to eliminate layer
> > of indirection", 2020-03-31), the cost is small compared to retpolines
> > and anyway most of the callbacks are not even remotely on a fastpath.
> > In fact, only check_nested_events should be called during normal VM
> > runtime. When static calls are merged into Linux my plan is to use them
> > instead of callbacks, and that will finally make things fast again by
> > removing the retpolines.
>
> Paolo,
>
> Just out of curiosity: is there an explicit reason not to copy the
> whole kvm_x86_nested_ops by value rather than use pointers (since
> after all we just reworked kvm_x86_ops)?

Ya, my vote would be to copy by value as well. I'd also be in favor of
dropping the _ops part, e.g.

  struct kvm_x86_ops {
	struct kvm_x86_nested_ops nested;

	...
  };

and drop the "nested" part from the individual hook names, e.g.

  check_nested_events() -> check_events()

which yields:

	r = kvm_x86_ops.nested.check_events(vcpu);
	if (r != 0)
		return r;

I had this coded up but shelved it when svm.c got fractured :-).
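
To make the copy-by-value idea concrete, something along these lines would
do the trick (rough sketch only; the hook list and vendor helper names below
are illustrative, not the shelved patch):

  struct kvm_x86_nested_ops {
	int (*check_events)(struct kvm_vcpu *vcpu);
	int (*get_state)(struct kvm_vcpu *vcpu,
			 struct kvm_nested_state __user *user_kvm_nested_state,
			 unsigned int size);
	int (*set_state)(struct kvm_vcpu *vcpu,
			 struct kvm_nested_state __user *user_kvm_nested_state,
			 struct kvm_nested_state *kvm_state);
  };

  /* VMX/SVM each provide their instance (helper names illustrative)... */
  static struct kvm_x86_nested_ops vmx_nested_ops = {
	.check_events = vmx_check_nested_events,
	.get_state    = vmx_get_nested_state,
	.set_state    = vmx_set_nested_state,
  };

...and since the struct is embedded, it gets copied along with the rest of
kvm_x86_ops at setup time. The call site stays a single load plus the
(retpolined) indirect call instead of re-adding a pointer chase, and it
doesn't get in the way of Paolo's plan to switch the callbacks to static
calls later on.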