Message-ID: <20190116163521.GA32566@linux.intel.com>
Date: Wed, 16 Jan 2019 08:35:21 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Josh Poimboeuf <jpoimboe@...hat.com>
Cc: Qian Cai <cai@....pw>, Paolo Bonzini <pbonzini@...hat.com>,
rkrcmar@...hat.com, tglx@...utronix.de, mingo@...hat.com,
bp@...en8.de, hpa@...or.com, x86@...nel.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] kvm: add proper frame pointer logic for vmx
On Wed, Jan 16, 2019 at 08:56:59AM -0600, Josh Poimboeuf wrote:
> On Tue, Jan 15, 2019 at 04:54:38PM -0800, Sean Christopherson wrote:
> > On Tue, Jan 15, 2019 at 04:38:49PM -0600, Josh Poimboeuf wrote:
> > > On Tue, Jan 15, 2019 at 11:06:17AM -0800, Sean Christopherson wrote:
> > > > > I can see there are five options to solve it.
> > > > >
> > > > > 1) always inline vmx_vcpu_run()
> > > > > 2) always noinline vmx_vcpu_run()
> > > > > 3) add -fdisable-ipa-fnsplit option to Makefile for vmx.o
> > > > > 4) let STACK_FRAME_NON_STANDARD support part.* syntax.
> > > > > 5) trim down vmx_vcpu_run() even more so it does not trigger splitting by GCC.
> > > > >
> > > > > Options 1) and 2) seem to give away the decision made by the user with
> > > > > CONFIG_CC_OPTIMIZE_FOR_(PERFORMANCE/SIZE).
> > > > >
> > > > > Option 3) prevents other functions there from being split for optimization.
> > > > >
> > > > > Options 4) and 5) seem tricky to implement.
> > > > >
> > > > > I am now leaning more towards 3), as the only other function that will
> > > > > miss splitting is vmx_segment_access_rights().
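
FWIW, option 3) would boil down to a one-liner in the KVM Makefile,
something like (sketch only, untested; the cc-option guard is there
because the -fdisable-* flags are GCC-internal):

        CFLAGS_vmx.o += $(call cc-option,-fdisable-ipa-fnsplit)
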
> > > >
> > > > Option 4) is the most correct, but "tricky" is an understatement. Unless
> > > > Josh is willing to pick up the task, it'll likely have to wait.
> > > >
> > > > There are actually a few more options:
> > > >
> > > > 6) Replace "pop %rbp" in the vmx_vmenter() asm blob with an open-coded
> > > > equivalent, e.g. "mov [%rsp], %rbp; add $8, %rsp". This runs an end-
> > > > around on objtool since objtool explicitly keys off "pop %rbp" and NOT
> > > > "mov ..., %rbp" (which is probably an objtool checking flaw?").
> > > >
> > > > 7) Move the vmx_vmenter() asm blob and a few other lines of code into a
> > > > separate helper, e.g. __vmx_vcpu_run(), and mark that as having a
> > > > non-standard stack frame.
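
For 6), the open-coded equivalent is just the two-instruction expansion
of the POP, in AT&T syntax (illustrative only, matching the snippet
quoted above):

        # equivalent of "pop %rbp", without the POP:
        mov     (%rsp), %rbp
        add     $8, %rsp
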
> > >
> > > Do you mean moving the asm blob to a .S file instead of inline asm? If
> > > so, I think that's definitely a good idea. It would be a nice cleanup,
> > > regardless of the objtool false positive.
> >
> > No, just moving the inline asm to a separate function. Moving the entire
> > inline asm blob is annoying because it references a large number of struct
> > offsets and doesn't solve the fundamental problem (more on this later).
> >
> > The VMLAUNCH and VMRESUME invocations themselves, i.e. the really nasty
> > bits, have already been moved to a .S file (by the commit that exposed
> > this warning). That approach eliminates the worst of the conflicts with
> > compiler optimizations without having to deal with exposing the struct
> > offsets to asm.
>
> Exposing all the struct offsets isn't a big deal -- that's what
> asm-offsets.c is for.
The struct in question, vcpu_vmx, is "private" to the VMX code and KVM
can technically be compiled as an external module, i.e. the struct layout
could change without asm-offsets.c being recompiled.
I have an idea to avoid this hiccup, but even if it works the end result
may still be fairly ugly. Regardless, moving the code to proper asm is
far too big of a change for v5.0.
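
For reference, making asm-offsets work here would mean a KVM-private
generator along these lines (a sketch; the file name and the choice of
fields are hypothetical):

        /* arch/x86/kvm/vmx/vmx-offsets.c (hypothetical) */
        #include <linux/kbuild.h>
        #include "vmx.h"        /* private definition of struct vcpu_vmx */

        static void __used common(void)
        {
                OFFSET(VCPU_VMX_FAIL, vcpu_vmx, fail);
                OFFSET(VCPU_VMX_HOST_RSP, vcpu_vmx, host_rsp);
        }

The external-module wrinkle is that those constants get baked in at
kernel build time, i.e. they can go stale if vcpu_vmx changes.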
> On the other hand, having such a large inline asm block is fragile and
> unholy IMO. I wouldn't be surprised if there are more GCC problems
> lurking.
Unholy is a perfect description.
> > > That would allow vmx_vcpu_run() to be a "normal" C function which
> > > objtool can validate (and also create ORC data for). It would also
> > > prevent future nasty GCC optimizations (which was why the __noclone was
> > > needed in the first place).
> >
> > Moving the inline asm to a separate function (on top of having a separate
> > .S file for VMLAUNCH/VMRESUME) accomplishes sort of the same thing, i.e.
> > vmx_vcpu_run() gets to be a normal function. It's not as bulletproof
> > from a tooling perspective, but odds are pretty good that all usage of
> > STACK_FRAME_NON_STANDARD will break if the compiler manages to muck up
> > the inline asm wrapper function.
>
> I wouldn't bet on that. Many/most optimizations don't change the symbol
> name, in which case STACK_FRAME_NON_STANDARD would work just fine.
By "muck up" I meant break STACK_FRAME_NON_STANDARD, i.e. if gcc breaks
objtool for __vmx_vcpu_run() then it'll likely break objtool across the
board.
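To be concrete, the 7) shape I have in mind is roughly this (a sketch:
the asm body is elided down to the frame-relevant instructions, and the
noinline is my assumption, to keep the annotation attached to the
symbol):

        #include <linux/frame.h>

        static noinline void __vmx_vcpu_run(struct vcpu_vmx *vmx)
        {
                asm volatile(
                        "push %%rbp\n\t"  /* save host RBP */
                        /* ... load guest state, call vmx_vmenter() ... */
                        "pop %%rbp\n\t"   /* restore host RBP */
                        : : "c" (vmx) : "cc", "memory"
                );
        }
        STACK_FRAME_NON_STANDARD(__vmx_vcpu_run);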
> > And having a dedicated function for VMLAUNCH/VMRESUME is actually nice
> > from a stack trace perspective. The transition to/from the guest is by
> > far the most likely source of faults, i.e. a stack trace that originates
> > in vmx_vmenter() all but guarantees that a fault occurred on VM-Enter or
> > immediately after VM-Exit.
>
> But moving __vmx_vcpu_run() to a proper asm function doesn't prevent
> that.
Yeah, I was trying to say that the "improved debuggability" motivation for
eliminating the inline asm is basically nullified by vmx_vmenter(), but
it came out a bit backwards.
> BTW, do the stack traces from this path even work with the ORC unwinder?
> Since objtool doesn't annotate vmx_vcpu_run() (or now __vmx_vcpu_run),
> that should break stack tracing and instead produce a "guess" stack
> trace (with the question marks), where it prints all text addresses it
> finds on the stack, instead of doing a proper stack trace.
It produces guess traces.
> Which would be another reason to move this code to proper asm.
Eh, not really. In practice it doesn't matter because there is literally
a single path for reaching __vmx_vcpu_run(). And if this changes we'll
need to revisit the VM-Enter code because our VMCS.HOST_RSP optimizations
depend on the host's %rsp being identical for every call (for a VM/process).
There is a second path for vmx_vmenter(), nested_vmx_check_vmentry_hw(),
but it also has a single invocation path and the unwinder gets us to the
caller of vmx_vmenter(), so again in practice not having a full stack
trace doesn't affect debugging.
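
For context, the HOST_RSP optimization amounts to this in C terms (the
real code does the compare inside the asm blob with %rsp directly; this
rendering assumes the existing vmx->host_rsp field, vmcs_writel() and
the current_stack_pointer helper):

        unsigned long rsp = current_stack_pointer;

        /* VMWRITE is expensive; skip it if %rsp is unchanged since the
         * last VM-Enter for this vCPU. */
        if (unlikely(rsp != vmx->host_rsp)) {
                vmx->host_rsp = rsp;
                vmcs_writel(HOST_RSP, rsp);
        }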
> > > And also, I *think* objtool would no longer warn in that case, because
> > > there would no longer be any calls in the function after popping %rbp.
> > > Though if I'm wrong about that, I'd be glad to help fix the warning one
> > > way or another.
> >
> > In the vmx_vcpu_run() case, warning on calls after "pop %rbp" is actually
> > a false positive. The POP restores the host's RBP, i.e. the stack frame,
> > meaning all calls after the POP are ok. The window where stack traces
> > will go awry is between loading RBP with the guest's value and the POP to
> > restore the host's stack frame, i.e. in this case "mov <guest_rbp>, %rbp"
> > should trigger a warning irrespective of any calls.
> >
> > I'm not saying it's actually worth updating objtool, rather that "fixing"
> > the KVM issue by moving the inline asm to a dedicated .S file doesn't
> > solve the fundamental problem that VM-Enter/VM-Exit needs to temporarily
> > corrupt RBP.
>
> I agree the objtool warning was a false positive, but in many cases
> these false positives end up pointing out some convoluted code which
> really should be cleaned up anyway. That's what I'm proposing here.
> Moving the function to proper asm would be so much cleaner.
It'd definitely be prettier, but I think the low level transition code
will always be convoluted :)
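
To illustrate, the window I described above looks something like this in
instruction terms (VCPU_RBP is a hypothetical asm-offsets style macro):

        push    %rbp                    # save host RBP, frame still valid
        # ... load other guest GPRs ...
        mov     VCPU_RBP(%rcx), %rbp    # RBP now holds the guest's value,
                                        # unwinding is broken from here
        call    vmx_vmenter             # VMLAUNCH/VMRESUME
        mov     %rbp, VCPU_RBP(%rcx)    # stash guest RBP
        pop     %rbp                    # host RBP restored, stack traces
                                        # (and thus calls) are safe again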
All that being said, I do agree that eliminating the inline asm would be
a welcome change, just not for v5.0. I'll play around with the code and
see what I can come up with.