Message-ID: <20200123230125.GA24211@linux.intel.com>
Date: Thu, 23 Jan 2020 15:01:25 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Jim Mattson <jmattson@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
linmiaohe <linmiaohe@...wei.com>, kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
the arch/x86 maintainers <x86@...nel.org>,
Radim Krčmář <rkrcmar@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H . Peter Anvin" <hpa@...or.com>
Subject: Re: [PATCH] KVM: nVMX: set rflags to specify success in
handle_invvpid() default case
On Thu, Jan 23, 2020 at 10:22:24AM -0800, Jim Mattson wrote:
> On Thu, Jan 23, 2020 at 1:54 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
> >
> > On 23/01/20 10:45, Vitaly Kuznetsov wrote:
> > >>> The SDM says "If an
> > >>> unsupported INVVPID type is specified, the instruction fails.", which is
> > >>> similar to INVEPT, so I decided to check what handle_invept()
> > >>> does. Well, it does BUG_ON().
> > >>>
> > >>> Are we doing the right thing in any of these cases?
> > >>
> > >> Yes, both INVEPT and INVVPID catch this earlier.
> > >>
> > >> So I'm leaning towards not applying Miaohe's patch.
> > >
> > > Well, we may at least want to converge on BUG_ON() for both
> > > handle_invvpid()/handle_invept(); there's no need for them to differ.
> >
> > WARN_ON_ONCE + nested_vmx_failValid would probably be better, if we
> > really want to change this.
> >
> > Paolo
>
> In both cases, something is seriously wrong. The only plausible
> explanations are compiler error or hardware failure. It would be nice
> to handle *all* such failures with a KVM_INTERNAL_ERROR exit to
> userspace. (I'm also thinking of situations like getting a VM-exit for
> INIT.)

Ya. Vitaly and I had a similar discussion[*]. The idea we tossed around
was to also mark the VM as having encountered a KVM/hardware bug so that
the VM is effectively dead. That would also allow gracefully handling bugs
that are detected deep in the call stack, i.e. in places where the code
can't simply return 0 to get out to userspace.
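
As a very rough sketch of that idea (the kvm_vm_bugged() helper and the
vm_bugged field below are hypothetical names for illustration, not existing
KVM code, and the suberror choice is arbitrary):

/* Hypothetical helper: flag the VM as hosed and punt to userspace. */
static int kvm_vm_bugged(struct kvm_vcpu *vcpu)
{
	/* Hypothetical flag; KVM_RUN and friends would check it and bail. */
	vcpu->kvm->vm_bugged = true;

	/* Tell userspace this was an internal KVM/hardware error. */
	vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
	vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_UNEXPECTED_EXIT_REASON;
	vcpu->run->internal.ndata = 0;

	/* Returning 0 from a VM-Exit handler exits to userspace. */
	return 0;
}

The flag is what makes the deep-in-the-stack case work: even if the
immediate return value gets eaten somewhere on the way out, KVM_RUN would
see kvm->vm_bugged and refuse to enter the guest again.
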
[*] https://lkml.kernel.org/r/20190930153358.GD14693@linux.intel.com
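
For the immediate handle_invvpid()/handle_invept() question, the
WARN_ON_ONCE + nested_vmx_failValid combo would presumably boil down to
something like this in the default case (again just a sketch, reusing the
VM-instruction error code the other failure paths in those handlers
already use):

	default:
		/* Unreachable unless the earlier type checks are broken. */
		WARN_ON_ONCE(1);
		return nested_vmx_failValid(vcpu,
			VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);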