Message-ID: <20201102183359.GE21563@linux.intel.com>
Date: Mon, 2 Nov 2020 10:33:59 -0800
From: Sean Christopherson <sean.j.christopherson@...el.com>
To: Andy Lutomirski <luto@...nel.org>
Cc: Tao Xu <tao3.xu@...el.com>, Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, X86 ML <x86@...nel.org>,
kvm list <kvm@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Xiaoyao Li <xiaoyao.li@...el.com>
Subject: Re: [PATCH] KVM: VMX: Enable Notify VM exit
On Mon, Nov 02, 2020 at 10:01:16AM -0800, Andy Lutomirski wrote:
> On Mon, Nov 2, 2020 at 9:31 AM Sean Christopherson
> <sean.j.christopherson@...el.com> wrote:
> >
> > On Mon, Nov 02, 2020 at 08:43:30AM -0800, Andy Lutomirski wrote:
> > > On Sun, Nov 1, 2020 at 10:14 PM Tao Xu <tao3.xu@...el.com> wrote:
> > > > 2. Another patch to disable interception of #DB and #AC when notify
> > > > VM-Exiting is enabled.
> > >
> > > Whoa there.
> > >
> > > A VM control that says "hey, CPU, if you messed up and livelocked for
> > > a long time, please break out of the loop" is not a substitute for
> > > fixing the livelocks. So I don't think you get to disable
> > > interception of #DB and #AC.
> >
> > I think that can be incorporated into a module param, i.e. let the platform
> > owner decide which tool(s) they want to use to mitigate the legacy architecture
> > flaws.
>
> What's the point? Surely the kernel should reliably mitigate the
> flaw, and the kernel should decide how to do so.
IMO, setting a reasonably low threshold _is_ mitigating such flaws. E.g. it's
entirely possible, if not likely, that we can push the threshold below various
ENCLS instruction latencies. Now I'm curious as to how exactly the accounting
is done under the hood, e.g. I assume retiring uops of a massive instruction is
enough to reset the timer, but I haven't actually read the specs in detail.
If userspace is truly malicious, it can easily spawn new VMs/processes to carry
out its attack, e.g. exiting to userspace on these VM-Exits effectively
throttles userspace as much as straight killing the process.
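FWIW, the shape of what I have in mind for the module param is something like
the below (completely untested sketch; NOTIFY_WINDOW and
SECONDARY_EXEC_NOTIFY_VM_EXITING are placeholder names on my end, not
necessarily what the patch defines, and the window's units are TBD):

	/* -1 == off, >=0 == no-progress window in (TBD) units. */
	static int __read_mostly notify_window = -1;
	module_param(notify_window, int, 0444);

	static void vmx_enable_notify_vm_exit(struct vcpu_vmx *vmx)
	{
		if (notify_window < 0)
			return;

		secondary_exec_controls_setbit(vmx,
					       SECONDARY_EXEC_NOTIFY_VM_EXITING);
		vmcs_write32(NOTIFY_WINDOW, notify_window);
	}

That would let the platform owner opt out entirely, or crank the window up/down
to suit their tolerance for false positives vs. latency.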
> >
> > > I also think you should print a loud warning
> >
> > I'm not so sure on this one, e.g. userspace could just spin up a new instance
> > of its malicious guest and spam the kernel log.
>
> pr_warn_once()?
Or ratelimited. My point was that a straight WARN would be less than ideal.
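E.g. something like (the message itself is just an example, but
pr_warn_ratelimited() is a real helper):

	pr_warn_ratelimited("kvm: notify window VM-Exit on vcpu %d\n",
			    vcpu->vcpu_id);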
> If this triggers, it's a *bug*, right? Kernel or CPU.
Sort of? Many (all?) of the known scenarios that can trigger this exit
are unlikely to ever be fixed in silicon. I'm not saying they shouldn't be
fixed, just that practically speaking they are highly unlikely to be fixed
anytime soon. The infinite #DB/#AC recursion flaws are inarguably dumb CPU
behavior, but there are other scenarios that are less cut and dried, i.e. may
not be fixable without non-trivial tradeoffs.
> > > and have some intelligent handling when this new exit triggers.
> >
> > We discussed something similar in the context of the new bus lock VM-Exit. I
> > don't know that it makes sense to try and add intelligence into the kernel.
> > In many use cases, e.g. clouds, the userspace VMM is trusted (inasmuch as
> > userspace can be trusted), while the guest is completely untrusted. Reporting
> > the error to userspace and letting the userspace stack take action is likely
> > preferable to doing something fancy in the kernel.
> >
> >
> > Tao, this patch should probably be tagged RFC, at least until we can experiment
> > with the threshold on real silicon. KVM and kernel behavior may depend on the
> > accuracy of detecting actual attacks, e.g. if we can set a threshold that has
zero false negatives and near-zero false positives, then it probably makes sense
> > to be more assertive in how such VM-Exits are reported and logged.
>
> If you can actually find a threshold that reliably mitigates the bug
> and does not allow a guest to cause undesirably large latency in the
> host, then fine. 1/10 of a tick is way too long, I think.
Yes, this was my internal review feedback as well. Either that got lost along
the way or I wasn't clear enough in stating what should be used as a placeholder
until we have silicon in hand.
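Back of the envelope, assuming the window ends up being specified in TSC cycles
(an assumption on my part until the final spec is out):

	/*
	 * 1/10 of a tick at HZ=250 on a 2GHz TSC is
	 * 2,000,000,000 / 250 / 10 = 800,000 cycles, i.e. orders of
	 * magnitude above any sane instruction latency.
	 */
	u64 window = ((u64)tsc_khz * 1000 / HZ) / 10;

So yeah, even as a placeholder, 1/10 of a tick is far too generous.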