Message-ID: <73BC3891-34DC-4EB7-BD1C-5FD312A8F18A@nutanix.com>
Date: Fri, 13 May 2022 15:21:38 +0000
From: Jon Kohler <jon@...anix.com>
To: Jim Mattson <jmattson@...gle.com>
CC: Jon Kohler <jon@...anix.com>,
Sean Christopherson <seanjc@...gle.com>,
Jonathan Corbet <corbet@....net>,
Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
X86 ML <x86@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
Kees Cook <keescook@...omium.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Kim Phillips <kim.phillips@....com>,
Lukas Bulwahn <lukas.bulwahn@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Ashok Raj <ashok.raj@...el.com>,
KarimAllah Ahmed <karahmed@...zon.de>,
David Woodhouse <dwmw@...zon.co.uk>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
"kvm @ vger . kernel . org" <kvm@...r.kernel.org>,
Waiman Long <longman@...hat.com>
Subject: Re: [PATCH v4] x86/speculation, KVM: remove IBPB on vCPU load
> On May 12, 2022, at 11:50 PM, Jim Mattson <jmattson@...gle.com> wrote:
>
> On Thu, May 12, 2022 at 8:19 PM Jon Kohler <jon@...anix.com> wrote:
>>
>>
>>
>>> On May 12, 2022, at 11:06 PM, Jim Mattson <jmattson@...gle.com> wrote:
>>>
>>> On Thu, May 12, 2022 at 5:50 PM Jon Kohler <jon@...anix.com> wrote:
>>>
>>>> You mentioned someone being concerned about performance; are you
>>>> saying they care about performance so critically that they are
>>>> willing to *not* use IBPB at all, instead just using taskset, hoping
>>>> nothing else ever gets scheduled there, and hoping that the
>>>> hypervisor does the job for them?
>>>
>>> I am saying that IBPB is not the only viable mitigation for
>>> cross-process indirect branch steering. Proper scheduling can also
>>> solve the problem, without the overhead of IBPB. Say that you have two
>>> security domains: trusted and untrusted. If you have a two-socket
>>> system, and you always run trusted workloads on socket#0 and untrusted
>>> workloads on socket#1, IBPB is completely superfluous. However, if the
>>> hypervisor chooses to schedule a vCPU thread from virtual socket#0
>>> after a vCPU thread from virtual socket#1 on the same logical
>>> processor, then it *must* execute an IBPB between those two vCPU
>>> threads. Otherwise, it has introduced a non-architectural
>>> vulnerability that the guest can't possibly be aware of.
>>>
>>> If you can't trust your OS to schedule tasks where you tell it to
>>> schedule them, can you really trust it to provide you with any kind of
>>> inter-process security?
>>
>> Fair enough, so going forward:
>> Should this be mandatory in all cases? The way this whole effort came
>> about was that a user could configure their KVM host with conditional
>> IBPB, but this particular mitigation is now always on no matter what.
>>
>> In our previous patch review threads, Sean and I mostly settled on making
>> this particular avenue active only when a user configures always_ibpb, such
>> that cases like the one you describe (and others like it that come up in
>> the future) can be covered easily, while for cond_ibpb we can document
>> that it doesn't cover this case.
>>
>> Would that be acceptable here?
>
> That would make me unhappy. We use cond_ibpb, and I don't want to
> switch to always_ibpb, yet I do want this barrier.

Ok, gotcha. That's a good point for cloud providers, since the
workloads there are especially opaque.

How about this: I could work up a v5 patch where this is, at minimum, a
system-level knob (similar to the other mitigation knobs) and documented
in more detail, along the lines of the sketch below. That way folks who
want more control here have a basic way to get it without recompiling
the kernel. Such a knob would be on by default, so there is no
functional regression here.

Would that be ok with you as a middle ground?
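
To make that concrete, here is a rough, untested sketch of the shape I
have in mind. The knob name "vcpu_load_ibpb" is just a placeholder, and
the real patch might hook into the existing spectre_v2_user mitigation
plumbing instead of a bare module parameter:

	/* arch/x86/kvm/x86.c (sketch): default-on, opt-out knob */
	static bool __read_mostly vcpu_load_ibpb = true;
	module_param(vcpu_load_ibpb, bool, 0444);

	/* arch/x86/kvm/vmx/vmx.c, in vmx_vcpu_load_vmcs() (sketch) */
	if (prev != vmx->loaded_vmcs->vmcs) {
		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
		vmcs_load(vmx->loaded_vmcs->vmcs);

		/*
		 * Keep the barrier when switching to a different VMCS,
		 * as today, unless the admin explicitly opted out via
		 * the new knob.
		 */
		if (vcpu_load_ibpb &&
		    (!buddy || WARN_ON_ONCE(buddy->vmcs != prev)))
			indirect_branch_prediction_barrier();
	}

The SVM side (svm_vcpu_load()) would get the same guard around its
indirect_branch_prediction_barrier() call. Whether this ends up as a KVM
module parameter or as part of the spectre_v2_user= command line
handling is open for discussion.
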
Thanks again,
Jon
>
>>>
>>>> Would this be the expectation of just KVM? Or all hypervisors on the
>>>> market?
>>>
>>> Any hypervisor that doesn't do this is broken, but that won't keep it
>>> off the market. :-)
>>
>> Very true :)
>>