Message-ID: <ZR2Y34hFpLmCYsUr@google.com>
Date: Wed, 4 Oct 2023 09:54:55 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Tyler Stachecki <stachecki.tyler@...il.com>
Cc: Leonardo Bras <leobras@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
Paolo Bonzini <pbonzini@...hat.com>,
Shuah Khan <shuah@...nel.org>,
Nathan Chancellor <nathan@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
linux-kselftest@...r.kernel.org, llvm@...ts.linux.dev
Subject: Re: [PATCH 0/5] KVM: x86: Fix breakage in KVM_SET_XSAVE's ABI
On Wed, Oct 04, 2023, Tyler Stachecki wrote:
> On Wed, Oct 04, 2023 at 07:51:17AM -0700, Sean Christopherson wrote:
> > It's not about removing features. The change you're asking for is to have KVM
> > *silently* drop data. Aside from the fact that such a change would break KVM's
> > ABI, silently ignoring data that userspace has explicitly requested be loaded for
> > a vCPU is incredibly dangerous.
>
> Sorry if it came off that way
No need to apologise; you got bit by a nasty kernel bug and are trying to find a
solution. There's nothing wrong with that.
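To make that point concrete, here's a minimal userspace sketch (not from this
thread; restore_xsave() and vcpu_fd are hypothetical names) of the KVM_SET_XSAVE
path, where an explicit error return lets the VMM notice that state couldn't be
loaded instead of the guest silently losing it:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Restore a previously captured XSAVE area (e.g. from KVM_GET_XSAVE) into a
 * vCPU.  If some component in the buffer can't be loaded, KVM fails the ioctl
 * outright rather than quietly dropping that state, so userspace gets a chance
 * to abort the migration/restore instead of running a corrupted guest.
 */
static int restore_xsave(int vcpu_fd, const struct kvm_xsave *saved)
{
	if (ioctl(vcpu_fd, KVM_SET_XSAVE, saved) < 0) {
		int err = errno;

		fprintf(stderr, "KVM_SET_XSAVE: %s\n", strerror(err));
		return -err;
	}
	return 0;
}

The point being: a hard failure is ugly, but it's visible and actionable;
silently ignoring part of the buffer is neither.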
> I fully understand and am resigned to the "you
> break it, you keep both halves" nature of what I had initially proposed and
> that it is not a generally tractable solution.
Yeah, the crux of the matter is that we have no control over, or even knowledge of,
who is using KVM, with what userspace VMM, on what hardware, etc. E.g. if this
bug were affecting our fleet and for some reason we couldn't address the problem
in userspace, carrying a hack in KVM in our internal kernel would probably be a
viable option, because we could do a proper risk assessment: we know and control
exactly what userspace we're running, the underlying hardware in the affected pools,
what features are exposed to the guest, etc. And we could revert the hack once
all affected VMs had been sanitized.