Message-ID: <17d26bd6-44c5-7972-fe95-544f061feb5f@redhat.com>
Date: Thu, 8 Jul 2021 19:24:07 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org
Cc: Joerg Roedel <joro@...tes.org>, Borislav Petkov <bp@...en8.de>,
Sean Christopherson <seanjc@...gle.com>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Jim Mattson <jmattson@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
Ingo Molnar <mingo@...hat.com>
Subject: Re: [PATCH 0/3] KVM: SMM fixes
On 07/07/21 14:50, Maxim Levitsky wrote:
> Hi!
>
> I did a first round of SMM testing by flooding the guest with SMIs
> while running nested guests in it, and I found that SMM breaks
> nested KVM due to a refactoring change that went into the 5.12
> kernel. The fix for this is in patch 1.
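>
> For illustration, the shape of the fix is roughly the following
> (a simplified sketch, not the literal diff; see patch 1 for the
> real change):
>
>     /* arch/x86/kvm/svm/svm.c (sketch) */
>     static int smi_interception(struct kvm_vcpu *vcpu)
>     {
>             /*
>              * #SMI is not tied to an instruction, so the handler
>              * must not advance RIP the way a "nop" handler would.
>              */
>             return 1;
>     }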
>
> I also fixed another issue I noticed while working on this; it is
> purely theoretical but should nevertheless be fixed. This is patch 2.
>
> I also propose adding (mostly for debugging, for now) a module
> parameter that makes KVM avoid intercepting #SMI on SVM.
> (Intel doesn't have such an intercept, I think.)
> The default is still to intercept #SMI, so nothing changes by
> default.
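>
> A minimal sketch of how this could be wired up (the parameter name
> matches what I use below; the surrounding VMCB setup is simplified):
>
>     static bool intercept_smi = true;
>     module_param(intercept_smi, bool, 0444);
>
>     /* during VMCB initialization */
>     if (intercept_smi)
>             svm_set_intercept(svm, INTERCEPT_SMI);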
>
> This allows testing the case in which SMIs are not intercepted by
> L1, without having to run Windows (which doesn't intercept #SMI).
>
> In addition, I found that on bare metal, at least on the two Zen2
> machines I have, the CPU ignores the SMI intercept and never
> VM-exits when an SMI is received. As I guessed earlier, this must
> have been done for security reasons.
>
> Note that the bug I fixed in patch 1 would crash VMs very quickly
> on bare metal as well, if the CPU honoured the SMI intercept, as
> long as some SMIs are generated while the system is running.
>
> I tested this on bare metal by using the local APIC to send SMIs
> to all physical CPUs, and also by using I/O port 0xB2 to send SMIs.
> In both cases my system slowed to a crawl but showed no SMI
> vmexits (even though the SMI intercept was enabled).
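>
> (For reference, the port 0xB2 method boils down to a userspace
> snippet like the one below; it assumes the firmware routes APM
> control port writes to SMI, and it needs root for ioperm():)
>
>     #include <stdio.h>
>     #include <sys/io.h>
>
>     int main(void)
>     {
>             /* Grant access to port 0xb2; needs CAP_SYS_RAWIO. */
>             if (ioperm(0xb2, 1, 1)) {
>                     perror("ioperm");
>                     return 1;
>             }
>             /* A write to the APM control port triggers an SMI. */
>             outb(0x00, 0xb2);
>             return 0;
>     }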
>
> In a VM I also used I/O port 0xB2 to generate a flood of SMIs,
> which allowed me to reproduce this bug (and with the intercept_smi=0
> module parameter I can also reproduce the bug that Vitaly fixed in
> his series, just by running nested KVM).
>
> Note that while doing nested migration I am still able to cause
> severe hangs of L1 when I run the SMI stress test in L1 together
> with a nested VM. The VM isn't fully hung, but its GUI stops
> responding, and I see lots of CPU lockup errors in dmesg.
> This seems to happen regardless of #SMI interception in L1
> (with Vitaly's patches applied, of course).
>
> Best regards,
> Maxim Levitsky
>
> Maxim Levitsky (3):
> KVM: SVM: #SMI interception must not skip the instruction
> KVM: SVM: remove INIT intercept handler
> KVM: SVM: add module param to control the #SMI interception
>
> arch/x86/kvm/svm/nested.c | 4 ++++
> arch/x86/kvm/svm/svm.c | 18 +++++++++++++++---
> arch/x86/kvm/svm/svm.h | 1 +
> 3 files changed, 20 insertions(+), 3 deletions(-)
>
Queued, thanks.
Paolo