Message-ID: <YmuqifsJltdh7rpv@localhost.localdomain>
Date: Fri, 29 Apr 2022 17:06:17 +0800
From: Tao Liu <ltao@...hat.com>
To: Joerg Roedel <joro@...tes.org>
Cc: x86@...nel.org, kvm@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
virtualization@...ts.linux-foundation.org,
Arvind Sankar <nivedita@...m.mit.edu>, hpa@...or.com,
Jiri Slaby <jslaby@...e.cz>,
David Rientjes <rientjes@...gle.com>,
Masami Hiramatsu <mhiramat@...nel.org>,
Martin Radev <martin.b.radev@...il.com>,
Tom Lendacky <thomas.lendacky@....com>,
Joerg Roedel <jroedel@...e.de>,
Kees Cook <keescook@...omium.org>,
Cfir Cohen <cfir@...gle.com>, linux-coco@...ts.linux.dev,
Andy Lutomirski <luto@...nel.org>,
Dan Williams <dan.j.williams@...el.com>,
Juergen Gross <jgross@...e.com>,
Mike Stunes <mstunes@...are.com>,
Sean Christopherson <seanjc@...gle.com>,
kexec@...ts.infradead.org, linux-kernel@...r.kernel.org,
Eric Biederman <ebiederm@...ssion.com>,
Erdem Aktas <erdemaktas@...gle.com>
Subject: Re: [PATCH v3 00/10] x86/sev: KEXEC/KDUMP support for SEV-ES guests
On Thu, Jan 27, 2022 at 11:10:34AM +0100, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@...e.de>
>
> Hi,
>
> here are changes to enable kexec/kdump in SEV-ES guests. The biggest
> problem for supporting kexec/kdump under SEV-ES is to find a way to
> hand over the non-boot CPUs (APs) from one kernel to another.
>
> Without SEV-ES the first kernel parks the CPUs in a HLT loop until
> they get reset by the kexec'ed kernel via an INIT-SIPI-SIPI sequence.
> For virtual machines the CPU reset is emulated by the hypervisor,
> which sets the vCPU registers back to reset state.
>
> This does not work under SEV-ES, because the hypervisor has no access
> to the vCPU registers and can't make modifications to them. So an
> SEV-ES guest needs to reset the vCPU itself and park it using the
> AP-reset-hold protocol. Upon wakeup the guest needs to jump to
> real-mode and to the reset-vector configured in the AP-Jump-Table.
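>
> As a minimal sketch (assuming kernel context with <asm/msr.h>; the
> MSR constants follow the GHCB specification, the error handling is
> illustrative only), the MSR-based AP-reset-hold request looks like
> this:
>
>     #define MSR_AMD64_SEV_ES_GHCB        0xc0010130
>     #define GHCB_MSR_AP_RESET_HOLD_REQ   0x006ULL
>     #define GHCB_MSR_AP_RESET_HOLD_RESP  0x007ULL
>
>     static void ap_reset_hold(void)
>     {
>             u64 val;
>
>             /* Request the hold, then exit to the hypervisor. */
>             wrmsrl(MSR_AMD64_SEV_ES_GHCB, GHCB_MSR_AP_RESET_HOLD_REQ);
>             asm volatile("rep; vmmcall" ::: "memory"); /* VMGEXIT */
>
>             /* The low 12 bits of the GHCB MSR carry the response code. */
>             rdmsrl(MSR_AMD64_SEV_ES_GHCB, val);
>             if ((val & 0xfff) != GHCB_MSR_AP_RESET_HOLD_RESP)
>                     ; /* hold not granted - treat as an error */
>     }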
>
> The code to do this is the main part of this patch-set. It works by
> placing code on the AP Jump-Table page itself to park the vCPU and for
> jumping to the reset vector upon wakeup. The code on the AP Jump Table
> runs in 16-bit protected mode with segment base set to the beginning
> of the page. The AP Jump-Table is usually not within the first 1MB of
> memory, so the code can't run in real-mode.
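>
> For reference, a real-mode physical address is computed as
>
>     physical = segment * 16 + offset
>
> so the highest address reachable in real-mode is
> 0xffff * 16 + 0xffff = 0x10ffef, just above 1MB. A 16-bit
> protected-mode segment, in contrast, can place its base anywhere in
> the 4GB address space.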
>
> The AP Jump-Table is the best place to put the parking code, because
> the memory is owned by the firmware, but read-only to it and
> writeable by the OS. Only the first 4 bytes are used for the
> reset-vector, leaving
> the rest of the page for code/data/stack to park a vCPU. The code
> can't be in kernel memory because by the time the vCPU wakes up the
> memory will be owned by the new kernel, which might have overwritten it
> already.
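>
> Schematically, the jump table page is laid out like this (the
> struct is illustrative, not an actual kernel definition):
>
>     struct sev_ap_jump_table_page {
>             u32 reset_vector;      /* first 4 bytes, read by firmware */
>             u8  park[4096 - 4];    /* parking code, data and stack    */
>     };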
>
> The other patches add initial GHCB Version 2 protocol support, because
> kexec/kdump need the MSR-based (without a GHCB) AP-reset-hold VMGEXIT,
> which is a GHCB protocol version 2 feature.
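>
> The version negotiation uses the MSR protocol's SEV information
> request; a sketch (the macros follow the GHCB specification's bit
> layout, the helper name is illustrative):
>
>     #define GHCB_MSR_SEV_INFO_RESP  0x001ULL
>     #define GHCB_MSR_SEV_INFO_REQ   0x002ULL
>     #define GHCB_MSR_INFO(v)        ((v) & 0xfffULL)
>     #define GHCB_MSR_PROTO_MIN(v)   (((v) >> 32) & 0xffffULL)
>     #define GHCB_MSR_PROTO_MAX(v)   (((v) >> 48) & 0xffffULL)
>
>     static bool ghcb_proto_v2_supported(void)
>     {
>             u64 val;
>
>             wrmsrl(MSR_AMD64_SEV_ES_GHCB, GHCB_MSR_SEV_INFO_REQ);
>             asm volatile("rep; vmmcall" ::: "memory"); /* VMGEXIT */
>             rdmsrl(MSR_AMD64_SEV_ES_GHCB, val);
>
>             if (GHCB_MSR_INFO(val) != GHCB_MSR_SEV_INFO_RESP)
>                     return false;
>
>             /* The MSR-based AP-reset-hold needs protocol version 2. */
>             return GHCB_MSR_PROTO_MIN(val) <= 2 &&
>                    GHCB_MSR_PROTO_MAX(val) >= 2;
>     }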
>
> The kexec'ed kernel is also entered via the decompressor and needs
> MMIO support there, so this patch-set also adds MMIO #VC support to
> the decompressor and support for handling CLFLUSH instructions.
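>
> CLFLUSH needs no emulation because the hypervisor manages the cache
> for emulated MMIO; the handler only has to step over the
> instruction. A sketch of that branch (illustrative; the
> instruction-pointer advance happens in common #VC code after the
> handler returns ES_OK):
>
>     static enum es_result vc_handle_clflush(void)
>     {
>             /* Nothing to emulate - let common code skip the insn. */
>             return ES_OK;
>     }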
>
> Finally, there is also code to disable kexec/kdump support at
> runtime when the environment does not support it (e.g. no GHCB
> protocol version 2 support or an AP jump table above 4GB).
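>
> A sketch of that runtime check (the helper and variable names are
> hypothetical):
>
>     static bool sev_es_kexec_supported(void)
>     {
>             /* Parking APs requires the MSR-based AP-reset-hold. */
>             if (ghcb_protocol_max < 2)
>                     return false;
>
>             /*
>              * The parking code runs in 16-bit protected mode with a
>              * 32-bit segment base, so the AP jump table must sit
>              * below 4GB.
>              */
>             if (sev_es_jump_table_pa >= SZ_4G)
>                     return false;
>
>             return true;
>     }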
>
> The diffstat looks big, but most of it is moving code for MMIO #VC
> support around to make it available to the decompressor.
>
> The previous version of this patch-set can be found here:
>
> https://lore.kernel.org/lkml/20210913155603.28383-1-joro@8bytes.org/
>
> Please review.
>
> Thanks,
>
> Joerg
>
> Changes v2->v3:
>
> - Rebased to v5.17-rc1
> - Applied most review comments by Boris
> - Use the name 'AP jump table' consistently
> - Make kexec-disabling for unsupported guests x86-specific
> - Cleanup and consolidate patches to detect GHCB v2 protocol
> support
>
> Joerg Roedel (10):
> x86/kexec/64: Disable kexec when SEV-ES is active
> x86/sev: Save and print negotiated GHCB protocol version
> x86/sev: Set GHCB data structure version
> x86/sev: Cache AP Jump Table Address
> x86/sev: Setup code to park APs in the AP Jump Table
> x86/sev: Park APs on AP Jump Table with GHCB protocol version 2
> x86/sev: Use AP Jump Table blob to stop CPU
> x86/sev: Add MMIO handling support to boot/compressed/ code
> x86/sev: Handle CLFLUSH MMIO events
> x86/kexec/64: Support kexec under SEV-ES with AP Jump Table Blob
>
> arch/x86/boot/compressed/sev.c | 45 +-
> arch/x86/include/asm/insn-eval.h | 1 +
> arch/x86/include/asm/realmode.h | 5 +
> arch/x86/include/asm/sev-ap-jumptable.h | 29 +
> arch/x86/include/asm/sev.h | 11 +-
> arch/x86/kernel/machine_kexec_64.c | 12 +
> arch/x86/kernel/process.c | 8 +
> arch/x86/kernel/sev-shared.c | 233 +++++-
> arch/x86/kernel/sev.c | 404 +++++------
> arch/x86/lib/insn-eval-shared.c | 913 ++++++++++++++++++++++++
> arch/x86/lib/insn-eval.c | 909 +----------------------
> arch/x86/realmode/Makefile | 9 +-
> arch/x86/realmode/rm/Makefile | 11 +-
> arch/x86/realmode/rm/header.S | 3 +
> arch/x86/realmode/rm/sev.S | 85 +++
> arch/x86/realmode/rmpiggy.S | 6 +
> arch/x86/realmode/sev/Makefile | 33 +
> arch/x86/realmode/sev/ap_jump_table.S | 131 ++++
> arch/x86/realmode/sev/ap_jump_table.lds | 24 +
> 19 files changed, 1730 insertions(+), 1142 deletions(-)
> create mode 100644 arch/x86/include/asm/sev-ap-jumptable.h
> create mode 100644 arch/x86/lib/insn-eval-shared.c
> create mode 100644 arch/x86/realmode/rm/sev.S
> create mode 100644 arch/x86/realmode/sev/Makefile
> create mode 100644 arch/x86/realmode/sev/ap_jump_table.S
> create mode 100644 arch/x86/realmode/sev/ap_jump_table.lds
>
>
> base-commit: e783362eb54cd99b2cac8b3a9aeac942e6f6ac07
> --
> 2.34.1
>
Hi Joerg,

I tried the patch set with the 5.17.0-rc1 kernel, and I have a few
questions:
1) Is this a bug, or does qemu-kvm 6.2.0 need a specific patch? I
found that qemu-kvm exits with status 0 when I try to reboot the VM
with SEV-ES enabled. With only SEV enabled, the VM reboots with no
problem:
[root@...l-per7525-03 ~]# virsh start TW-SEV-ES --console
....
Fedora Linux 35 (Server Edition)
Kernel 5.17.0-rc1 on an x86_64 (ttyS0)
....
[root@...ora ~]# reboot
.....
[ 48.077682] reboot: Restarting system
[ 48.078109] reboot: machine restart
^^^^^^^^^^^^^^^ guest VM reached restart
[root@...l-per7525-03 ~]# echo $?
0
^^^ qemu-kvm exited with 0; no reboot back into the normal VM kernel
[root@...l-per7525-03 ~]#

2) With SEV-ES enabled and the two patch sets applied: A) [PATCH v3 00/10]
x86/sev: KEXEC/KDUMP support for SEV-ES guests, and B) [PATCH v6 0/7] KVM:
SVM: Add initial GHCB protocol version 2 support, I can enable kdump and
have a vmcore generated:
[root@...ora ~]# dmesg|grep -i sev
[ 0.030600] SEV: Hypervisor GHCB protocol version support: min=1 max=2
[ 0.030602] SEV: Using GHCB protocol version 2
[ 0.296144] AMD Memory Encryption Features active: SEV SEV-ES
[ 0.450991] SEV: AP jump table Blob successfully set up
[root@...ora ~]# kdumpctl status
kdump: Kdump is operational

However, without the two patch sets, I can also enable kdump and have a
vmcore generated:
[root@...ora ~]# dmesg|grep -i sev
[ 0.295754] AMD Memory Encryption Features active: SEV SEV-ES
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ patch sets A & B
not applied, so only this line appears.
[root@...ora ~]# echo c > /proc/sysrq-trigger
...
[ 2.759403] kdump[549]: saving vmcore-dmesg.txt to /sysroot/var/crash/127.0.0.1-2022-04-18-05:58:50/
[ 2.804355] kdump[555]: saving vmcore-dmesg.txt complete
[ 2.806915] kdump[557]: saving vmcore
^^^^^^^^^^^^^ vmcore can still be generated
...
[ 7.068981] reboot: Restarting system
[ 7.069340] reboot: machine restart
[root@...l-per7525-03 ~]# echo $?
0
^^^ same exit issue as question 1.

I don't have a complete technical background on the patch set, but isn't
this the issue the patch set is trying to solve? Or have I missed
something?

Thanks,
Tao Liu