Message-Id: <20221109145156.84714-1-pbonzini@redhat.com>
Date:   Wed,  9 Nov 2022 09:51:45 -0500
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     thomas.lendacky@....com, jmattson@...gle.com, seanjc@...gle.com
Subject: [PATCH v3 00/11] KVM: SVM: fixes for vmentry code

This series comprises two related fixes:

- the FILL_RETURN_BUFFER macro in -next needs to access percpu data,
  hence the GS segment base needs to be loaded before FILL_RETURN_BUFFER.
  This means moving guest vmload/vmsave and host vmload to assembly
  (patches 8 and 9).

- because AMD wants the OS to set STIBP to 1 before executing the
  return thunk (un)training sequence, IA32_SPEC_CTRL must be restored
  before UNTRAIN_RET, too.  This must also be moved to assembly and,
  for consistency, the guest SPEC_CTRL is also loaded in there
  (patch 10).
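To make the two ordering constraints concrete, here is a schematic sketch of the post-#VMEXIT sequence they imply. This is an illustration of the constraints only, not the literal vmenter.S code; macro names other than FILL_RETURN_BUFFER and UNTRAIN_RET are placeholders.

```asm
	/* Sketch of the required ordering after #VMEXIT (not actual code): */
	vmload	host_save_area		/* restores host state incl. GS base  */
	wrmsr	/* IA32_SPEC_CTRL */	/* host value, STIBP=1, before...     */
	UNTRAIN_RET			/* ...the return-thunk (un)training   */
	FILL_RETURN_BUFFER		/* reads percpu data, so needs GS set */
```

The point is simply that FILL_RETURN_BUFFER must run with the host GS base already loaded, and UNTRAIN_RET must run with the host IA32_SPEC_CTRL already restored.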

Neither is particularly hard; however, because of 32-bit systems, the
number of arguments to __svm_vcpu_run must be kept to three or fewer.
Therefore, patches 2 to 7 move various accesses to the vcpu_svm struct
and to percpu data to vmenter.S, cleaning up various bits along the way
to keep the assembly code nice.  I think the code is simpler than before
after these prerequisites, and even at the end of the series it is not
much harder to follow despite doing a lot more stuff.  Care has been
taken to keep the "normal" and SEV-ES code as similar as possible,
even though the latter would not hit the three-argument limit; even
the prototype is the same.
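For context (an assumption about the build configuration, not spelled out in this message): 32-bit x86 kernels are built with -mregparm=3, so only the first three arguments of a C function arrive in registers, and anything further spills to the stack, which is awkward for a hand-written assembly entry point. Schematically, with hypothetical parameter names:

```c
/* Illustration only; the real prototype lives in svm.h.
 * With -mregparm=3 (32-bit kernel builds):
 *   arg1 -> %eax, arg2 -> %edx, arg3 -> %ecx, arg4+ -> stack
 */
void __svm_vcpu_run(struct vcpu_svm *svm,   /* %eax */
		    unsigned long arg2,     /* %edx (hypothetical) */
		    unsigned long arg3);    /* %ecx (hypothetical) */
```

Keeping the argument count at three means the assembly never has to reach onto the caller's stack for parameters on 32-bit.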

The above summary leaves patch 1, which introduces a separate asm-offsets.c
file for KVM, so that kernel/asm-offsets.c does not have to do ugly includes
with ../ paths; and patch 11, which is just more dead code removal.

Thanks,

Paolo

v2->v3: store SME-adjusted save_area physical address in svm_cpu_data,
	access it with PER_CPU_VAR instead of an argument [Sean, sort of]

	split out-of-line spec-ctrl restore macro in two parts [Sean]

	adjust comments to point out clobbered registers [Sean]

Paolo Bonzini (11):
  KVM: x86: use a separate asm-offsets.c file
  KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
  KVM: SVM: adjust register allocation for __svm_vcpu_run
  KVM: SVM: retrieve VMCB from assembly
  KVM: SVM: remove unused field from struct vcpu_svm
  KVM: SVM: remove dead field from struct svm_cpu_data
  KVM: SVM: do not allocate struct svm_cpu_data dynamically
  KVM: SVM: move guest vmsave/vmload back to assembly
  KVM: SVM: restore host save area from assembly
  KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
  x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and
    callers

 arch/x86/include/asm/spec-ctrl.h |  10 +-
 arch/x86/kernel/asm-offsets.c    |   6 -
 arch/x86/kernel/cpu/bugs.c       |  15 +-
 arch/x86/kvm/.gitignore          |   2 +
 arch/x86/kvm/Makefile            |  12 ++
 arch/x86/kvm/kvm-asm-offsets.c   |  29 ++++
 arch/x86/kvm/svm/sev.c           |   4 +-
 arch/x86/kvm/svm/svm.c           | 105 +++++--------
 arch/x86/kvm/svm/svm.h           |  11 +-
 arch/x86/kvm/svm/svm_ops.h       |   5 -
 arch/x86/kvm/svm/vmenter.S       | 260 +++++++++++++++++++++++++------
 arch/x86/kvm/vmx/vmenter.S       |   2 +-
 12 files changed, 305 insertions(+), 156 deletions(-)
 create mode 100644 arch/x86/kvm/.gitignore
 create mode 100644 arch/x86/kvm/kvm-asm-offsets.c

-- 
2.31.1
