Message-Id: <20221108151532.1377783-1-pbonzini@redhat.com>
Date:   Tue,  8 Nov 2022 10:15:24 -0500
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     nathan@...nel.org, thomas.lendacky@....com,
        andrew.cooper3@...rix.com, peterz@...radead.org,
        jmattson@...gle.com, seanjc@...gle.com
Subject: [PATCH v2 0/8] KVM: SVM: fixes for vmentry code

This series comprises two related fixes:

- the FILL_RETURN_BUFFER macro in -next needs to access percpu data,
  so the GS segment base must be loaded before FILL_RETURN_BUFFER runs.
  This means moving the guest vmload/vmsave and the host vmload to
  assembly (patches 5 and 6).

- because AMD wants the OS to set STIBP to 1 before executing the
  return thunk (un)training sequence, IA32_SPEC_CTRL must be restored
  before UNTRAIN_RET as well.  This restore must therefore also move to
  assembly and, for consistency, the guest SPEC_CTRL is loaded there
  too (patch 7).

Neither fix is particularly hard; however, because of 32-bit systems,
the number of arguments to __svm_vcpu_run must be kept to three or
fewer.  One is taken by the flag for whether IA32_SPEC_CTRL is
intercepted and one by the host save area, so all accesses to the
vcpu_svm struct have to be done from assembly too.  This is done in
patches 2 to 4, and it turns out not to be that bad; in fact I think
the code is simpler than before after these prerequisites, and even at
the end of the series it is not much harder to follow despite doing a
lot more work.  Care has been taken to keep the "normal" and SEV-ES
code as similar as possible, even though the latter would not hit the
three-argument limit.

The above summary leaves out the more mundane patches 1 and 8.  The
former introduces a separate asm-offsets.c file for KVM, so that
kernel/asm-offsets.c does not have to do ugly includes with ../ paths.
The latter removes dead code.

Thanks,

Paolo

v1->v2: use a separate asm-offsets.c file instead of hacking around
	the arch/x86/kvm/svm/svm.h file; this could also have been done
	with just a "#ifndef COMPILE_OFFSETS", but Sean's suggestion is
	cleaner and there is a precedent in drivers/memory/ for private
	asm-offsets files

	keep preparatory cleanups together at the beginning of the
	series

	move SPEC_CTRL save/restore out of line [Jim]

Paolo Bonzini (8):
  KVM: x86: use a separate asm-offsets.c file
  KVM: SVM: replace regs argument of __svm_vcpu_run with vcpu_svm
  KVM: SVM: adjust register allocation for __svm_vcpu_run
  KVM: SVM: retrieve VMCB from assembly
  KVM: SVM: move guest vmsave/vmload to assembly
  KVM: SVM: restore host save area from assembly
  KVM: SVM: move MSR_IA32_SPEC_CTRL save/restore to assembly
  x86, KVM: remove unnecessary argument to x86_virt_spec_ctrl and
    callers

 arch/x86/include/asm/spec-ctrl.h |  10 +-
 arch/x86/kernel/asm-offsets.c    |   6 -
 arch/x86/kernel/cpu/bugs.c       |  15 +-
 arch/x86/kvm/Makefile            |  12 ++
 arch/x86/kvm/kvm-asm-offsets.c   |  28 ++++
 arch/x86/kvm/svm/svm.c           |  53 +++----
 arch/x86/kvm/svm/svm.h           |   4 +-
 arch/x86/kvm/svm/svm_ops.h       |   5 -
 arch/x86/kvm/svm/vmenter.S       | 241 ++++++++++++++++++++++++-------
 arch/x86/kvm/vmx/vmenter.S       |   2 +-
 10 files changed, 259 insertions(+), 117 deletions(-)
 create mode 100644 arch/x86/kvm/kvm-asm-offsets.c

-- 
2.31.1
