Message-Id: <1548966284-28642-1-git-send-email-karahmed@amazon.de>
Date:   Thu, 31 Jan 2019 21:24:30 +0100
From:   KarimAllah Ahmed <karahmed@...zon.de>
To:     x86@...nel.org, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Radim Krčmář <rkrcmar@...hat.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Cc:     KarimAllah Ahmed <karahmed@...zon.de>
Subject: [PATCH v6 00/14] KVM/X86: Introduce a new guest mapping interface

Guest memory can either be directly managed by the kernel (i.e. have a "struct
page") or it can simply live outside kernel control (i.e. have no "struct
page"). KVM mostly supports these two modes, except in a few places where the
code assumes that guest memory must have a "struct page".

This patchset introduces a new mapping interface for mapping guest memory into
host kernel memory which also supports PFN-based memory (i.e. memory without a
'struct page'). It also converts all offending code either to this interface or
to direct reads/writes from/to guest memory. Patch 2 additionally fixes an
incorrect page release and marks the page as dirty (as a side effect of using
the helper function to write).
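
For illustration, a rough sketch of how a caller might use the new API. The
function names come from the patches in this series; the struct name, field
names and exact types here are only meant to convey the idea and may differ
from the actual definitions in patch 3:

/*
 * Sketch only: the real definitions live in patch 3
 * ("KVM: Introduce a new guest mapping API").
 */
struct kvm_host_map map;

/* Map a guest frame; works whether or not the PFN has a "struct page". */
if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
        return -EFAULT;

/* Access the guest page through the host virtual address. */
memcpy(map.hva, data, len);

/* Unmap; 'true' marks the page dirty (needed for live migration). */
kvm_vcpu_unmap(vcpu, &map, true);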

As far as I can see, all offending code is now fixed except for the APIC-access
page, which I will handle in a separate series along with dropping
kvm_vcpu_gfn_to_page and kvm_vcpu_gpa_to_page from the internal KVM API.

The current implementation of the new API uses memremap to map memory that does
not have a "struct page". This proves to be very slow for high-frequency
mappings. Since this does not affect the normal use case where a "struct page"
is available, the performance of this API will be addressed in a separate patch
series.
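
To make the above concrete, here is a rough sketch (not the literal patch code)
of the mapping strategy: kmap the page when the PFN is kernel-managed, and fall
back to memremap for PFN-only memory, which is the slow path mentioned above:

/* Sketch of the map path; needs <linux/highmem.h> and <linux/io.h>. */
static void *map_guest_pfn(kvm_pfn_t pfn)
{
        if (pfn_valid(pfn))
                /* Normal case: the PFN is backed by a "struct page". */
                return kmap(pfn_to_page(pfn));

        /* PFN-only memory (e.g. carved out with mem=): use memremap. */
        return memremap(pfn << PAGE_SHIFT, PAGE_SIZE, MEMREMAP_WB);
}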

So the simple way to use memory outside kernel control is:

1- Pass 'mem=' on the kernel command line to limit the amount of memory managed
   by the kernel.
2- Map the physical memory you want to give to the guest by mmap()ing /dev/mem
   at the corresponding physical address offset.
3- Use the resulting user-space virtual address as the "userspace_addr" field in
   the KVM_SET_USER_MEMORY_REGION ioctl (see the sketch below).
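
A minimal user-space sketch of steps 2 and 3 (the physical address, size, and
slot number are made-up example values, and error handling is omitted):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Map host-reserved physical RAM into the guest address space. */
static int map_reserved_ram(int vm_fd, uint64_t phys_addr, uint64_t size)
{
        int mem_fd = open("/dev/mem", O_RDWR | O_SYNC);
        void *host_va = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_SHARED, mem_fd, phys_addr);

        struct kvm_userspace_memory_region region = {
                .slot            = 0,      /* example slot number */
                .guest_phys_addr = 0,      /* where the guest will see it */
                .memory_size     = size,
                .userspace_addr  = (uint64_t)host_va,
        };

        /* vm_fd comes from ioctl(kvm_fd, KVM_CREATE_VM, 0). */
        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}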

v5 -> v6:
- Added one extra patch to ensure that support for this mem= case is complete
  for x86.
- Added a helper function to check whether a mapping is valid.
- Added more comments on the struct.
- Set ->page to NULL on unmap and to a poison pointer if it is unused during map.
- Check the map pointer before using it.
- Changed kvm_vcpu_unmap to also mark the page dirty for live migration, which
  requires passing the vCPU pointer to this function again.

v4 -> v5:
- Introduced a new 'dirty' parameter for kvm_vcpu_unmap
- A horrible rebase due to nested.c :)
- Dropped a couple of Hyper-V patches, as the code had already been fixed as a
  side effect of another patch.
- Added a new trivial cleanup patch.

v3 -> v4:
- Rebase
- Add a new patch to also fix the newly introduced enlightened VMCS.

v2 -> v3:
- Rebase
- Add a new patch to also fix the newly introduced shadow VMCS.

Filippo Sironi (1):
  X86/KVM: Handle PFNs outside of kernel reach when touching GPTEs

KarimAllah Ahmed (13):
  X86/nVMX: handle_vmon: Read 4 bytes from guest memory
  X86/nVMX: Update the PML table without mapping and unmapping the page
  KVM: Introduce a new guest mapping API
  X86/nVMX: handle_vmptrld: Use kvm_vcpu_map when copying VMCS12 from
    guest memory
  KVM/nVMX: Use kvm_vcpu_map when mapping the L1 MSR bitmap
  KVM/nVMX: Use kvm_vcpu_map when mapping the virtual APIC page
  KVM/nVMX: Use kvm_vcpu_map when mapping the posted interrupt
    descriptor table
  KVM/X86: Use kvm_vcpu_map in emulator_cmpxchg_emulated
  KVM/nSVM: Use the new mapping API for mapping guest memory
  KVM/nVMX: Use kvm_vcpu_map for accessing the shadow VMCS
  KVM/nVMX: Use kvm_vcpu_map for accessing the enlightened VMCS
  KVM/nVMX: Use page_address_valid in a few more locations
  kvm, x86: Properly check whether a pfn is an MMIO or not

 arch/x86/include/asm/e820/api.h |   1 +
 arch/x86/kernel/e820.c          |  18 ++++-
 arch/x86/kvm/mmu.c              |   5 +-
 arch/x86/kvm/paging_tmpl.h      |  38 +++++++---
 arch/x86/kvm/svm.c              |  97 ++++++++++++------------
 arch/x86/kvm/vmx/nested.c       | 160 +++++++++++++++-------------------------
 arch/x86/kvm/vmx/vmx.c          |  19 ++---
 arch/x86/kvm/vmx/vmx.h          |   9 ++-
 arch/x86/kvm/x86.c              |  14 ++--
 include/linux/kvm_host.h        |  28 +++++++
 virt/kvm/kvm_main.c             |  64 ++++++++++++++++
 11 files changed, 267 insertions(+), 186 deletions(-)

-- 
2.7.4
