Message-Id: <20210924163152.289027-1-pbonzini@redhat.com>
Date:   Fri, 24 Sep 2021 12:31:21 -0400
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc:     dmatlack@...gle.com, seanjc@...gle.com
Subject: [PATCH v3 00/31] KVM: x86: pass arguments on the page fault path via struct kvm_page_fault

The current KVM page fault handlers pass many arguments around between
functions.  To simplify those arguments and local variables, introduce
a data structure, struct kvm_page_fault, to hold them.  struct
kvm_page_fault is allocated on the stack by the caller of the KVM fault
handler, kvm_mmu_do_page_fault(), and passed around from there.

Later in the series, my patches are interleaved with David's work to
add the memory slot to the struct and avoid repeated lookups.  Along the
way you will find some cleanups of functions with a ludicrous number of
arguments, so that they use struct kvm_page_fault as much as possible
or at least receive related information from a single argument.  make_spte
in particular goes from 11 to 10 arguments (yeah I know) despite gaining
two for kvm_mmu_page and kvm_memory_slot.

This can sometimes be a bit debatable (for example struct kvm_mmu_page
is used a little more on the TDP MMU paths), but overall I think the
result is an improvement.  For example the SET_SPTE_* constants go
away, and they absolutely didn't belong in the TDP MMU.  But if you
disagree with some of the changes, please speak up loudly!

Testing: survives kvm-unit-tests on Intel with all of ept=0; ept=1,
tdp_mmu=0; and ept=1.  Will do more before committing to it in kvm/next
of course.

Paolo

David Matlack (5):
  KVM: x86/mmu: Fold rmap_recycle into rmap_add
  KVM: x86/mmu: Pass the memslot around via struct kvm_page_fault
  KVM: x86/mmu: Avoid memslot lookup in page_fault_handle_page_track
  KVM: x86/mmu: Avoid memslot lookup in rmap_add
  KVM: x86/mmu: Avoid memslot lookup in make_spte and
    mmu_try_to_unsync_pages

Paolo Bonzini (25):
  KVM: MMU: pass unadulterated gpa to direct_page_fault
  KVM: MMU: Introduce struct kvm_page_fault
  KVM: MMU: change mmu->page_fault() arguments to kvm_page_fault
  KVM: MMU: change direct_page_fault() arguments to kvm_page_fault
  KVM: MMU: change page_fault_handle_page_track() arguments to
    kvm_page_fault
  KVM: MMU: change kvm_faultin_pfn() arguments to kvm_page_fault
  KVM: MMU: change handle_abnormal_pfn() arguments to kvm_page_fault
  KVM: MMU: change __direct_map() arguments to kvm_page_fault
  KVM: MMU: change FNAME(fetch)() arguments to kvm_page_fault
  KVM: MMU: change kvm_tdp_mmu_map() arguments to kvm_page_fault
  KVM: MMU: change tdp_mmu_map_handle_target_level() arguments to
    kvm_page_fault
  KVM: MMU: change fast_page_fault() arguments to kvm_page_fault
  KVM: MMU: change kvm_mmu_hugepage_adjust() arguments to kvm_page_fault
  KVM: MMU: change disallowed_hugepage_adjust() arguments to
    kvm_page_fault
  KVM: MMU: change tracepoints arguments to kvm_page_fault
  KVM: MMU: mark page dirty in make_spte
  KVM: MMU: unify tdp_mmu_map_set_spte_atomic and
    tdp_mmu_set_spte_atomic_no_dirty_log
  KVM: MMU: inline set_spte in mmu_set_spte
  KVM: MMU: inline set_spte in FNAME(sync_page)
  KVM: MMU: clean up make_spte return value
  KVM: MMU: remove unnecessary argument to mmu_set_spte
  KVM: MMU: set ad_disabled in TDP MMU role
  KVM: MMU: pass kvm_mmu_page struct to make_spte
  KVM: MMU: pass struct kvm_page_fault to mmu_set_spte
  KVM: MMU: make spte an in-out argument in make_spte

Sean Christopherson (1):
  KVM: x86/mmu: Verify shadow walk doesn't terminate early in page
    faults

 arch/x86/include/asm/kvm_host.h       |   4 +-
 arch/x86/include/asm/kvm_page_track.h |   4 +-
 arch/x86/kvm/mmu.h                    |  84 +++++-
 arch/x86/kvm/mmu/mmu.c                | 408 +++++++++++---------------
 arch/x86/kvm/mmu/mmu_internal.h       |  22 +-
 arch/x86/kvm/mmu/mmutrace.h           |  18 +-
 arch/x86/kvm/mmu/page_track.c         |   6 +-
 arch/x86/kvm/mmu/paging_tmpl.h        | 137 +++++----
 arch/x86/kvm/mmu/spte.c               |  29 +-
 arch/x86/kvm/mmu/spte.h               |  14 +-
 arch/x86/kvm/mmu/tdp_mmu.c            | 123 +++-----
 arch/x86/kvm/mmu/tdp_mmu.h            |   4 +-
 12 files changed, 390 insertions(+), 463 deletions(-)

-- 
2.27.0
