Date:   Thu, 29 Apr 2021 10:28:19 +0200
From:   Ingo Molnar <mingo@...nel.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
        Peter Zijlstra <a.p.zijlstra@...llo.nl>,
        Borislav Petkov <bp@...en8.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Rik van Riel <riel@...riel.com>,
        Nadav Amit <namit@...are.com>
Subject: [GIT PULL] x86/mm changes for v5.13

Linus,

Please pull the latest x86/mm git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86-mm-2021-04-29

   # HEAD: a500fc918f7b8dc3dff2e6c74f3e73e856c18248 Merge branch 'locking/core' into x86/mm, to resolve conflict

The x86 MM changes in this cycle were:

 - Implement concurrent TLB flushes, which overlap the local TLB flush with the
   remote TLB flushes (a rough sketch of the idea follows below). In testing this
   measurably improved sysbench performance by a couple of percentage points,
   especially when TLB-heavy security mitigations are active.

 - Further micro-optimizations to improve the performance of TLB flushes.
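
As a rough illustration of the concurrent-flush idea (a userspace sketch, not
the kernel code: remote_flush()/local_flush() are made-up names and pthreads
stand in for IPIs to remote CPUs), the initiating CPU kicks off the remote
flushes asynchronously, flushes its own TLB while they run, and only then
waits for the remote ones to finish:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NR_REMOTE 3	/* pretend three remote CPUs need a flush */

/* Stand-in for a remote CPU's flush handler (an IPI handler in the kernel). */
static void *remote_flush(void *arg)
{
	usleep(1000);		/* pretend to invalidate that CPU's TLB */
	return NULL;
}

/* Stand-in for flushing the initiating CPU's own TLB. */
static void local_flush(void)
{
	usleep(1000);
}

int main(void)
{
	pthread_t remote[NR_REMOTE];

	/* 1) Kick off the remote flushes asynchronously (the IPIs). */
	for (int i = 0; i < NR_REMOTE; i++)
		pthread_create(&remote[i], NULL, remote_flush, NULL);

	/*
	 * 2) Flush the local TLB *while* the remote CPUs do theirs.
	 *    The old scheme only did this after all remotes had finished.
	 */
	local_flush();

	/* 3) Only now wait for the remote flushes to complete. */
	for (int i = 0; i < NR_REMOTE; i++)
		pthread_join(remote[i], NULL);

	printf("all flushes done; local flush overlapped with the remote ones\n");
	return 0;
}

In the series itself this overlap comes from letting
smp_call_function_many_cond() run the callback on the local CPU as well (see
the smp: patches in the shortlog below), instead of serializing the local
flush after the remote IPIs have completed.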

 Thanks,

	Ingo

------------------>
Nadav Amit (9):
      smp: Run functions concurrently in smp_call_function_many_cond()
      x86/mm/tlb: Unify flush_tlb_func_local() and flush_tlb_func_remote()
      x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()
      x86/mm/tlb: Flush remote and local TLBs concurrently
      x86/mm/tlb: Privatize cpu_tlbstate
      x86/mm/tlb: Do not make is_lazy dirty for no reason
      cpumask: Mark functions as pure
      x86/mm/tlb: Remove unnecessary uses of the inline keyword
      smp: Inline on_each_cpu_cond() and on_each_cpu()

Peter Zijlstra (1):
      smp: Micro-optimize smp_call_function_many_cond()


 arch/x86/hyperv/mmu.c                 |  10 +-
 arch/x86/include/asm/paravirt.h       |   6 +-
 arch/x86/include/asm/paravirt_types.h |   4 +-
 arch/x86/include/asm/tlbflush.h       |  48 ++++----
 arch/x86/include/asm/trace/hyperv.h   |   2 +-
 arch/x86/kernel/alternative.c         |   2 +-
 arch/x86/kernel/kvm.c                 |  11 +-
 arch/x86/kernel/paravirt.c            |   2 +-
 arch/x86/mm/init.c                    |   2 +-
 arch/x86/mm/tlb.c                     | 176 ++++++++++++++++------------
 arch/x86/xen/mmu_pv.c                 |  11 +-
 include/linux/cpumask.h               |   6 +-
 include/linux/smp.h                   |  50 +++++---
 include/trace/events/xen.h            |   2 +-
 kernel/smp.c                          | 212 ++++++++++++++--------------------
 kernel/up.c                           |  38 +-----
 16 files changed, 287 insertions(+), 295 deletions(-)
