Date:   Fri,  8 Jul 2022 00:52:38 +1200
From:   Barry Song <21cnbao@...il.com>
To:     akpm@...ux-foundation.org, linux-mm@...ck.org,
        linux-arm-kernel@...ts.infradead.org, x86@...nel.org,
        catalin.marinas@....com, will@...nel.org, linux-doc@...r.kernel.org
Cc:     corbet@....net, arnd@...db.de, linux-kernel@...r.kernel.org,
        darren@...amperecomputing.com, yangyicong@...ilicon.com,
        huzhanyuan@...o.com, lipeifeng@...o.com, zhangshiming@...o.com,
        guojian@...o.com, realmz6@...il.com, Barry Song <21cnbao@...il.com>
Subject: [PATCH 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

Though ARM64 has the hardware to do TLB shootdown, it is not free.
A simple micro benchmark shows that even on a Snapdragon 888 with
only 8 cores, the overhead of ptep_clear_flush() is significant even
when paging out one page mapped by only one process:
5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush
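
For reference, the benchmark is essentially of the following shape.
This is only a sketch (the authoritative version is in 4/4), and the
mapping size and iteration count here are arbitrary: it faults
anonymous pages in, then forces them out with MADV_PAGEOUT, so
reclaim repeatedly goes through ptep_clear_flush():

/*
 * Sketch of a pageout micro benchmark; needs an active swap device
 * (e.g. zRAM) for MADV_PAGEOUT (Linux 5.4+) to actually unmap and
 * write out the pages.
 */
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define SIZE	(1UL * 1024 * 1024)	/* arbitrary mapping size */

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	volatile unsigned char *p = mmap(NULL, SIZE,
					 PROT_READ | PROT_WRITE,
					 MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	memset((void *)p, 0x88, SIZE);

	for (int k = 0; k < 10000; k++) {
		/* Swap the pages back in by touching each one. */
		for (unsigned long i = 0; i < SIZE; i += pagesize)
			(void)p[i];
		/*
		 * Swap them out again; this is where the TLB
		 * shootdown cost shows up in the perf profile.
		 */
		madvise((void *)p, SIZE, MADV_PAGEOUT);
	}
	return 0;
}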

When pages are mapped by multiple processes, or when the hardware
has more CPUs, the cost becomes even higher due to the poor
scalability of TLB shootdown.

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH support by
1. only sending TLBI instructions in the first stage -
	arch_tlbbatch_add_mm()
2. waiting for the TLBIs to complete with a DSB while doing the
	tlbbatch sync in arch_tlbbatch_flush()
Both hooks are sketched below.
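
Concretely, the two arm64 hooks end up with roughly the following
shape. This is only a sketch of the idea, reusing the existing arm64
TLB helpers (__TLBI_VADDR, ASID, __tlbi, __tlbi_user, dsb); patch
4/4 has the authoritative version:

struct arch_tlbflush_unmap_batch {
	/*
	 * arm64 needs no per-batch state: the TLBIs broadcast to all
	 * CPUs via the inner-shareable domain.
	 */
};

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm)
{
	unsigned long asid = __TLBI_VADDR(0, ASID(mm));

	/*
	 * Stage 1: order prior PTE updates, then issue the broadcast
	 * TLBI without waiting for it to complete.
	 */
	dsb(ishst);
	__tlbi(aside1is, asid);
	__tlbi_user(aside1is, asid);
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	/* Stage 2: one barrier waits for all the TLBIs issued above. */
	dsb(ish);
}
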
My testing on the Snapdragon shows that the patchset removes the
overhead of ptep_clear_flush(). The micro benchmark becomes 5% faster
even for one page mapped by a single process on the Snapdragon 888.

While I believe the micro benchmark in 4/4 will perform even better
on arm64 servers, I don't have the hardware to test on. Thus:
Hi Yicong,
Would you like to run the same test from 4/4 on Kunpeng920?
Hi Darren,
Would you like to run the same test from 4/4 on Ampere's ARM64 server?
Remember to enable a zRAM swap device so that pageout can actually
work for the micro benchmark.
Thanks!

Barry Song (4):
  Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
    apply to ARM64"
  mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
  mm: rmap: Extend tlbbatch APIs to fit new platforms
  arm64: support batched/deferred tlb shootdown during page reclamation

 Documentation/features/arch-support.txt       |  1 -
 .../features/vm/TLB/arch-support.txt          |  2 +-
 arch/arm64/Kconfig                            |  1 +
 arch/arm64/include/asm/tlbbatch.h             | 12 +++++++++++
 arch/arm64/include/asm/tlbflush.h             | 13 ++++++++++++
 arch/x86/include/asm/tlbflush.h               |  4 +++-
 mm/rmap.c                                     | 21 +++++++++++++------
 7 files changed, 45 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm64/include/asm/tlbbatch.h

-- 
2.25.1
