Date:   Mon,  3 Feb 2020 15:17:43 -0500
From:   Andrea Arcangeli <aarcange@...hat.com>
To:     Will Deacon <will@...nel.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Jon Masters <jcm@...masters.org>,
        Rafael Aquini <aquini@...hat.com>,
        Mark Salter <msalter@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        linux-arm-kernel@...ts.infradead.org
Subject: [RFC] [PATCH 0/2] arm64: tlb: skip tlbi broadcast for single threaded TLB flushes

Hello everyone,

I've been testing the arm64 ARMv8 tlbi broadcast instruction and it
doesn't appear to scale in SMP, which opens the door to using tricks
similar to what alpha, ia64, mips, powerpc, sh and sparc already do
to optimize TLB flushes for single threaded processes. This should be
even more beneficial on NUMA or multi socket systems, where the "ASID"
and "vaddr" information has to cross a longer distance before the tlbi
broadcast instruction can be retired.
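
To make the local vs broadcast distinction concrete, here is a
minimal sketch (mine, not from the patch; the sketch_* names are
made up) built on the existing __tlbi()/__TLBI_VADDR()/dsb() helpers
from arch/arm64/include/asm/tlbflush.h. The only difference is the
inner-shareable "is" suffix, which turns the per-ASID flush into a
broadcast that every CPU in the inner-shareable domain must honor:

==
static inline void sketch_local_flush_asid(struct mm_struct *mm)
{
	unsigned long asid = __TLBI_VADDR(0, ASID(mm));

	dsb(nshst);		/* order prior PTE updates, local domain */
	__tlbi(aside1, asid);	/* invalidate this CPU's TLB only */
	dsb(nsh);
}

static inline void sketch_broadcast_flush_asid(struct mm_struct *mm)
{
	unsigned long asid = __TLBI_VADDR(0, ASID(mm));

	dsb(ishst);		/* order prior PTE updates, inner-shareable */
	__tlbi(aside1is, asid);	/* broadcast to every CPU's TLB */
	dsb(ish);
}
==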

This mm_users logic seems non-standard across the different arches:
every arch does it in its own way. Not only is the implementation
different, but different arches are also trying to optimize different
cases through the mm_users checks in the arch code:

1) avoiding remote TLB flushes when mm_users <= 1 (see the sketch
   after this list)

2) avoiding even local TLB flushes during exit_mmap->unmap_vmas when
   mm_users == 0
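
As a sketch of case 1) (hypothetical helper names, not code from the
patch; a real implementation also has to handle the task having
migrated between CPUs and the ordering of the mm_users read against
the page table update):

==
static inline void sketch_flush_tlb_mm(struct mm_struct *mm)
{
	/*
	 * If we are the only user of this mm and we are running on
	 * it, no other CPU should need its translations invalidated,
	 * so a CPU-local flush is enough. Caveat: stale entries left
	 * behind by a migration still need handling in a real version.
	 */
	if (atomic_read(&mm->mm_users) <= 1 && current->mm == mm) {
		local_flush_tlb_mm(mm);		/* this CPU only */
		return;
	}
	broadcast_flush_tlb_mm(mm);		/* tlbi ..is broadcast */
}
==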

For now I've only tried to implement 1) on arm64, but I'm left
wondering which other arches can achieve 2), and in turn which kernel
code could write to the very userland virtual addresses being unmapped
during exit_mmap, which would strictly require flushing them through
the tlb gather mechanism.
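
For reference, the path in question looks roughly like this in
kernels of this era (simplified from mm/mmap.c): by the time
exit_mmap() runs, mm_users has already dropped to 0, so arches
implementing 2) turn the flushes issued under the mmu_gather below
into no-ops.

==
void exit_mmap(struct mm_struct *mm)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma = mm->mmap;

	/* ... */
	tlb_gather_mmu(&tlb, mm, 0, -1);
	unmap_vmas(&tlb, vma, 0, -1);	/* arch flush hooks run here */
	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);
	tlb_finish_mmu(&tlb, 0, -1);
	/* ... */
}
==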

This is just an RFC to find out whether this would be a viable
optimization. A basic microbenchmark is included in the commit header
of the patch.
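
Purely for illustration (the real microbenchmark and its numbers are
in the commit header), the kind of single-threaded loop that stresses
this path looks like the following: each munmap ends in a TLB flush
that, with the patch, can stay CPU-local instead of being a tlbi
broadcast.

==
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	long i, iters = 1000000;
	struct timespec t0, t1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < iters; i++) {
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		*p = 1;			/* fault the page in */
		munmap(p, 4096);	/* triggers the TLB flush */
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("%.1f ns per mmap+munmap\n",
	       ((t1.tv_sec - t0.tv_sec) * 1e9 +
		(t1.tv_nsec - t0.tv_nsec)) / iters);
	return 0;
}
==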

Thanks,
Andrea

Andrea Arcangeli (2):
  mm: use_mm: fix for arches checking mm_users to optimize TLB flushes
  arm64: tlb: skip tlbi broadcast for single threaded TLB flushes

 arch/arm64/include/asm/efi.h         |  2 +-
 arch/arm64/include/asm/mmu.h         |  3 +-
 arch/arm64/include/asm/mmu_context.h | 10 +--
 arch/arm64/include/asm/tlbflush.h    | 91 +++++++++++++++++++++++++++-
 arch/arm64/mm/context.c              | 13 +++-
 mm/mmu_context.c                     |  2 +
 6 files changed, 111 insertions(+), 10 deletions(-)

Some examples of the scattered status of 2) follow:

ia64:

==
flush_tlb_mm (struct mm_struct *mm)
{
	if (!mm)
		return;

	set_bit(mm->context, ia64_ctx.flushmap);
	mm->context = 0;

	if (atomic_read(&mm->mm_users) == 0)
		return;		/* happens as a result of exit_mmap() */
[..]
==

sparc:

==
void flush_tlb_all(void)
{
	/*
	 * Don't bother flushing if this address space is about to be
	 * destroyed.
	 */
	if (atomic_read(&current->mm->mm_users) == 0)
		return;
[..]
static void fix_range(struct mm_struct *mm, unsigned long start_addr,
		      unsigned long end_addr, int force)
{
	/*
	 * Don't bother flushing if this address space is about to be
	 * destroyed.
	 */
	if (atomic_read(&mm->mm_users) == 0)
		return;
[..]
==

arc:

==
noinline void local_flush_tlb_mm(struct mm_struct *mm)
{
	/*
	 * Small optimisation courtesy IA64
	 * flush_mm called during fork,exit,munmap etc, multiple times as well.
	 * Only for fork( ) do we need to move parent to a new MMU ctxt,
	 * all other cases are NOPs, hence this check.
	 */
	if (atomic_read(&mm->mm_users) == 0)
		return;
[..]
==
