Date:   Wed, 12 Feb 2020 15:26:24 +0000
From:   Catalin Marinas <catalin.marinas@....com>
To:     "qi.fuli@...itsu.com" <qi.fuli@...itsu.com>
Cc:     Andrea Arcangeli <aarcange@...hat.com>,
        Will Deacon <will@...nel.org>,
        Jon Masters <jcm@...masters.org>,
        Rafael Aquini <aquini@...hat.com>,
        Mark Salter <msalter@...hat.com>,
        "linux-mm@...ck.org" <linux-mm@...ck.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH 2/2] arm64: tlb: skip tlbi broadcast for single threaded
 TLB flushes

On Wed, Feb 12, 2020 at 02:13:56PM +0000, qi.fuli@...itsu.com wrote:
> On 2/4/20 5:17 AM, Andrea Arcangeli wrote:
> > With multiple NUMA nodes and multiple sockets, the tlbi broadcast
> > must be delivered across the interconnects, in turn increasing
> > interconnect traffic and the latency of the tlbi broadcast instruction.
> > 
> > Even within a single NUMA node the latency of the tlbi broadcast
> > instruction increases almost linearly with the number of CPUs trying to
> > send tlbi broadcasts at the same time.
> > 
> > When the process is single threaded, however, we can achieve full SMP
> > scalability by skipping the tlbi broadcasting. Other arches already
> > deploy this optimization.
> > 
> > After the local TLB flush, however, the ASID context goes out of
> > sync on all CPUs except the local one. This can be tracked in
> > mm_cpumask(mm): if the bit is set, the ASID context is stale for
> > that CPU. This results in an extra local ASID TLB flush only if a
> > single threaded process is migrated to a different CPU, and only
> > after a TLB flush. No extra local TLB flush is needed for the common
> > case of a single threaded process being rescheduled on the same CPU,
> > nor for multithreaded processes.
> > 
> > Skipping the tlbi instruction broadcast is already implemented in
> > local_flush_tlb_all(); this patch only extends it to flush_tlb_mm(),
> > flush_tlb_range() and flush_tlb_page().
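
A toy userspace model of the stale-ASID bookkeeping described above
(hypothetical names; a sketch of the idea, not the actual patch code):
a single threaded flush invalidates only the local TLB and marks the
ASID stale on every other CPU, and the skipped broadcast is paid for
with one extra local flush if the task later migrates.

  /* Sketch: stale-ASID tracking for a single threaded task.
   * Hypothetical names, illustration only -- not the arm64 patch.
   * Each bit in "stale" stands for one CPU whose TLB may still
   * hold entries tagged with our ASID. */
  #include <stdio.h>
  #include <stdint.h>

  static uint32_t stale;  /* bit set => ASID stale on that CPU */

  /* Single threaded flush: invalidate only the local TLB and mark
   * the ASID stale everywhere else, instead of broadcasting tlbi. */
  static void flush_local_only(int cpu)
  {
          printf("cpu%d: local tlbi, no broadcast\n", cpu);
          stale = ~0u & ~(1u << cpu);
  }

  /* Context switch in: if this CPU is marked stale, pay for the
   * skipped broadcast with a single extra local flush. */
  static void switch_in(int cpu)
  {
          if (stale & (1u << cpu)) {
                  printf("cpu%d: stale ASID, extra local flush\n", cpu);
                  stale &= ~(1u << cpu);
          }
  }

  int main(void)
  {
          flush_local_only(0);    /* mprotect() flush on cpu0 */
          switch_in(0);           /* rescheduled on cpu0: no extra work */
          switch_in(5);           /* migrated to cpu5: one local flush */
          return 0;
  }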
> > 
> > Here are the results of 32 single threaded processes running
> > mprotect() at the same time on 32 CPUs (ARMv8 Ampere), before the
> > patch:
> > 
> >   Performance counter stats for './loop' (3 runs):
> > 
> >                   0      dummy
> > 
> >            2.121353 +- 0.000387 seconds time elapsed  ( +-  0.02% )
> > 
> > and with the patch applied:
> > 
> >   Performance counter stats for './loop' (3 runs):
> > 
> >                   0      dummy
> > 
> >           0.1197750 +- 0.0000827 seconds time elapsed  ( +-  0.07% )
> 
> I have tested this patch on ThunderX2 with the Himeno benchmark [1]
> using the LARGE calculation size. Here are the results.
> 
>    w/o patch:   MFLOPS : 1149.480174
>    w/  patch:   MFLOPS : 1110.653003
> 
> In order to validate the effectiveness of the patch, I ran a
> single-threaded program, which calls mprotect() in a loop (see the
> sketch below) to issue tlbi broadcast instructions from one CPU core.
> At the same time, I ran the Himeno benchmark on another CPU core. The
> results are:
> 
>    w/o patch:   MFLOPS :  860.238792
>    w/  patch:   MFLOPS : 1110.449666
> 
> Though the Himeno benchmark is a microbenchmark, I hope it helps.
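
For reference, the single threaded interfering load described above can
be approximated with a loop like this (an illustrative reconstruction,
not the actual test program), pinned to one core with e.g.
"taskset -c 0 ./mploop":

  /* Endless mprotect() loop: every protection change forces a TLB
   * flush, so on an unpatched kernel this core keeps sending tlbi
   * broadcasts that slow down the benchmark on the other core. */
  #include <sys/mman.h>

  int main(void)
  {
          long len = 16 * 4096;
          char *p = mmap(0, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED)
                  return 1;
          for (;;) {
                  mprotect(p, len, PROT_READ);
                  mprotect(p, len, PROT_READ | PROT_WRITE);
          }
  }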

It doesn't really help. What if you have a two-thread program calling
mprotect() in a loop? IOW, how is this relevant to real-world scenarios?

Thanks.

-- 
Catalin
