Message-ID: <e8fe8faa-72ef-8185-1a9d-dc1bbe0ae15d@jp.fujitsu.com>
Date:   Mon, 24 Jun 2019 10:34:02 +0000
From:   "qi.fuli@...itsu.com" <qi.fuli@...itsu.com>
To:     Will Deacon <will.deacon@....com>,
        "indou.takao@...itsu.com" <indou.takao@...itsu.com>
CC:     Jonathan Corbet <corbet@....net>,
        Catalin Marinas <catalin.marinas@....com>,
        "linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "qi.fuli@...itsu.com" <qi.fuli@...itsu.com>,
        "indou.takao@...itsu.com" <indou.takao@...itsu.com>,
        "peterz@...radead.org" <peterz@...radead.org>
Subject: Re: [PATCH 0/2] arm64: Introduce boot parameter to disable TLB flush
 instruction within the same inner shareable domain

Hi Will,

I am Takao's colleague. Thank you very much for your reply.

On 6/18/19 2:03 AM, Will Deacon wrote:
> Hi Takao,
>
> [+Peter Z]
>
> On Mon, Jun 17, 2019 at 11:32:53PM +0900, Takao Indoh wrote:
>> From: Takao Indoh <indou.takao@...itsu.com>
>>
>> I found a performance issue related to the implementation of Linux's TLB
>> flush for arm64.
>>
>> When I run a single-threaded test program in a moderate environment, it
>> usually takes 39ms to finish its work. However, when I put a small
>> application, which just calls mprotect() continuously, on one of the
>> sibling cores and run it simultaneously, the test program slows down
>> significantly. It becomes 49ms (125%) on ThunderX2. I also detected the
>> same problem on ThunderX1 and Fujitsu A64FX.
> This is a problem for any applications that share hardware resources with
> each other, so I don't think it's something we should be too concerned about
> addressing unless there is a practical DoS scenario, which there doesn't
> appear to be in this case. It may be that the real answer is "don't call
> mprotect() in a loop".
I think there has been a misunderstanding; please let me explain.
This application is just an example used to reproduce the performance
issue we found (a sketch of such a reproducer follows the reference
below). Our original purpose for this series is to reduce OS jitter.
OS jitter on massively parallel processing systems has been known and
studied for many years: even 2.5% OS jitter can result in over a
factor-of-20 slowdown for the same application [1]. Though that may be
an extreme example, reducing OS jitter has long been an issue in HPC
environments.

[1] Ferreira, Kurt B., Patrick Bridges, and Ron Brightwell. 
"Characterizing application sensitivity to OS interference using 
kernel-level noise injection." Proceedings of the 2008 ACM/IEEE 
conference on Supercomputing. IEEE Press, 2008.
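
For reference, here is a minimal sketch of the kind of noise generator
described above. The exact program is not part of this thread, so the
details here (buffer size, protection flags) are assumptions for
illustration only:

/*
 * Hypothetical reproducer sketch: call mprotect() in a tight loop so
 * that every permission change triggers a TLB invalidation, which on
 * arm64 is broadcast (TLBI ...IS) to the inner shareable domain.
 */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	buf[0] = 1;	/* fault the page in so a PTE actually exists */

	for (;;) {
		/* Each call changes the mapping and invalidates its
		 * TLB entries on every core in the domain. */
		if (mprotect(buf, page, PROT_READ) ||
		    mprotect(buf, page, PROT_READ | PROT_WRITE)) {
			perror("mprotect");
			return 1;
		}
	}
}

Pinning such a loop to a core that shares hardware with the measured
single-threaded program is what produces the 39ms -> 49ms slowdown
mentioned in the cover letter.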

>> I suppose the root cause of this issue is the implementation of Linux's
>> TLB flush for arm64, especially the use of the TLBI-is instruction, which
>> is broadcast to all processor cores in the system. In the situation above,
>> TLBI-is is issued by mprotect().
> On the flip side, Linux is providing the hardware with enough information
> not to broadcast to cores for which the remote TLBs don't have entries
> allocated for the ASID being invalidated. I would say that the root cause
> of the issue is that this filtering is not taking place.

Do you mean that this filtering should be implemented in hardware?

Thanks,
Qi Fuli

>> This is not a problem for a small environment, but it causes significant
>> performance noise in a large-scale HPC environment, which has more than a
>> thousand nodes with a low-latency interconnect.
> If you have a system with over a thousand nodes, without snoop filtering
> for DVM messages and you expect performance to scale in the face of tight
> mprotect() loops then I think you have a problem irrespective of this patch.
> What happens if somebody runs I-cache invalidation in a loop?
>
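
As an aside, the I-cache invalidation loop mentioned above could be
produced from userspace with something like the sketch below; this is
an illustrative assumption, not a workload that was measured in this
thread. __builtin___clear_cache() emits DC CVAU / IC IVAU, which are
likewise broadcast within the inner shareable domain:

#include <sys/mman.h>

int main(void)
{
	long len = 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	for (;;) {
		buf[0] ^= 1;	/* dirty the line so maintenance has work */
		__builtin___clear_cache(buf, buf + len);
	}
}
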
>> To fix this problem, this patch adds a new boot parameter,
>> 'disable_tlbflush_is'. In the case of flush_tlb_mm() *without* this
>> parameter, TLB entries are invalidated by __tlbi(aside1is, asid). With
>> this instruction, all CPUs within the same inner shareable domain check
>> whether they hold TLB entries for this ASID, which causes performance
>> noise. OTOH, when this new parameter is specified, TLB entries are
>> invalidated by __tlbi(aside1, asid) only on the CPUs specified by
>> mm_cpumask(mm). Therefore the TLB flush is done on a minimal set of CPUs
>> and the performance problem does not occur. I confirmed that the
>> performance problem is fixed by this patch.
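
For clarity, the difference between the two flows can be sketched
roughly as follows. This is a simplified illustration, not the actual
patch: the IPI-based helper is an assumption, and only the default path
mirrors the existing flush_tlb_mm() in
arch/arm64/include/asm/tlbflush.h (__tlbi, __tlbi_user, dsb, ASID and
__TLBI_VADDR are the primitives defined there):

/* Set from the proposed 'disable_tlbflush_is' boot parameter. */
static bool disable_tlbflush_is;

/* Assumed helper: non-shareable ASID invalidation on the local CPU. */
static void local_flush_tlb_asid(void *arg)
{
	unsigned long asid = (unsigned long)arg;

	dsb(nshst);
	__tlbi(aside1, asid);
	__tlbi_user(aside1, asid);
	dsb(nsh);
}

static void flush_tlb_mm(struct mm_struct *mm)
{
	unsigned long asid = __TLBI_VADDR(0, ASID(mm));

	if (!disable_tlbflush_is) {
		/*
		 * Default: one broadcast invalidation; every CPU in the
		 * inner shareable domain looks up the ASID.
		 */
		dsb(ishst);
		__tlbi(aside1is, asid);
		__tlbi_user(aside1is, asid);
		dsb(ish);
		return;
	}

	/*
	 * Opt-in: local (non-shareable) invalidation, run via IPI only
	 * on the CPUs that have actually used this mm, as described in
	 * the cover letter.
	 */
	on_each_cpu_mask(mm_cpumask(mm), local_flush_tlb_asid,
			 (void *)asid, true);
}
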
> Other than my comments above, my overall concern with this patch is that
> it introduces divergent behaviour for our TLB invalidation flow, which is
> undesirable from both maintainability and usability perspectives. If you
> wish to change the code, please don't put it behind a command-line option,
> but instead improve the code that is already there. However, I suspect that
> blowing away the local TLB on every context-switch may have hidden costs
> which are only apparent with workloads different from the contrived case
> that you're seeking to improve. You also haven't taken into account the
> effects of virtualisation, where it's likely that the hypervisor will
> upgrade non-shareable operations to inner-shareable ones anyway.
>
> Thanks,
>
> Will
