Message-ID: <20191126143657.GA9395@willie-the-truck>
Date: Tue, 26 Nov 2019 14:36:58 +0000
From: Will Deacon <will@...nel.org>
To: Matthias Brugger <mbrugger@...e.com>
Cc: "qi.fuli@...itsu.com" <qi.fuli@...itsu.com>,
"tokamoto@...fujitsu.com" <tokamoto@...fujitsu.com>,
Jon Masters <jcm@...masters.org>,
Jonathan Corbet <corbet@....net>,
"peterz@...radead.org" <peterz@...radead.org>,
Catalin Marinas <catalin.marinas@....com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
Will Deacon <will.deacon@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"maeda.naoaki@...itsu.com" <maeda.naoaki@...itsu.com>,
"misono.tomohiro@...itsu.com" <misono.tomohiro@...itsu.com>,
Itaru Kitayama <itaru.kitayama@...il.com>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"indou.takao@...itsu.com" <indou.takao@...itsu.com>,
Robert Richter <rrichter@...vell.com>
Subject: Re: [PATCH 0/2] arm64: Introduce boot parameter to disable TLB flush
instruction within the same inner shareable domain

On Tue, Nov 26, 2019 at 03:26:48PM +0100, Matthias Brugger wrote:
> On 01/11/2019 18:28, Will Deacon wrote:
> > On Fri, Nov 01, 2019 at 09:56:05AM +0000, qi.fuli@...itsu.com wrote:
> >> First of all, thanks for the comments on the patch.
> >>
> >> I'm still struggling to find a solution to this problem.
> >> Having investigated it, I think it is necessary to improve the kernel's
> >> TLB flush mechanism in order to fix this problem completely.
> >>
> >> So, I'd like to restart the discussion. First, I will summarize the problem
> >> to recall what it was, and then I would like to discuss how to fix it.
> >>
> >> Summary of the problem:
> >> A few months ago I proposed patches to solve a performance problem due
> >> to TLB flush.[1]
> >>
> >> The problem is that a TLB flush on one core affects all of the other
> >> cores, even when those cores do not need the flush, and this causes
> >> performance degradation.
> >>
> >> In this thread, I explained that:
> >> * I found a performance problem which is caused by the TLBI-is instruction.
> >> * The problem occurs like this:
> >> 1) On one core, the OS flushes the TLB using a TLBI-is instruction
> >> 2) The TLBI-is instruction is broadcast to all other cores, and each
> >> core receives a hard-wired signal
> >> 3) Each core checks whether it has TLB entries with the specified
> >> ASID/VA
> >
> > For those following along at home, my understanding is that this "check"
> > effectively stalls the pipeline as though it is being performed in software.
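
For those following along, the two flavours of invalidation being discussed
look roughly like this at the instruction level. This is an illustrative
sketch only (the kernel's real helpers live behind the __tlbi() macros in
arch/arm64/include/asm/tlbflush.h), and TLBI can only be executed at EL1 or
above:

#include <stdint.h>

/*
 * Flush all TLB entries tagged with a given ASID.
 *
 * "tlbi aside1is" broadcasts the invalidation to every core in the
 * inner-shareable domain; "tlbi aside1" only affects the local core.
 * The ASID goes in bits [63:48] of the register operand.
 */
static inline void flush_asid_broadcast(uint64_t asid)
{
	uint64_t arg = asid << 48;

	asm volatile("dsb ishst\n\t"
		     "tlbi aside1is, %0\n\t"
		     "dsb ish\n\t"
		     "isb" : : "r" (arg) : "memory");
}

static inline void flush_asid_local(uint64_t asid)
{
	uint64_t arg = asid << 48;

	asm volatile("dsb nshst\n\t"
		     "tlbi aside1, %0\n\t"
		     "dsb nsh\n\t"
		     "isb" : : "r" (arg) : "memory");
}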
> >
> > Some questions:
> >
> > Does this mean a malicious virtual machine can effectively DoS the system?
> > What about a malicious application calling mprotect()?
> >
> > Do all broadcast TLBI instructions cause this expensive check, or are
> > some significantly slower than others?
> >
> >> 4) This check causes performance degradation
> >> * We ran FWQ[2] and detected OS jitter due to this problem; this noise
> >> is serious for HPC usage.
> >>
> >> Noise here means the difference between the maximum and the minimum time
> >> that the same piece of work takes.
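
As an aside for those following along: FWQ repeatedly times a fixed quantum
of work and reports the spread. A simplified sketch of that kind of
measurement (not the actual FWQ benchmark; it assumes arm64 Linux, where the
generic timer's CNTVCT_EL0 counter is readable from user space):

#include <stdio.h>
#include <stdint.h>

static inline uint64_t read_counter(void)
{
	uint64_t val;

	asm volatile("mrs %0, cntvct_el0" : "=r" (val));
	return val;
}

/* The same dummy "work quantum" every iteration. */
static uint64_t do_fixed_work(void)
{
	volatile uint64_t acc = 0;

	for (int i = 0; i < 100000; i++)
		acc += i;
	return acc;
}

int main(void)
{
	uint64_t min = UINT64_MAX, max = 0;

	for (int i = 0; i < 10000; i++) {
		uint64_t t0 = read_counter();

		do_fixed_work();

		uint64_t dt = read_counter() - t0;

		if (dt < min)
			min = dt;
		if (dt > max)
			max = dt;
	}

	/* "Noise" in the sense above: spread between fastest and slowest run. */
	printf("min=%llu max=%llu noise=%llu (counter ticks)\n",
	       (unsigned long long)min, (unsigned long long)max,
	       (unsigned long long)(max - min));
	return 0;
}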
> >>
> >> How to fix:
> >> I think the cause is the TLB flush by TLBI-is, because that instruction
> >> affects cores that have nothing to do with the flush.
> >
> > Does broadcast I-cache maintenance cause the same problem?
> >
> >> So the previous patch I posted does the following:
> >> * Use mm_cpumask in mm_struct to find the appropriate CPUs for the TLB flush
> >> * Execute TLBI instead of TLBI-is, targeting only the CPUs specified by
> >> mm_cpumask (this is the same behavior as arm32 and x86)
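
A very rough sketch of the alternative being described, i.e. an IPI to the
CPUs that have actually run the mm plus a local (non-broadcast) TLBI there,
might look like the following. This is illustrative pseudo-kernel code, not
the posted patch: ipi_flush_local_asid() and flush_tlb_mm_targeted() are
made-up names, and ASID() stands for the arm64 macro that extracts the mm's
hardware ASID:

#include <linux/smp.h>
#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Per-CPU callback: flush only the local TLB for this mm's ASID. */
static void ipi_flush_local_asid(void *info)
{
	struct mm_struct *mm = info;
	unsigned long arg = (unsigned long)ASID(mm) << 48;

	asm volatile("dsb nshst\n\t"
		     "tlbi aside1, %0\n\t"
		     "dsb nsh\n\t"
		     "isb" : : "r" (arg) : "memory");
}

/*
 * Instead of a single broadcast TLBI-is, run the local flush on the CPUs
 * recorded in mm_cpumask(), the way arm32 and x86 do it.
 */
static void flush_tlb_mm_targeted(struct mm_struct *mm)
{
	on_each_cpu_mask(mm_cpumask(mm), ipi_flush_local_asid, mm, true);
}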
> >>
> >> And after the discussion about this patch, I received the following comments:
> >> 1) This patch switches the behavior (original flush by TLBI-is vs. new
> >> flush by TLBI) via a boot parameter; this implementation is not
> >> acceptable due to poor maintainability.
> >> 2) Even if this patch fixes this problem, it may cause another
> >> performance problem.
> >>
> >> I'd like to start the implementation over, taking these points into account.
> >> For the second comment above, I will run a benchmark test to analyze the
> >> impact on performance.
> >> Please let me know if there are other points I should take into
> >> consideration.
> >
> > I think it's worth bearing in mind that I have little sympathy for the
> > problem that you are seeing. As far as I can tell, you've done the
> > following:
> >
> > 1. You designed a CPU micro-architecture that stalls whenever it receives
> > a TLB invalidation request.
> >
> > 2. You integrated said CPU design into a system where broadcast TLB
> > invalidation is not filtered and therefore stalls every CPU every
> > time that /any/ TLB invalidation is broadcast.
> >
> > 3. You deployed a mixture of Linux and jitter-sensitive software on
> > this system, and now you're failing to meet your performance
> > requirements.
> >
> > Have I got that right?
> >
> > If so, given that your CPU design isn't widely available, nobody else
> > appears to have made this mistake and jitter hasn't been reported as an
> > issue for any other systems, it's very unlikely that we're going to make
> > invasive upstream kernel changes to support you. I'm sorry, but all I can
> > suggest is that you check that your micro-architecture and performance
> > requirements are aligned with the design of Linux *before* building another
> > machine like this in future.
> >
>
> I just wanted to note that the cover letter states that they have also seen
> this on ThunderX1 and ThunderX2.
>
> Not sure about other machines, like the Huawei TaiShan 200 series.
>
> What I want to say is that this does not seem to be something that only
> affects Fujitsu, but other vendors as well. So maybe we should consider
> adding an erratum like the one for the repeated TLBI on Qualcomm SoCs.

Careful here -- we're talking about a reported performance issue, not a
correctness one. The "repeated TLBI" sequence is very much a workaround for
the latter.

In the case of TX1/TX2, I can imagine the "let's sit in a loop of mprotect()
calls" scaling poorly, which is what the cover letter is referring to, but
that's not really a workload that we need to optimise for. However, the case
that Fujitsu are reporting seems to go beyond that because of the design of
their CPU micro-architecture, where even just a single TLB invalidation
message stalls all of the other CPUs in the system. I don't have any reason
to believe that particular problem affects other CPU designs.
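
For reference, the kind of mprotect() loop I mean is trivial to construct --
roughly something like the sketch below, where each protection change ends
up issuing broadcast TLB invalidation for the range on arm64:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64 * getpagesize();
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, len);		/* fault the pages in */

	for (int i = 0; i < 1000000; i++) {
		/* Each protection change flushes the stale TLB entries. */
		mprotect(buf, len, PROT_READ);
		mprotect(buf, len, PROT_READ | PROT_WRITE);
	}

	munmap(buf, len);
	return 0;
}
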
Thanks,
Will