Message-ID: <CAATStaOf_VN4BYBccSBk2bOpgvFwzH0xW=PncnyNo+SC41GXDw@mail.gmail.com>
Date: Wed, 20 Jan 2021 23:28:52 +1100
From: "Anand K. Mistry" <amistry@...gle.com>
To: Borislav Petkov <bp@...en8.de>
Cc: x86@...nel.org, Anthony Steinhauser <asteinhauser@...gle.com>,
tglx@...utronix.de, Joel Fernandes <joelaf@...gle.com>,
Alexandre Chartre <alexandre.chartre@...cle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Julien Thierry <jthierry@...hat.com>,
Maciej Fijalkowski <maciej.fijalkowski@...el.com>,
Mark Gross <mgross@...ux.intel.com>,
Mike Rapoport <rppt@...nel.org>,
Paolo Bonzini <pbonzini@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Tony Luck <tony.luck@...el.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH] x86/speculation: Add finer control for when to issue IBPB
> >
> > Signed-off-by: Anand K Mistry <amistry@...gle.com>
> > Signed-off-by: Anand K Mistry <amistry@...omium.org>
>
> Two SoBs by you, why?
Probably a tooling issue. Not intentional.
>
> > ---
> > Background:
> > IBPB is slow on some CPUs.
> >
> > More detailed background:
> > On some CPUs, issuing an IBPB can cause the address space switch to be
> > 10x more expensive (yes, 10x, not 10%).
>
> Which CPUs are those?!
AMD A4-9120C. Probably the A6-9220C too, but I don't have one of those
machines to test with.
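For context, the barrier itself is just one MSR write; the cost is
all in what the microcode does in response. A minimal sketch of what
it boils down to (an assumption on my part that this matches
indirect_branch_prediction_barrier() in
arch/x86/include/asm/nospec-branch.h, which goes through an
alternatives-patched MSR write keyed on X86_FEATURE_USE_IBPB):

/* Kernel context; needs <asm/msr.h> and <asm/msr-index.h>. */
static inline void ibpb_sketch(void)
{
	/*
	 * A single WRMSR. The microcode flushes indirect branch
	 * predictor state in response, and that flush is what makes
	 * the mm switch so expensive on the CPUs above.
	 */
	wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
}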
>
> > On a system that makes heavy use of processes, this can cause a very
> > significant performance hit.
>
> You're not really trying to convince reviewers for why you need to add
> more complexity to an already too complex and confusing code. "some
> CPUs" and "can cause" is not good enough.
On a simple ping-pong test between two processes (using a pair of
pipes), a process switch takes ~7us with IBPB disabled. But with it
enabled, that increases to around 80us (tested with the powersave CPU
governor).
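For reference, the test is roughly the following. This is a minimal
userspace sketch, not the exact harness I used; run it pinned to a
single CPU (e.g. under taskset) so every hop is a process switch:

/*
 * Pipe ping-pong: parent and child bounce one byte through a pair of
 * pipes, forcing a process (and mm) switch per hop.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define ITERS 100000

int main(void)
{
	int ping[2], pong[2];
	struct timespec t0, t1;
	char b = 0;

	if (pipe(ping) || pipe(pong))
		exit(1);

	if (fork() == 0) {
		/* Child: echo every byte straight back. */
		for (int i = 0; i < ITERS; i++) {
			if (read(ping[0], &b, 1) != 1)
				exit(1);
			if (write(pong[1], &b, 1) != 1)
				exit(1);
		}
		exit(0);
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (int i = 0; i < ITERS; i++) {
		if (write(ping[1], &b, 1) != 1)
			exit(1);
		if (read(pong[0], &b, 1) != 1)
			exit(1);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	wait(NULL);

	/* Each round trip is two process switches (there and back). */
	double ns = (t1.tv_sec - t0.tv_sec) * 1e9 +
		    (t1.tv_nsec - t0.tv_nsec);
	printf("%.2f us per process switch\n",
	       ns / ITERS / 2.0 / 1000.0);
	return 0;
}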
On Chrome's IPC system, a perftest running 50,000 ping-pong messages:
  without IBPB:  5579.49 ms
  with IBPB:    21396 ms
(~4x difference)
And doing video playback in the browser (which is already heavily
optimised), the IBPB hit turns out to be ~2.5% of CPU cycles. Doing a
webrtc video call (tested using http://appr.tc), it's ~9% of CPU
cycles. I don't have exact numbers, but it's worse on some real
video-conferencing apps.
>
> > I understand this is likely to be very contentious. Obviously, this
> > isn't ready for code review, but I'm hoping to get some thoughts on the
> > problem and this approach.
>
> Yes, in the absence of hard performance data, I'm not convinced at all.
With this change, I can get a >80% reduction in CPU cycles consumed by
IBPB. A video call on my test device goes from ~9% to ~0.8% of cycles
spent in IBPB. That doesn't sound like much, but it's a significant
difference on these devices.
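For reference, the existing per-task knob that conditional IBPB
(spectre_v2_user=prctl) keys off is the speculation-control prctl,
roughly like this (see Documentation/userspace-api/spec_ctrl.rst; the
fallback defines are for older userspace headers):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
#define PR_SET_SPECULATION_CTRL 53
#endif
#ifndef PR_SPEC_INDIRECT_BRANCH
#define PR_SPEC_INDIRECT_BRANCH 1
#endif
#ifndef PR_SPEC_DISABLE
#define PR_SPEC_DISABLE (1UL << 2)
#endif

int main(void)
{
	/*
	 * "Disable" here means "disable indirect branch speculation"
	 * for this task, i.e. request the mitigation; with
	 * spectre_v2_user=prctl, only mm switches involving tasks
	 * marked like this pay for the IBPB.
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
		  PR_SPEC_DISABLE, 0, 0))
		perror("prctl(PR_SET_SPECULATION_CTRL)");
	return 0;
}

This RFC is about making that kind of control finer-grained, so we
only pay for the barrier where it's actually needed.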