Message-ID: <0bedae59-5397-9cae-3c2a-66bc376f5616@oracle.com>
Date: Wed, 18 Nov 2020 11:29:52 +0100
From: Alexandre Chartre <alexandre.chartre@...cle.com>
To: David Laight <David.Laight@...LAB.COM>,
Borislav Petkov <bp@...en8.de>
Cc: "tglx@...utronix.de" <tglx@...utronix.de>,
"mingo@...hat.com" <mingo@...hat.com>,
"hpa@...or.com" <hpa@...or.com>, "x86@...nel.org" <x86@...nel.org>,
"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
"luto@...nel.org" <luto@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"thomas.lendacky@....com" <thomas.lendacky@....com>,
"jroedel@...e.de" <jroedel@...e.de>,
"konrad.wilk@...cle.com" <konrad.wilk@...cle.com>,
"jan.setjeeilers@...cle.com" <jan.setjeeilers@...cle.com>,
"junaids@...gle.com" <junaids@...gle.com>,
"oweisse@...gle.com" <oweisse@...gle.com>,
"rppt@...ux.vnet.ibm.com" <rppt@...ux.vnet.ibm.com>,
"graf@...zon.de" <graf@...zon.de>,
"mgross@...ux.intel.com" <mgross@...ux.intel.com>,
"kuzuno@...il.com" <kuzuno@...il.com>
Subject: Re: [RFC][PATCH v2 00/21] x86/pti: Defer CR3 switch to C code
On 11/18/20 10:30 AM, David Laight wrote:
> From: Alexandre Chartre
>> Sent: 18 November 2020 07:42
>>
>>
>> On 11/17/20 10:26 PM, Borislav Petkov wrote:
>>> On Tue, Nov 17, 2020 at 07:12:07PM +0100, Alexandre Chartre wrote:
>>>> Some benchmarks are available, in particular from phoronix:
>>>
>>> What I was expecting was benchmarks *you* have run which show that
>>> perf penalty, not something one can find quickly on the internet and
>>> something one cannot always reproduce her-/himself.
>>>
>>> You do know that presenting convincing numbers with a patchset greatly
>>> improves its chances of getting it upstreamed, right?
>>>
>>
>> Well, it looks like I wrongfully assume that KPTI was a well known performance
>> overhead since it was introduced (because it adds extra page-table switches),
>> but you are right I should be presenting my own numbers.
>
> IIRC the penalty comes from the page table switch.
> Doing it at a different time is unlikely to make much difference.
>
Correct, this RFC does not change the overhead. However, it is a step toward
being able to execute selected syscalls or interrupt handlers without switching
to the kernel page-table at all. The next step would be to identify and add the
necessary mappings to the user page-table so that the selected syscalls can be
executed without switching the page-table.
> For some workloads the penalty is massive - getting on for 50%.
> We are still using old kernels on AWS.
>
Here are some micro-benchmarks of the getppid and getpid syscalls which highlight
the PTI overhead. They use the kernel tools/perf command and the getpid benchmark
from libMicro (https://github.com/redhat-performance/libMicro):
system running 5.10-rc4 booted with nopti:
------------------------------------------
# perf bench syscall basic
# Running 'syscall/basic' benchmark:
# Executed 10000000 getppid() calls
Total time: 0.792 [sec]
0.079223 usecs/op
12622549 ops/sec
# getpid -B 100000
            prc  thr   usecs/call  samples  errors  cnt/samp
getpid        1    1      0.08029      102       0    100000
We can see that the getpid and getppid syscalls have the same execution
time, around 0.08 usecs. These syscalls are very small and just return
a value, so the time is mostly spent entering/exiting the kernel.
same system booted with pti:
----------------------------
# perf bench syscall basic
# Running 'syscall/basic' benchmark:
# Executed 10000000 getppid() calls
Total time: 2.025 [sec]
0.202527 usecs/op
4937605 ops/sec
# getpid -B 100000
            prc  thr   usecs/call  samples  errors  cnt/samp
getpid        1    1      0.20241      102       0    100000
With PTI, the execution time jumps to 0.20 usecs (+0.12 usecs = +150%).
That's an extreme case because these syscalls do almost no work, so the
overhead of switching page-tables is large compared to the execution time
of the syscall itself.
So with an overhead of +0.12 usecs per syscall, the PTI impact is significant
for workloads that issue a lot of short syscalls. For longer syscalls the
relative overhead shrinks: with an average execution time of 2.0 usecs per
syscall, the same +0.12 usecs is only about a 6% overhead.
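For reference, here is a minimal standalone loop (my own quick sketch, not
code taken from perf or libMicro) which measures roughly the same thing and
can be used to reproduce the numbers above:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define LOOPS 10000000UL

int main(void)
{
        struct timespec t0, t1;
        unsigned long i;
        double ns;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < LOOPS; i++)
                syscall(SYS_getppid);   /* force a real syscall each time */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
             + (double)(t1.tv_nsec - t0.tv_nsec);
        printf("%.6f usecs/op\n", ns / LOOPS / 1000.0);
        return 0;
}

Compile with "gcc -O2", run it once on a kernel booted with nopti and once
with pti, and the usecs/op figure should be close to the perf and libMicro
numbers above.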
alex.