Date:   Tue, 19 Apr 2022 23:22:49 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Dave Hansen <dave.hansen@...el.com>,
        LKML <linux-kernel@...r.kernel.org>
Cc:     x86@...nel.org, Andrew Cooper <andrew.cooper3@...rix.com>,
        "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
        Tom Lendacky <thomas.lendacky@....com>
Subject: Re: [patch 3/3] x86/fpu/xsave: Optimize XSAVEC/S when XGETBV1 is
 supported

On Tue, Apr 19 2022 at 15:43, Thomas Gleixner wrote:
> On Thu, Apr 14 2022 at 10:24, Dave Hansen wrote:
>> On 4/4/22 05:11, Thomas Gleixner wrote:
>>> which is suboptimal. Prefetch works better when the access is linear. But
>>> what's worse is that PKRU can be located in a different page which
>>> obviously affects dTLB.
>>
>> The numbers don't lie, but I'm still surprised by this.  Was this in a
>> VM that isn't backed with large pages?  task_struct.thread.fpu is
>> kmem_cache_alloc()'d and is in the direct map, which should be 2M/1G
>> pages almost all the time.
>
> Hmm. Indeed, that's weird.
>
> That was bare metal and I just checked that this was a production config
> and not some weird debug muck which breaks large pages. I'll look deeper
> into that.

I can't find any reasonable explanation. The pages are definitely large
pages, so yes, the dTLB miss count does not make sense. But the optimized
variant is consistently faster, and according to perf it is always the
dTLB miss count which makes the big difference.
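
For reference, the observation comes from something along these lines
(the exact event list is from memory, so treat the event names as an
assumption):

  # repeat the workload and compare the dTLB miss counts
  perf stat -r 10 -e cycles,instructions,dTLB-load-misses \
        hackbench -l 10000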

For enhanced fun, I ran the lot on an AMD Zen3 machine with the same
test case (hackbench -l 10000) repeated 10 times by perf stat. There the
optimized variant is consistently slower than the non-optimized one.
At least there is an explanation for that: a tight loop of 1 million
xgetbv(1) invocations takes 9 million cycles on a SKL-X, but 50 million
cycles on an AMD Zen3.
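
A minimal userspace sketch of such a loop (the rdtsc based timing and
the lfence serialization are my reconstruction, not necessarily the
exact test):

#include <stdint.h>
#include <stdio.h>

/* Read the TSC with a load fence so the loop cannot float around it */
static inline uint64_t rdtsc_serialized(void)
{
	uint32_t lo, hi;

	asm volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* XGETBV with ECX=1 returns the XINUSE bitmap (requires XGETBV1) */
static inline uint64_t xgetbv1(void)
{
	uint32_t eax, edx;

	asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (1));
	return ((uint64_t)edx << 32) | eax;
}

int main(void)
{
	uint64_t start, stop, sum = 0;
	int i;

	start = rdtsc_serialized();
	for (i = 0; i < 1000000; i++)
		sum += xgetbv1();
	stop = rdtsc_serialized();

	/* Print sum so the compiler cannot optimize the loop away */
	printf("%llu cycles for 1e6 xgetbv(1) calls (sum %llx)\n",
	       (unsigned long long)(stop - start), (unsigned long long)sum);
	return 0;
}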

XSAVE is wonderful, isn't it?

Thanks,

        tglx
