Date:   Sun, 31 Mar 2019 17:12:48 +0200
From:   Borislav Petkov <bp@...en8.de>
To:     Paolo Bonzini <pbonzini@...hat.com>
Cc:     LKML <linux-kernel@...r.kernel.org>,
        Nadav Amit <nadav.amit@...il.com>,
        Andy Lutomirski <luto@...capital.net>,
        Peter Zijlstra <peterz@...radead.org>,
        "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
        Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
        Radim Krčmář <rkrcmar@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>, x86@...nel.org
Subject: Re: [PATCH 3/5] x86/kvm: Convert some slow-path static_cpu_has()
 callers to boot_cpu_has()

On Sun, Mar 31, 2019 at 04:20:11PM +0200, Paolo Bonzini wrote:
> These are not slow path.

Those functions do a *lot* of stuff, like a bunch of MSR reads, each of
which takes tens of cycles at least.
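
To illustrate the shape of such a path (a hypothetical fragment, not the
actual KVM code under discussion; the feature bit and MSR are chosen only
for illustration):

        /* Sketch: the feature check sits next to MSR accesses that each
         * cost tens of cycles, so the check itself is noise. */
        if (boot_cpu_has(X86_FEATURE_XSAVES)) {         /* MOV + BT */
                u64 xss;

                rdmsrl(MSR_IA32_XSS, xss);              /* tens of cycles+ */
                /* ... more MSR traffic and state save/restore ... */
        }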

I don't think a RIP-relative MOV and a BT:

        movq    boot_cpu_data+20(%rip), %rax    # MEM[(const long unsigned int *)&boot_cpu_data + 20B], _45
        btq     $59, %rax       #, _45

are at all noticeable.
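
(That MOV+BT pair is just what boot_cpu_has() expands to: a bit test on
boot_cpu_data's capability bitmap. A simplified sketch, omitting the
compile-time REQUIRED_MASK shortcut in arch/x86/include/asm/cpufeature.h:)

        #define boot_cpu_has(bit)  cpu_has(&boot_cpu_data, bit)
        /* simplified: the real cpu_has() goes through test_cpu_cap() */
        #define cpu_has(c, bit) \
                test_bit(bit, (unsigned long *)((c)->x86_capability))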

On the latest AMD and Intel uarchs those two instructions are 2-4 cycles, according to

https://agner.org/optimize/instruction_tables.ods
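
The conversion itself is mechanical; a representative hunk would look
something like this (illustrative, not quoted from the actual patch):

        -       if (static_cpu_has(X86_FEATURE_XSAVES))
        +       if (boot_cpu_has(X86_FEATURE_XSAVES))

Back of the envelope: 2-4 cycles for the check against tens of cycles for
even a single RDMSR puts the check at a low single-digit percentage of one
MSR read, and these paths do several.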

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
