Date:   Sun, 29 Oct 2017 10:08:17 +0100 (CET)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Andy Lutomirski <luto@...nel.org>
cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>, X86 ML <x86@...nel.org>,
        Borislav Petkov <bp@...en8.de>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        linux-abi@...r.kernel.org
Subject: Re: Can we break RDPID/RDTSCP ABI ASAP and see if it's okay?

On Sat, 28 Oct 2017, Andy Lutomirski wrote:

> We currently do this on boot:
> 
> write_rdtscp_aux((node << 12) | cpu);
> 
> This *sucks*.  It means that, to very quickly obtain the CPU number
> using RDPID, an ALU op is needed.  It also doesn't bloody work on
> systems with more than 4096 CPUs.
> 
> IMO it should be ((u64)node << 32) | cpu.  Then getting the CPU number is just:

That breaks 32-bit.

> RDPID %rax
> MOVL %eax, %eax
> 
> I'm thinking about this because rseq users could avoid ever *loading*
> the rseq cacheline if they used RDPID to get the CPU number, and it
> would be nice to give them a sane way to do it.
> 
> This won't break any existing RDPID users if we do it quickly because
> there aren't any (the CPUs aren't available).  I would be a bit
> surprised if anyone uses RDTSCP for this because it's absurdly slow.
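
For reference, a minimal sketch of what the current (node << 12) | cpu
layout means for a user-space reader, assuming IA32_TSC_AUX is read via
RDTSCP; the helper names here are purely illustrative:

	/* Illustrative only: decode the current layout, (node << 12) | cpu,
	 * as programmed into IA32_TSC_AUX by write_rdtscp_aux() at boot. */
	#include <stdint.h>

	static inline uint32_t read_tsc_aux(void)
	{
		uint32_t lo, hi, aux;

		/* RDTSCP loads IA32_TSC_AUX into ECX as a side effect */
		__asm__ volatile("rdtscp" : "=a"(lo), "=d"(hi), "=c"(aux));
		return aux;
	}

	static inline unsigned int current_cpu(void)
	{
		/* the masking is the extra ALU op, and 12 bits cap
		 * usable CPU numbers at 4095 */
		return read_tsc_aux() & 0xfff;
	}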

What we can do on 64bit is:

     ((u64) cpu << 32) | (node << 12) | (cpu & 0xfff)

That does not solve the ALU op problem, but it works on both 32-bit and
64-bit and on systems with more than 4096 CPUs.
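
Decoding that layout would look roughly like the sketch below, assuming
RDPID returns the full 64-bit IA32_TSC_AUX value in 64-bit mode and the
assembler knows the instruction; field widths and helper names are
illustrative only:

	/* Illustrative only: decode ((u64) cpu << 32) | (node << 12) | (cpu & 0xfff) */
	#include <stdint.h>

	static inline uint64_t rdpid(void)
	{
		uint64_t aux;

		__asm__ volatile("rdpid %0" : "=r"(aux));
		return aux;
	}

	static inline unsigned int cpu_from_aux(uint64_t aux)
	{
		return aux >> 32;		/* 64-bit users: still one ALU op */
	}

	static inline unsigned int cpu_from_aux_compat(uint32_t aux)
	{
		return aux & 0xfff;		/* low 12 bits keep the current 32-bit view */
	}

	static inline unsigned int node_from_aux(uint32_t aux)
	{
		return aux >> 12;		/* node field, bits 12-31 of the low word */
	}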

Thanks,

	tglx
