lists.openwall.net - Open Source and information security mailing list archives
Date:   Mon, 25 Apr 2022 15:51:35 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
        arnd@...db.de, Borislav Petkov <bp@...en8.de>, x86@...nel.org
Subject: Re: [PATCH v6 13/17] x86: use fallback for random_get_entropy()
 instead of zero

On Mon, Apr 25, 2022 at 03:41:00PM +0200, Jason A. Donenfeld wrote:
> .text:0000000000001A70 add_interrupt_randomness proc near
> .text:0000000000001A70                 movsxd  rcx, edi
> .text:0000000000001A73                 rdtsc
> .text:0000000000001A75                 shl     rdx, 20h
> .text:0000000000001A79                 mov     rdi, [rsp+0]
> .text:0000000000001A7D                 or      rax, rdx
> .text:0000000000001A80                 mov     rdx, offset irq_randomness
> .text:0000000000001A87                 mov     rsi, gs:__irq_regs
> .text:0000000000001A8F                 add     rdx, gs:this_cpu_off

For context, here's what your suggested change looks like:

.text:0000000000001AF0 add_interrupt_randomness proc near
.text:0000000000001AF0                 push    rbx
.text:0000000000001AF1                 movsxd  rbx, edi
.text:0000000000001AF4                 jmp     loc_25AA
.text:0000000000001AF9 loc_1AF9:                               ; CODE XREF: add_interrupt_randomness+AC1↓j
.text:0000000000001AF9                 rdtsc
.text:0000000000001AFB                 shl     rdx, 20h
.text:0000000000001AFF                 or      rax, rdx
.text:0000000000001B02
.text:0000000000001B02 loc_1B02:                               ; CODE XREF: add_interrupt_randomness+12C↓j
.text:0000000000001B02                 mov     rdx, offset irq_randomness
.text:0000000000001B09                 mov     rcx, gs:__irq_regs
.text:0000000000001B11                 add     rdx, gs:this_cpu_off
[...]
.altinstr_aux:00000000000025AA loc_25AA:                               ; CODE XREF: add_interrupt_randomness+4↑j
.altinstr_aux:00000000000025AA                 test    byte ptr cs:boot_cpu_data+20h, 10h
.altinstr_aux:00000000000025B1                 jnz     loc_1AF9
.altinstr_aux:00000000000025B7                 jmp     loc_1C17

So there's an extra push, and then that jmp (I hope) gets nop'd out at
boot, plus the altinstr area. I guess that's not awful, but I fail to
see what we actually gain over the existing code, which doesn't bother
with any runtime check when the kernel builder opts out. The ifdef has
always been there; what's the reason for adding this now, in 2022?
Virtualization cannot be it.

Jason
