Message-ID: <YrAlPKeOowD5qv/B@linutronix.de>
Date:   Mon, 20 Jun 2022 09:43:56 +0200
From:   Sebastian Siewior <bigeasy@...utronix.de>
To:     "Jason A. Donenfeld" <Jason@...c4.com>
Cc:     Jann Horn <jannh@...gle.com>, Theodore Ts'o <tytso@....edu>,
        LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] random: Fix signal_pending() usage

On 2022-06-18 00:47:12 [+0200], Jason A. Donenfeld wrote:
> Hi Sebastian,
Hi Jason,

> You're a bit late to the thread :). It used to be 256. Now it's page
> size. PAGE_SIZE is also what /dev/zero and others in mem.c use.

Just managed to get to that part of the inbox ;)

> As for your suggestion to drop it entirely: that'd be nice, in that it'd
> add a guarantee that currently doesn't exist. But it can lead to
> somewhat large delays if somebody tries to read 2 gigabytes at a time
> and hits Ctrl+C during it. That seems potentially bad?

So on my x86 box which runs a Debian kernel (based on v5.18.2):

| ~$ dd if=/dev/random of=/dev/null bs=2147483648 count=1
| 0+1 records in
| 0+1 records out
| 2147479552 bytes (2,1 GB, 2,0 GiB) copied, 5,97452 s, 359 MB/s

almost 6 seconds. On a smaller box it might take 12 s or more. Your
implementation change ensured that it does not block for an unpredictable
amount of time: previously it would block until the random pool was
filled with enough entropy, which could take an unforeseen amount of
time. That read now makes more or less constant progress since it
depends only on CPU time.
Based on that, I don't see a problem with dropping that signal check,
especially since requests larger than 4 KiB are most likely exotic.

> Or that's not bad, which would be quite nice, as I would really love to
> add that guarantee. So if you have an argument that not responding to
> signals for that amount of time is fine, I'd be interested to hear it.

Just my two cents.

> Jason

Sebastian
