Message-Id: <20220423212623.1957011-14-Jason@zx2c4.com>
Date:   Sat, 23 Apr 2022 23:26:19 +0200
From:   "Jason A. Donenfeld" <Jason@...c4.com>
To:     linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org,
        tglx@...utronix.de, arnd@...db.de
Cc:     "Jason A. Donenfeld" <Jason@...c4.com>,
        Borislav Petkov <bp@...en8.de>, x86@...nel.org
Subject: [PATCH v6 13/17] x86: use fallback for random_get_entropy() instead of zero

In the event that random_get_entropy() can't access a cycle counter or
similar, falling back to returning 0 is really not the best we can do.
Instead, at least calling random_get_entropy_fallback() would be
preferable, because that is guaranteed to return _something_, even if
it eventually has to fall back to jiffies. It's not as though
random_get_entropy_fallback() is super high precision or guaranteed to
be entropic, but basically anything that's not zero all the time is
better than returning zero all the time.
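
For illustration only, here is a minimal sketch of the weakest
acceptable behavior (the real random_get_entropy_fallback() lives
elsewhere in this series and only resorts to jiffies as a last step):

	#include <linux/jiffies.h>

	/*
	 * Sketch, not the actual helper: a value that changes over
	 * time, however coarse, still beats a constant zero.
	 */
	static inline unsigned long entropy_fallback_sketch(void)
	{
		return jiffies;
	}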

If CONFIG_X86_TSC=n, then it's possible that we're running on a 486 with
no RDTSC, so we only need the fallback code for that case. While we're
at it, fix up both the new function and the get_cycles() function from
which it's derived to use cpu_feature_enabled() rather than
boot_cpu_has().

Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Borislav Petkov <bp@...en8.de>
Cc: x86@...nel.org
Signed-off-by: Jason A. Donenfeld <Jason@...c4.com>
---
 arch/x86/include/asm/timex.h | 10 ++++++++++
 arch/x86/include/asm/tsc.h   |  4 ++--
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/timex.h b/arch/x86/include/asm/timex.h
index a4a8b1b16c0c..2b841c39d876 100644
--- a/arch/x86/include/asm/timex.h
+++ b/arch/x86/include/asm/timex.h
@@ -5,6 +5,16 @@
 #include <asm/processor.h>
 #include <asm/tsc.h>
 
+static inline unsigned long random_get_entropy(void)
+{
+#ifndef CONFIG_X86_TSC
+	if (!cpu_feature_enabled(X86_FEATURE_TSC))
+		return random_get_entropy_fallback();
+#endif
+	return rdtsc();
+}
+#define random_get_entropy random_get_entropy
+
 /* Assume we use the PIT time source for the clock tick */
 #define CLOCK_TICK_RATE		PIT_TICK_RATE
 
diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 01a300a9700b..192ea48ec80b 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -21,12 +21,12 @@ extern void disable_TSC(void);
 static inline cycles_t get_cycles(void)
 {
 #ifndef CONFIG_X86_TSC
-	if (!boot_cpu_has(X86_FEATURE_TSC))
+	if (!cpu_feature_enabled(X86_FEATURE_TSC))
 		return 0;
 #endif
-
 	return rdtsc();
 }
+#define get_cycles get_cycles
 
 extern struct system_counterval_t convert_art_to_tsc(u64 art);
 extern struct system_counterval_t convert_art_ns_to_tsc(u64 art_ns);
-- 
2.35.1
