Date:   Tue, 15 May 2018 10:05:58 -0400
From:   Steven Rostedt <rostedt@...dmis.org>
To:     LKML <linux-kernel@...r.kernel.org>
Cc:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Kees Cook <keescook@...omium.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Tobin C. Harding" <me@...in.cc>
Subject: [PATCH] vsprintf: Fix memory barriers of ptr_key to
 have_filled_random_ptr_key


From: Steven Rostedt (VMware) <rostedt@...dmis.org>

Reviewing Tobin's patches for getting pointers out early before
entropy has been established, I noticed that there's a lone smp_mb() in
the code. As with most lone memory barriers, this one appears to be
incorrectly used.

We currently basically have this:

	get_random_bytes(&ptr_key, sizeof(ptr_key));
	/*
	 * have_filled_random_ptr_key==true is dependent on get_random_bytes().
	 * ptr_to_id() needs to see have_filled_random_ptr_key==true
	 * after get_random_bytes() returns.
	 */
	smp_mb();
	WRITE_ONCE(have_filled_random_ptr_key, true);

And later we have:

	if (unlikely(!have_filled_random_ptr_key))
		return string(buf, end, "(ptrval)", spec);

/* Missing memory barrier here. */

	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);

As the CPU can perform loads speculatively, we could end up with the
following situation:

	CPU0				CPU1
	----				----
				   load ptr_key = 0
   store ptr_key = random
   smp_mb()
   store have_filled_random_ptr_key

				   load have_filled_random_ptr_key = true

				    BAD BAD BAD!

This is bad because nothing prevents CPU1 from loading ptr_key before it
loads have_filled_random_ptr_key: CPU1 can see the flag set to true and
still hash with the old (zero) ptr_key.

Note, I also do not see a reason to use smp_mb() instead of smp_wmb(),
since we only care about ordering the store of ptr_key against the store
of have_filled_random_ptr_key.
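
For anyone who wants to poke at the ordering outside the kernel, here is a
rough userspace sketch of the pairing we end up with, written with C11
fences instead of smp_wmb()/smp_rmb(). The names (publish_key(), read_key(),
key_ready) are made up for the example, and the C11 release/acquire fences
are a bit stronger than the kernel barriers, but the idea is the same: a
writer-side store/store ordering paired with a reader-side barrier.

/*
 * Userspace analogue of the barrier pairing, not the vsprintf code itself.
 * Built with: cc -std=c11 -pthread example.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t key;			/* plays the role of ptr_key */
static atomic_bool key_ready;		/* plays have_filled_random_ptr_key */

/* Writer: fill the key, then publish the flag (cf. fill_random_ptr_key()). */
static void publish_key(uint64_t k)
{
	key = k;
	/* Order the store to key before the store to key_ready (~ smp_wmb()). */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&key_ready, true, memory_order_relaxed);
}

/* Reader: check the flag, then read the key (cf. ptr_to_id()). */
static bool read_key(uint64_t *out)
{
	if (!atomic_load_explicit(&key_ready, memory_order_relaxed))
		return false;
	/* Order the load of key after the load of key_ready (~ smp_rmb()). */
	atomic_thread_fence(memory_order_acquire);
	*out = key;
	return true;
}

int main(void)
{
	uint64_t k;

	publish_key(0xdeadbeef);
	if (read_key(&k))
		printf("key = %#llx\n", (unsigned long long)k);
	return 0;
}

Without the reader-side fence, nothing stops the load of key from being
hoisted above the load of key_ready, which is exactly the race in the
diagram above.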

Cc: stable@...r.kernel.org
Fixes: ad67b74d2469d ("printk: hash addresses printed with %p")
Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
---
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 30c0cb8cc9bc..e8a0b8e54bd3 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1680,7 +1680,7 @@ static void fill_random_ptr_key(struct random_ready_callback *unused)
 	 * ptr_to_id() needs to see have_filled_random_ptr_key==true
 	 * after get_random_bytes() returns.
 	 */
-	smp_mb();
+	smp_wmb();
 	WRITE_ONCE(have_filled_random_ptr_key, true);
 }
 
@@ -1715,6 +1715,9 @@ static char *ptr_to_id(char *buf, char *end, void *ptr, struct printf_spec spec)
 		return string(buf, end, "(ptrval)", spec);
 	}
 
+	/* Read ptr_key after reading have_filled_random_ptr_key */
+	smp_rmb();
+
 #ifdef CONFIG_64BIT
 	hashval = (unsigned long)siphash_1u64((u64)ptr, &ptr_key);
 	/*
