Message-ID: <e654a20c9045487eaacbd256f584ce45@AcuMS.aculab.com>
Date: Sun, 1 Dec 2024 18:12:25 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Linus Torvalds' <torvalds@...uxfoundation.org>, "'x86@...nel.org'"
	<x86@...nel.org>, "'linux-kernel@...r.kernel.org'"
	<linux-kernel@...r.kernel.org>, 'Thomas Gleixner' <tglx@...utronix.de>,
	"'Ingo Molnar'" <mingo@...hat.com>, 'Dave Hansen'
	<dave.hansen@...ux.intel.com>
CC: 'Andrew Cooper' <andrew.cooper3@...rix.com>, 'Josh Poimboeuf'
	<jpoimboe@...nel.org>, "'bp@...en8.de'" <bp@...en8.de>
Subject: [PATCH next] x86: mask_user_address() return base of guard page for
 kernel addresses

mask_user_address() currently returns ~0 for kernel addresses.
This is fine for avoiding speculative reads and get_user()
but is problematic for the code pattern:
		if (can_do_masked_user_access())
			from = masked_user_access_begin(from);
		else if (!user_read_access_begin(from, sizeof(*from)))
			return -EFAULT;
		unsafe_get_user(to->p, &from->p, Efault);
		unsafe_get_user(to->size, &from->size, Efault);
		user_read_access_end();
because of the requirement that the base address be accessed first:
with ~0, only an access to the base address itself is guaranteed to
fault; an access at a non-zero offset wraps past the top of the address
space to a small user address that may not fault.

Changing mask_user_address() to return the base of the guard page
means that any address within 4k of the tested address will fault
and the order of the structure member access is no longer critical.
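
For illustration, the effect in plain C is roughly the following (a sketch
only; bound_sketch() is a made-up name, and USER_PTR_MAX is assumed to be
the base of the 4k guard page as described above):

	static inline void __user *bound_sketch(const void __user *ptr)
	{
		unsigned long p = (__force unsigned long)ptr;
		unsigned long guard_base = runtime_const_ptr(USER_PTR_MAX);

		/* User addresses pass through unchanged; kernel addresses
		 * are clamped to the guard page base, so any access within
		 * 4k of the returned pointer faults. */
		return (__force void __user *)(p < guard_base ? p : guard_base);
	}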

The change replaces the 'sbb, or' sequence with a 'cmov', so it is also shorter.
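
In C terms (again a sketch, with the __force casts omitted) the two
sequences compute:

	/* old 'cmp; sbb; or': kernel addresses become ~0 */
	ret = ptr > USER_PTR_MAX ? ~0ul : ptr;

	/* new 'cmp; cmovc': kernel addresses become the guard page base,
	 * i.e. min(ptr, USER_PTR_MAX), as in the sketch above */
	ret = ptr < USER_PTR_MAX ? ptr : USER_PTR_MAX;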

Signed-off-by: David Laight <david.laight@...lab.com>
---

I've built and run a kernel with it - so not broken!

There probably ought to be a follow-up patch to rename it to bound_user_access()
before there are too many users.

 arch/x86/include/asm/uaccess_64.h | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index b0a887209400..4cdace8c93b3 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -57,19 +57,22 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
 	((__force unsigned long)(x) <= runtime_const_ptr(USER_PTR_MAX))
 
 /*
- * Masking the user address is an alternative to a conditional
- * user_access_begin that can avoid the fencing. This only works
- * for dense accesses starting at the address.
+ * Bound a user address to the base of the guard page.
+ * This can be used to avoid speculative accesses following access_ok(),
+ * or as an alternative to a conditional user_access_begin.
+ * Both without expensive fencing.
+ * Valid provided the accesses are 'reasonably sequential'
+ * (no jumps of a page size).
  */
 static inline void __user *mask_user_address(const void __user *ptr)
 {
-	unsigned long mask;
-	asm("cmp %1,%0\n\t"
-	    "sbb %0,%0"
-		:"=r" (mask)
-		:"r" (ptr),
-		 "0" (runtime_const_ptr(USER_PTR_MAX)));
-	return (__force void __user *)(mask | (__force unsigned long)ptr);
+	void __user *bounded = (__force void __user *)runtime_const_ptr(USER_PTR_MAX);
+
+	asm("cmp %0,%1\n\t"
+	    "cmovc %1,%0"
+		:"+r" (bounded)
+		:"r" (ptr));
+	return bounded;
 }
 #define masked_user_access_begin(x) ({				\
 	__auto_type __masked_ptr = (x);				\
-- 
2.17.1

