Message-ID: <20250910161820.247f526a@gandalf.local.home>
Date: Wed, 10 Sep 2025 16:18:20 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: Linux Trace Kernel <linux-trace-kernel@...r.kernel.org>, Linus Torvalds
 <torvalds@...ux-foundation.org>, linux-mm@...ck.org, Kees Cook
 <keescook@...omium.org>, Aleksa Sarai <cyphar@...har.com>, Al Viro
 <viro@...IV.linux.org.uk>
Subject: [PATCH] uaccess: Comment that copy to/from inatomic requires page
 fault disabled

From: Steven Rostedt <rostedt@...dmis.org>

The functions __copy_from_user_inatomic() and __copy_to_user_inatomic()
both require that either the user space memory is pinned, or that page
faults are disabled when they are called. If page faults are not disabled
and the memory is not resident, handling the fault on the read or write
may cause the kernel to schedule, which must not happen in an atomic
context.
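To illustrate the rule the new comments document, here is a minimal
kernel-style sketch (the helper name copy_user_word_atomic() is
hypothetical, not part of this patch) of the pagefault_disable() pattern
around __copy_from_user_inatomic():

```c
#include <linux/uaccess.h>

/*
 * Hypothetical helper: copy one word from user space while in atomic
 * context. The caller must already have validated @src with access_ok().
 * Page faults are disabled so a non-resident page makes the copy fail
 * fast (returning bytes not copied) instead of sleeping in the fault
 * handler.
 */
static unsigned long copy_user_word_atomic(unsigned long *dst,
					   const unsigned long __user *src)
{
	unsigned long ret;

	pagefault_disable();
	ret = __copy_from_user_inatomic(dst, src, sizeof(*dst));
	pagefault_enable();

	return ret;	/* 0 on success, else number of bytes not copied */
}
```

Pinning the user pages (e.g. via pin_user_pages()) is the alternative the
comment mentions; then the fault can never occur and the
pagefault_disable()/pagefault_enable() pair is unnecessary.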

Link: https://lore.kernel.org/all/20250819105152.2766363-1-luogengkun@huaweicloud.com/

Signed-off-by: Steven Rostedt (Google) <rostedt@...dmis.org>
---
 include/linux/uaccess.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 1beb5b395d81..add99fa9b656 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -86,6 +86,12 @@
  * as usual) and both source and destination can trigger faults.
  */
 
+/*
+ * __copy_from_user_inatomic() is safe to use in an atomic context but
+ * the user space memory must either be pinned in memory, or page faults
+ * must be disabled, otherwise the page fault handling may cause the function
+ * to schedule.
+ */
 static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
@@ -124,7 +130,8 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
  * Copy data from kernel space to user space.  Caller must check
  * the specified block with access_ok() before calling this function.
  * The caller should also make sure he pins the user space address
- * so that we don't result in page fault and sleep.
+ * or call page_fault_disable() so that we don't result in a page fault
+ * and sleep.
  */
 static __always_inline __must_check unsigned long
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
-- 
2.50.1

