Message-ID: <20241203023159.219355-1-zuoze1@huawei.com>
Date: Tue, 3 Dec 2024 10:31:59 +0800
From: Ze Zuo <zuoze1@...wei.com>
To: <gustavoars@...nel.org>, <akpm@...ux-foundation.org>
CC: <linux-hardening@...r.kernel.org>, <linux-mm@...ck.org>,
	<willy@...radead.org>, <keescook@...omium.org>, <urezki@...il.com>,
	<zuoze1@...wei.com>, <wangkefeng.wang@...wei.com>
Subject: [PATCH -next] mm: usercopy: add a debugfs interface to bypass the vmalloc check

Commit 0aef499f3172 ("mm/usercopy: Detect vmalloc overruns") introduced a
vmalloc check for usercopy. However, in subsystems such as networking, when
memory allocated with vmalloc() or vmap() is later copied with functions
like copy_to_iter()/copy_from_iter(), the check is triggered on every copy.
This adds overhead on the copy path, notably the cost of searching the
vmap red-black tree in find_vmap_area().
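
For illustration, here is a minimal sketch of the path that triggers the
check. This is a hypothetical test module, not part of this patch, and it
assumes a tree where iov_iter_kvec() takes ITER_DEST (v6.1+). Every
copy_to_iter() from a vmalloc'd buffer goes through check_copy_size() ->
__check_object_size() -> check_heap_object(), which calls find_vmap_area():

#include <linux/module.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/uio.h>

/*
 * Runtime-settable so the copy size is not a compile-time constant
 * (check_object_size() is skipped for constant sizes).
 */
static unsigned long len = PAGE_SIZE;
module_param(len, ulong, 0444);

static int __init vmcheck_demo_init(void)
{
	struct iov_iter iter;
	struct kvec kv;
	void *src = vmalloc(len);
	char *dst = kmalloc(len, GFP_KERNEL);
	size_t copied = 0;

	if (src && dst) {
		kv.iov_base = dst;
		kv.iov_len = len;
		iov_iter_kvec(&iter, ITER_DEST, &kv, 1, len);

		/*
		 * With CONFIG_HARDENED_USERCOPY=y, this copy runs
		 * check_copy_size() -> __check_object_size() ->
		 * check_heap_object(); since src is a vmalloc address,
		 * that means a find_vmap_area() rb-tree lookup.
		 */
		copied = copy_to_iter(src, len, &iter);
	}
	pr_info("vmcheck_demo: copied %zu bytes\n", copied);

	kfree(dst);
	vfree(src);
	return 0;
}

static void __exit vmcheck_demo_exit(void)
{
}

module_init(vmcheck_demo_init);
module_exit(vmcheck_demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Demo: copy_to_iter() from vmalloc memory hits the usercopy vmalloc check");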

We found that with this commit applied, network bandwidth in an XDP
scenario dropped significantly, from 25 Gbit/s to 8 Gbit/s (note that
hardened_usercopy is enabled by default).

To address this, add a debugfs interface that allows selectively enabling
or disabling the vmalloc check at run time, depending on the use case.

By default, the vmalloc check for usercopy is enabled.

To disable the vmalloc check:
        echo Y > /sys/kernel/debug/bypass_usercopy_vmalloc_check

After executing the above command, the XDP performance returns to 25
Gbits/sec.
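
The knob reads back the current setting and, per standard
debugfs_create_bool() semantics (assumed here, not shown in the diff),
also accepts N/1/0:

        cat /sys/kernel/debug/bypass_usercopy_vmalloc_check

To re-enable the vmalloc check:

        echo N > /sys/kernel/debug/bypass_usercopy_vmalloc_check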

Signed-off-by: Ze Zuo <zuoze1@...wei.com>
---
 mm/usercopy.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 83c164aba6e0..ef1eb23b2273 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -21,6 +21,7 @@
 #include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/jump_label.h>
+#include <linux/debugfs.h>
 #include <asm/sections.h>
 #include "slab.h"
 
@@ -159,6 +160,8 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
+static bool bypass_vmalloc_check __read_mostly;
+
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -174,8 +177,13 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	}
 
 	if (is_vmalloc_addr(ptr) && !pagefault_disabled()) {
-		struct vmap_area *area = find_vmap_area(addr);
+		struct vmap_area *area;
+
+		/* Bypass it since searching the kernel VM area is slow */
+		if (bypass_vmalloc_check)
+			return;
 
+		area = find_vmap_area(addr);
 		if (!area)
 			usercopy_abort("vmalloc", "no area", to_user, 0, n);
 
@@ -271,6 +279,9 @@ static int __init set_hardened_usercopy(void)
 {
 	if (enable_checks == false)
 		static_branch_enable(&bypass_usercopy_checks);
+	else
+		debugfs_create_bool("bypass_usercopy_vmalloc_check", 0600,
+				    NULL, &bypass_vmalloc_check);
 	return 1;
 }
 
-- 
2.25.1

