Message-Id: <20230114005408.never.756-kees@kernel.org>
Date: Fri, 13 Jan 2023 16:54:12 -0800
From: Kees Cook <keescook@...omium.org>
To: Andrew Morton <akpm@...ux-foundation.org>,
Guenter Roeck <linux@...ck-us.net>
Cc: Kees Cook <keescook@...omium.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
David Gow <davidgow@...gle.com>,
Nathan Chancellor <nathan@...nel.org>,
linux-hardening@...r.kernel.org, Vlastimil Babka <vbabka@...e.cz>,
Daniel Latypov <dlatypov@...gle.com>,
Josh Poimboeuf <jpoimboe@...nel.org>,
Geert Uytterhoeven <geert+renesas@...der.be>,
Miguel Ojeda <ojeda@...nel.org>,
Isabella Basso <isabbasso@...eup.net>,
Dan Williams <dan.j.williams@...el.com>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
linux-kernel@...r.kernel.org
Subject: [PATCH] kunit: memcpy: Split slow memcpy tests into MEMCPY_SLOW_KUNIT_TEST
Since the long memcpy tests may stall a system for tens of seconds
in virtualized architecture environments, split those tests off under
CONFIG_MEMCPY_SLOW_KUNIT_TEST so they can be separately disabled.
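
As a rough example (not part of this patch; the exact tooling and file
layout depend on how you run KUnit), a minimal .kunitconfig that still
builds the memcpy tests but leaves the slow ones disabled might look
like:

  CONFIG_KUNIT=y
  CONFIG_MEMCPY_KUNIT_TEST=y
  # CONFIG_MEMCPY_SLOW_KUNIT_TEST is not set

Because the new option defaults to KUNIT_ALL_TESTS, simply leaving it
out (while KUNIT_ALL_TESTS stays off) has the same effect; a recent
enough kunit.py can also do this on the command line with
--kconfig_add CONFIG_MEMCPY_SLOW_KUNIT_TEST=n.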
Reported-by: Guenter Roeck <linux@...ck-us.net>
Link: https://lore.kernel.org/lkml/20221226195206.GA2626419@roeck-us.net
Reviewed-by: Nick Desaulniers <ndesaulniers@...gle.com>
Reviewed-and-tested-by: Guenter Roeck <linux@...ck-us.net>
Reviewed-by: David Gow <davidgow@...gle.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>
Cc: Nathan Chancellor <nathan@...nel.org>
Cc: linux-hardening@...r.kernel.org
Signed-off-by: Kees Cook <keescook@...omium.org>
---
v2: fix tristate to bool
v1: https://lore.kernel.org/lkml/20230107040203.never.112-kees@kernel.org
---
lib/Kconfig.debug | 9 +++++++++
lib/memcpy_kunit.c | 15 ++++++++++++---
2 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index c2c78d0e761c..f90637171453 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -2621,6 +2621,15 @@ config MEMCPY_KUNIT_TEST
	  If unsure, say N.
+config MEMCPY_SLOW_KUNIT_TEST
+	bool "Include exhaustive memcpy tests" if !KUNIT_ALL_TESTS
+	depends on MEMCPY_KUNIT_TEST
+	default KUNIT_ALL_TESTS
+	help
+	  Some memcpy tests are quite exhaustive in checking for overlaps
+	  and bit ranges. These can be very slow, so they are split out
+	  as a separate config.
+
config IS_SIGNED_TYPE_KUNIT_TEST
	tristate "Test is_signed_type() macro" if !KUNIT_ALL_TESTS
	depends on KUNIT
diff --git a/lib/memcpy_kunit.c b/lib/memcpy_kunit.c
index 89128551448d..5a545e1b5dbb 100644
--- a/lib/memcpy_kunit.c
+++ b/lib/memcpy_kunit.c
@@ -307,8 +307,12 @@ static void set_random_nonzero(struct kunit *test, u8 *byte)
	}
}
-static void init_large(struct kunit *test)
+static int init_large(struct kunit *test)
{
+	if (!IS_ENABLED(CONFIG_MEMCPY_SLOW_KUNIT_TEST)) {
+		kunit_skip(test, "Slow test skipped. Enable with CONFIG_MEMCPY_SLOW_KUNIT_TEST=y");
+		return -EBUSY;
+	}
	/* Get many bit patterns. */
	get_random_bytes(large_src, ARRAY_SIZE(large_src));
@@ -319,6 +323,8 @@ static void init_large(struct kunit *test)
	/* Explicitly zero the entire destination. */
	memset(large_dst, 0, ARRAY_SIZE(large_dst));
+
+	return 0;
}
/*
@@ -327,7 +333,9 @@ static void init_large(struct kunit *test)
*/
static void copy_large_test(struct kunit *test, bool use_memmove)
{
-	init_large(test);
+
+	if (init_large(test))
+		return;
	/* Copy a growing number of non-overlapping bytes ... */
	for (int bytes = 1; bytes <= ARRAY_SIZE(large_src); bytes++) {
@@ -472,7 +480,8 @@ static void memmove_overlap_test(struct kunit *test)
	static const int bytes_start = 1;
	static const int bytes_end = ARRAY_SIZE(large_src) + 1;
-	init_large(test);
+	if (init_large(test))
+		return;
	/* Copy a growing number of overlapping bytes ... */
	for (int bytes = bytes_start; bytes < bytes_end;
--
2.34.1