Message-ID: <CA+fCnZeHdUiQ-k=Cy4bY-DKa7pFow6GfkTsCa2rsYTJNSXYGhw@mail.gmail.com>
Date: Thu, 15 Jan 2026 04:56:18 +0100
From: Andrey Konovalov <andreyknvl@...il.com>
To: Andrey Ryabinin <ryabinin.a.a@...il.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>, Maciej Żenczykowski <maze@...gle.com>,
Maciej Wieczor-Retman <m.wieczorretman@...me>, Alexander Potapenko <glider@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, Vincenzo Frascino <vincenzo.frascino@....com>,
kasan-dev@...glegroups.com, Uladzislau Rezki <urezki@...il.com>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [PATCH 2/2] mm/kasan/kunit: extend vmalloc OOB tests to cover vrealloc()

On Tue, Jan 13, 2026 at 8:16 PM Andrey Ryabinin <ryabinin.a.a@...il.com> wrote:
>
> Extend the vmalloc_oob() test to validate OOB detection after
> resizing vmalloc allocations with vrealloc().
>
> The test now verifies that KASAN correctly poisons and unpoisons vmalloc
> memory when allocations are shrunk and expanded, ensuring OOB accesses
> are reliably detected after each resize.
>
> Signed-off-by: Andrey Ryabinin <ryabinin.a.a@...il.com>
> ---
> mm/kasan/kasan_test_c.c | 50 ++++++++++++++++++++++++++++-------------
> 1 file changed, 35 insertions(+), 15 deletions(-)
>
> diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
> index 2cafca31b092..cc8fc479e13a 100644
> --- a/mm/kasan/kasan_test_c.c
> +++ b/mm/kasan/kasan_test_c.c
> @@ -1840,6 +1840,29 @@ static void vmalloc_helpers_tags(struct kunit *test)
> vfree(ptr);
> }
>
> +static void vmalloc_oob_helper(struct kunit *test, char *v_ptr, size_t size)
> +{
> + /*
> + * We have to be careful not to hit the guard page in vmalloc tests.
> + * The MMU will catch that and crash us.
> + */
> +
> + /* Make sure in-bounds accesses are valid. */
> + v_ptr[0] = 0;
> + v_ptr[size - 1] = 0;
> +
> + /*
> + * An unaligned access past the requested vmalloc size.
> + * Only generic KASAN can precisely detect these.
> + */
> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> + KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
> +
> + /* An aligned access into the first out-of-bounds granule. */
> + size = round_up(size, KASAN_GRANULE_SIZE);
> + KUNIT_EXPECT_KASAN_FAIL_READ(test, ((volatile char *)v_ptr)[size]);
> +}
> +
> static void vmalloc_oob(struct kunit *test)
> {
> char *v_ptr, *p_ptr;
> @@ -1856,24 +1879,21 @@ static void vmalloc_oob(struct kunit *test)
>
> OPTIMIZER_HIDE_VAR(v_ptr);
>
> - /*
> - * We have to be careful not to hit the guard page in vmalloc tests.
> - * The MMU will catch that and crash us.
> - */
> + vmalloc_oob_helper(test, v_ptr, size);
>
> - /* Make sure in-bounds accesses are valid. */
> - v_ptr[0] = 0;
> - v_ptr[size - 1] = 0;
> + size--;
Could do size -= KASAN_GRANULE_SIZE + 1 here: I think this would also
allow checking the whole-granule poisoning/unpoisoning logic for the
tag-based modes.
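Untested sketch of what I mean (reusing KASAN_GRANULE_SIZE, which the
new helper already relies on):

	/*
	 * Shrink so that a whole previously-valid granule gets
	 * poisoned, not just the unaligned tail: the tag-based modes
	 * only track poisoning at granule granularity.
	 */
	size -= KASAN_GRANULE_SIZE + 1;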
> + v_ptr = vrealloc(v_ptr, size, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
>
> - /*
> - * An unaligned access past the requested vmalloc size.
> - * Only generic KASAN can precisely detect these.
> - */
> - if (IS_ENABLED(CONFIG_KASAN_GENERIC))
> - KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)v_ptr)[size]);
> + OPTIMIZER_HIDE_VAR(v_ptr);
>
> - /* An aligned access into the first out-of-bounds granule. */
> - KUNIT_EXPECT_KASAN_FAIL_READ(test, ((volatile char *)v_ptr)[size + 5]);
> + vmalloc_oob_helper(test, v_ptr, size);
> +
> + size += 2;
And then e.g. size += 2 * KASAN_GRANULE_SIZE + 2 here.
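I.e., something like (again untested):

	/*
	 * Grow back so that a whole poisoned granule has to be
	 * unpoisoned again, exercising the same paths in the other
	 * direction.
	 */
	size += 2 * KASAN_GRANULE_SIZE + 2;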
> + v_ptr = vrealloc(v_ptr, size, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, v_ptr);
> +
> + vmalloc_oob_helper(test, v_ptr, size);
>
> /* Check that in-bounds accesses to the physical page are valid. */
> page = vmalloc_to_page(v_ptr);
> --
> 2.52.0
>
Reviewed-by: Andrey Konovalov <andreyknvl@...il.com>