Message-ID: <7tpdpvjdcfcujdlkartvbx5m3ngqanwa5brclxnytsrzcvqc2a@n2mnvjtmpzuv>
Date: Thu, 4 Dec 2025 16:19:41 +0100
From: Maciej Wieczor-Retman <maciej.wieczor-retman@...el.com>
To: Jiayuan Chen <jiayuan.chen@...ux.dev>
CC: Maciej Wieczor-Retman <m.wieczorretman@...me>, <linux-mm@...ck.org>,
<syzbot+997752115a851cb0cf36@...kaller.appspotmail.com>, Andrey Ryabinin
<ryabinin.a.a@...il.com>, Alexander Potapenko <glider@...gle.com>, "Andrey
Konovalov" <andreyknvl@...il.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>, Andrew Morton
<akpm@...ux-foundation.org>, Uladzislau Rezki <urezki@...il.com>, "Danilo
Krummrich" <dakr@...nel.org>, Kees Cook <kees@...nel.org>,
<kasan-dev@...glegroups.com>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v1] mm/kasan: Fix incorrect unpoisoning in vrealloc for
KASAN
On 2025-12-04 at 14:38:12 +0000, Jiayuan Chen wrote:
>December 4, 2025 at 21:55, "Maciej Wieczor-Retman" <m.wieczorretman@...me> wrote:
>
>
>>
>> On 2025-12-03 at 02:05:11 +0000, Jiayuan Chen wrote:
>>
>> >
>> > December 3, 2025 at 04:48, "Maciej Wieczor-Retman" <maciej.wieczor-retman@...el.com> wrote:
>> >
>> > >
>> > > Hi, I'm working on [1]. As Andrew pointed out to me, the patches are quite
>> > > similar. Would you mind if reuse_tag were an actual tag value instead of
>> > > just a bool toggling the use of kasan_random_tag()?
>> > >
>> > > I tested the problem I'm seeing with your patch, and the tags end up being
>> > > reset. That's because the vms[area] pointers that I want to unpoison don't
>> > > have a tag set, but generating a different random tag for each vms[] pointer
>> > > crashes the kernel down the line. So __kasan_unpoison_vmalloc() needs to be
>> > > called on each one, but with the same tag.
>> > >
>> > > Admittedly, I noticed my series also just resets the tags right now, but
>> > > I'm working to correct that at the moment. I can send a fixed version
>> > > tomorrow. I just wanted to ask whether having __kasan_unpoison_vmalloc()
>> > > set an actual predefined tag is a problem from your point of view.
>> > >
>> > > [1] https://lore.kernel.org/all/cover.1764685296.git.m.wieczorretman@pm.me/
>> > >
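(Roughly the pattern in question, as a sketch only -- hypothetical code, not
taken from either series, assuming __kasan_unpoison_vmalloc() accepted a
predefined tag value instead of always deriving a random one; vms[] and
nr_vms are as in pcpu_get_vm_areas():)

	int area;
	u8 tag = kasan_random_tag();

	/*
	 * Unpoison every sub-area of the split allocation with the same
	 * tag so all vms[] pointers agree; a fresh random tag per area
	 * is what crashes later.
	 */
	for (area = 0; area < nr_vms; area++)
		vms[area]->addr = __kasan_unpoison_vmalloc(vms[area]->addr,
				vms[area]->size, KASAN_VMALLOC_PROT_NORMAL,
				tag);
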
>> > Hi Maciej,
>> >
>> > It seems we're focusing on different issues, but feel free to reuse or modify
>> > 'reuse_tag'; it's intended to preserve the tag within one 'vma'.
>> >
>> > I'd also be happy to help reproduce and test your changes, to make sure the
>> > issue I encountered doesn't regress once you send a patch based on mine.
>> >
>> > Thanks.
>> >
>> After reading Andrey's comments on your patches and mine, I tried applying all
>> the changes to test the flag approach. Now my patches don't modify any
>> vrealloc-related code. I came up with something like the code below based on
>> your patch. I just tested it and it works fine on my end; does it look okay to
>> you?
>>
...
Thanks for letting me know, glad it's working :)
In that case I'll go ahead and post my two patches with the vmalloc flag
addition. And thanks for pasting your code here; I suppose mine won't conflict
with yours, but I'll check before sending.
Kind regards
Maciej Wieczór-Retman
>I think I don't need the KEEP_TAG flag anymore; the following patch works well,
>and all KASAN tests run successfully with
>CONFIG_KASAN_SW_TAGS/CONFIG_KASAN_HW_TAGS/CONFIG_KASAN_GENERIC.
>
>
>diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
>index 1c373cc4b3fa..8b819a9b2a27 100644
>--- a/mm/kasan/hw_tags.c
>+++ b/mm/kasan/hw_tags.c
>@@ -394,6 +394,11 @@ void __kasan_poison_vmalloc(const void *start, unsigned long size)
> * The physical pages backing the vmalloc() allocation are poisoned
> * through the usual page_alloc paths.
> */
>+ if (!is_vmalloc_or_module_addr(start))
>+ return;
>+
>+ size = round_up(size, KASAN_GRANULE_SIZE);
>+ kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
> }
>
> #endif
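(For context on the round_up() above: kasan_poison() expects a granule-aligned
size, hence the padding -- e.g., with the 16-byte MTE granule that HW_TAGS
uses, a 24-byte area is poisoned as 32 bytes. Hypothetical numbers:)

	/* KASAN_GRANULE_SIZE == 16 under CONFIG_KASAN_HW_TAGS (MTE) */
	size = round_up(24, KASAN_GRANULE_SIZE);	/* -> 32 */
	kasan_poison(start, size, KASAN_VMALLOC_INVALID, false);
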
>diff --git a/mm/kasan/kasan_test_c.c b/mm/kasan/kasan_test_c.c
>index 2cafca31b092..a5f683c3abde 100644
>--- a/mm/kasan/kasan_test_c.c
>+++ b/mm/kasan/kasan_test_c.c
>@@ -1840,6 +1840,83 @@ static void vmalloc_helpers_tags(struct kunit *test)
> vfree(ptr);
> }
>
>+static void vrealloc_helpers(struct kunit *test, bool tags)
>+{
>+ char *ptr;
>+ size_t size = PAGE_SIZE / 2 - KASAN_GRANULE_SIZE - 5;
>+
>+ if (!kasan_vmalloc_enabled())
>+ kunit_skip(test, "Test requires kasan.vmalloc=on");
>+
>+ ptr = (char *)vmalloc(size);
>+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
>+
>+ OPTIMIZER_HIDE_VAR(ptr);
>+
>+ size += PAGE_SIZE / 2;
>+ ptr = vrealloc(ptr, size, GFP_KERNEL);
>+ /* Check that the returned pointer is tagged. */
>+ if (tags) {
>+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
>+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
>+ }
>+ /* Make sure in-bounds accesses are valid. */
>+ ptr[0] = 0;
>+ ptr[size - 1] = 0;
>+
>+ /* Make sure exported vmalloc helpers handle tagged pointers. */
>+ KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr));
>+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr));
>+
>+ size -= PAGE_SIZE / 2;
>+ ptr = vrealloc(ptr, size, GFP_KERNEL);
>+
>+ /* Check that the returned pointer is tagged. */
>+ if (tags) {
>+ KUNIT_EXPECT_GE(test, (u8)get_tag(ptr), (u8)KASAN_TAG_MIN);
>+ KUNIT_EXPECT_LT(test, (u8)get_tag(ptr), (u8)KASAN_TAG_KERNEL);
>+ }
>+
>+ /* Make sure exported vmalloc helpers handle tagged pointers. */
>+ KUNIT_ASSERT_TRUE(test, is_vmalloc_addr(ptr));
>+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vmalloc_to_page(ptr));
>+
>+ /* This access must cause a KASAN report. */
>+ KUNIT_EXPECT_KASAN_FAIL_READ(test, ((volatile char *)ptr)[size + 5]);
>+
>+#if !IS_MODULE(CONFIG_KASAN_KUNIT_TEST)
>+ {
>+ int rv;
>+
>+ /* Make sure vrealloc'ed memory permissions can be changed. */
>+ rv = set_memory_ro((unsigned long)ptr, 1);
>+ KUNIT_ASSERT_GE(test, rv, 0);
>+ rv = set_memory_rw((unsigned long)ptr, 1);
>+ KUNIT_ASSERT_GE(test, rv, 0);
>+ }
>+#endif
>+
>+ vfree(ptr);
>+}
>+
>+static void vrealloc_helpers_tags(struct kunit *test)
>+{
>+ /* This test is intended for tag-based modes. */
>+ KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);
>+
>+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
>+ vrealloc_helpers(test, true);
>+}
>+
>+static void vrealloc_helpers_generic(struct kunit *test)
>+{
>+ /* This test is intended for the generic mode. */
>+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_GENERIC);
>+
>+ KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_KASAN_VMALLOC);
>+ vrealloc_helpers(test, false);
>+}
>+
> static void vmalloc_oob(struct kunit *test)
> {
> char *v_ptr, *p_ptr;
>@@ -2241,6 +2318,8 @@ static struct kunit_case kasan_kunit_test_cases[] = {
> KUNIT_CASE_SLOW(kasan_atomics),
> KUNIT_CASE(vmalloc_helpers_tags),
> KUNIT_CASE(vmalloc_oob),
>+ KUNIT_CASE(vrealloc_helpers_tags),
>+ KUNIT_CASE(vrealloc_helpers_generic),
> KUNIT_CASE(vmap_tags),
> KUNIT_CASE(vm_map_ram_tags),
> KUNIT_CASE(match_all_not_assigned),
>diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>index 798b2ed21e46..9ba2e8a346d6 100644
>--- a/mm/vmalloc.c
>+++ b/mm/vmalloc.c
>@@ -4128,6 +4128,7 @@ EXPORT_SYMBOL(vzalloc_node_noprof);
> void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align,
> gfp_t flags, int nid)
> {
>+ kasan_vmalloc_flags_t kasan_flags;
> struct vm_struct *vm = NULL;
> size_t alloced_size = 0;
> size_t old_size = 0;
>@@ -4158,25 +4159,26 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
> goto need_realloc;
> }
>
>+ kasan_flags = KASAN_VMALLOC_PROT_NORMAL | KASAN_VMALLOC_VM_ALLOC;
> /*
> * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
> * would be a good heuristic for when to shrink the vm_area?
> */
>- if (size <= old_size) {
>+ if (p && size <= old_size) {
> /* Zero out "freed" memory, potentially for future realloc. */
> if (want_init_on_free() || want_init_on_alloc(flags))
> memset((void *)p + size, 0, old_size - size);
> vm->requested_size = size;
>- kasan_poison_vmalloc(p + size, old_size - size);
>+ kasan_poison_vmalloc(p, alloced_size);
>+ p = kasan_unpoison_vmalloc(p, size, kasan_flags);
> return (void *)p;
> }
>
> /*
> * We already have the bytes available in the allocation; use them.
> */
>- if (size <= alloced_size) {
>- kasan_unpoison_vmalloc(p + old_size, size - old_size,
>- KASAN_VMALLOC_PROT_NORMAL);
>+ if (p && size <= alloced_size) {
>+ p = kasan_unpoison_vmalloc(p, size, kasan_flags);
> /*
> * No need to zero memory here, as unused memory will have
> * already been zeroed at initial allocation time or during