Message-ID: <CAOUHufZk+3xCqK38CuVdWg_ZiWaLyke+Y+=CYJpraET6nKQ=yQ@mail.gmail.com>
Date: Sat, 11 Jun 2022 14:11:45 -0600
From: Yu Zhao <yuzhao@...gle.com>
To: Liam Howlett <liam.howlett@...cle.com>
Cc: Qian Cai <quic_qiancai@...cinc.com>,
"maple-tree@...ts.infradead.org" <maple-tree@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v9 28/69] mm/mmap: reorganize munmap to use maple states
On Mon, Jun 6, 2022 at 10:40 AM Qian Cai <quic_qiancai@...cinc.com> wrote:
>
> On Mon, Jun 06, 2022 at 04:19:52PM +0000, Liam Howlett wrote:
> > Does your syscall fuzzer create a reproducer? This looks like arm64
> > and says 5.18.0-next-20220603 again. Was this bisected to the patch
> > above?
>
> This was triggered by running the fuzzer over the weekend.
>
> $ trinity -C 160
>
> No bisection was done. It was only brought up here because the trace
> pointed to do_mas_munmap() which was introduced here.
Liam,
I'm getting a similar crash on arm64 -- here the allocation path is
madvise(), not mprotect(). Please take a look.
Thanks.
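For reference, the syscall pattern the two traces below suggest is roughly
the following. This is only a user-space sketch; the mapping size, the
sub-range being unmapped, and the MADV_HUGEPAGE flavor are guesses on my
part, not a reproducer:

#include <sys/mman.h>
#include <stdlib.h>

int main(void)
{
	size_t len = 16UL << 20;	/* arbitrary 16 MiB anonymous mapping */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	/* allocation trace: do_madvise() -> vma_merge() -> mas_preallocate() */
	madvise(p, len, MADV_HUGEPAGE);

	/* free trace: munmap() -> do_mas_munmap() -> spanning store
	 * (unmapping a middle sub-range here is a guess)
	 */
	munmap(p + (4UL << 20), 8UL << 20);

	return 0;
}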
==================================================================
BUG: KASAN: double-free or invalid-free in kmem_cache_free_bulk+0x230/0x3b0
Pointer tag: [0c], memory tag: [fe]
CPU: 2 PID: 8320 Comm: stress-ng Tainted: G B W 5.19.0-rc1-lockdep+ #3
Call trace:
dump_backtrace+0x1a0/0x200
show_stack+0x24/0x30
dump_stack_lvl+0x7c/0xa0
print_report+0x15c/0x524
kasan_report_invalid_free+0x64/0x84
____kasan_slab_free+0x150/0x184
__kasan_slab_free+0x14/0x24
slab_free_freelist_hook+0x100/0x1ac
kmem_cache_free_bulk+0x230/0x3b0
mas_destroy+0x10d8/0x1270
mas_store_prealloc+0xb8/0xec
do_mas_align_munmap+0x398/0x694
do_mas_munmap+0xf8/0x118
__vm_munmap+0x154/0x1e0
__arm64_sys_munmap+0x44/0x54
el0_svc_common+0xfc/0x1cc
do_el0_svc_compat+0x38/0x5c
el0_svc_compat+0x68/0xf4
el0t_32_sync_handler+0xc0/0xf0
el0t_32_sync+0x190/0x194
Allocated by task 8437:
kasan_set_track+0x4c/0x7c
__kasan_slab_alloc+0x84/0xa8
kmem_cache_alloc_bulk+0x300/0x408
mas_alloc_nodes+0x198/0x294
mas_preallocate+0x8c/0x110
__vma_adjust+0x174/0xc88
vma_merge+0x2e4/0x300
do_madvise+0x504/0xd20
__arm64_sys_madvise+0x54/0x64
el0_svc_common+0xfc/0x1cc
do_el0_svc_compat+0x38/0x5c
el0_svc_compat+0x68/0xf4
el0t_32_sync_handler+0xc0/0xf0
el0t_32_sync+0x190/0x194
Freed by task 8320:
kasan_set_track+0x4c/0x7c
kasan_set_free_info+0x2c/0x38
____kasan_slab_free+0x13c/0x184
__kasan_slab_free+0x14/0x24
slab_free_freelist_hook+0x100/0x1ac
kmem_cache_free+0x11c/0x264
mt_destroy_walk+0x6d8/0x714
mas_wmb_replace+0x9d4/0xa68
mas_spanning_rebalance+0x1af0/0x1d2c
mas_wr_spanning_store+0x908/0x964
mas_wr_store_entry+0x53c/0x5c0
mas_store_prealloc+0x88/0xec
do_mas_align_munmap+0x398/0x694
do_mas_munmap+0xf8/0x118
__vm_munmap+0x154/0x1e0
__arm64_sys_munmap+0x44/0x54
el0_svc_common+0xfc/0x1cc
do_el0_svc_compat+0x38/0x5c
el0_svc_compat+0x68/0xf4
el0t_32_sync_handler+0xc0/0xf0
el0t_32_sync+0x190/0x194
The buggy address belongs to the object at ffffff808b5f0a00
which belongs to the cache maple_node of size 256
The buggy address is located 0 bytes inside of
256-byte region [ffffff808b5f0a00, ffffff808b5f0b00)
The buggy address belongs to the physical page:
page:fffffffe022d7c00 refcount:1 mapcount:0 mapping:0000000000000000 index:0xcffff808b5f0a00 pfn:0x10b5f0
head:fffffffe022d7c00 order:2 compound_mapcount:0 compound_pincount:0
flags: 0x8000000000010200(slab|head|zone=2|kasantag=0x0)
raw: 8000000000010200 fffffffe031a8608 fffffffe021a3608 caffff808002c800
raw: 0cffff808b5f0a00 0000000000150013 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
ffffff808b5f0800: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
ffffff808b5f0900: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
>ffffff808b5f0a00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
^
ffffff808b5f0b00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
ffffff808b5f0c00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
==================================================================
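FWIW, reading the two traces together: the invalid free is hit in
kmem_cache_free_bulk() from mas_destroy(), on an object that appears to have
already been returned to the maple_node cache via kmem_cache_free() in
mt_destroy_walk(). As a purely hypothetical illustration of that bug class
(made-up cache and function, not the maple-tree code itself):

#include <linux/slab.h>

static void double_free_demo(void)
{
	struct kmem_cache *cache;
	void *objs[1];

	/* stand-in for the 256-byte maple_node cache named in the report */
	cache = kmem_cache_create("demo_cache", 256, 0, SLAB_HWCACHE_ALIGN, NULL);
	if (!cache)
		return;

	objs[0] = kmem_cache_alloc(cache, GFP_KERNEL);
	if (!objs[0])
		goto out;

	kmem_cache_free(cache, objs[0]);	/* first free, as in mt_destroy_walk() */
	kmem_cache_free_bulk(cache, 1, objs);	/* second free: KASAN flags this one */
out:
	kmem_cache_destroy(cache);
}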