Message-ID: <20220416151923.ig5zavuptjsufm3d@revolver>
Date: Sat, 16 Apr 2022 15:19:42 +0000
From: Liam Howlett <liam.howlett@...cle.com>
To: Yu Zhao <yuzhao@...gle.com>
CC: Andrew Morton <akpm@...ux-foundation.org>,
"maple-tree@...ts.infradead.org" <maple-tree@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v7 00/70] Introducing the Maple Tree

* Yu Zhao <yuzhao@...gle.com> [220416 00:10]:
> On Fri, Apr 15, 2022 at 7:03 PM Liam Howlett <liam.howlett@...cle.com> wrote:
> >
> > * Yu Zhao <yuzhao@...gle.com> [220415 03:11]:
> > > On Thu, Apr 14, 2022 at 12:19:11PM -0700, Andrew Morton wrote:
> > > > On Thu, 14 Apr 2022 17:15:26 +0000 Liam Howlett <liam.howlett@...cle.com> wrote:
> > > >
> > > > > > Also I noticed, for the end address to walk_page_range(), Matthew used
> > > > > > -1 and you used ULONG_MAX in the maple branch; Andrew used TASK_SIZE
> > > > > > below. Having a single value throughout would be great.
> > > > >
> > > > > I think ULONG_MAX would be best, we should probably change the below to
> > > > > ULONG_MAX.
> > > >
> > > > I switched it to ULONG_MAX.
> > > >
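
(For illustration, the convention settled on above looks something like
the sketch below; only walk_page_range() and ULONG_MAX come from the
discussion, while the callback and function names are made up.)

#include <linux/pagewalk.h>

/* Hypothetical callback: visit each PTE and keep walking. */
static int example_pte_entry(pte_t *pte, unsigned long addr,
			     unsigned long next, struct mm_walk *walk)
{
	return 0;
}

static const struct mm_walk_ops example_walk_ops = {
	.pte_entry	= example_pte_entry,
};

static void example_walk(struct mm_struct *mm)
{
	/* walk_page_range() expects mmap_lock to be held. */
	mmap_read_lock(mm);
	/* ULONG_MAX as the end address, not -1 or TASK_SIZE. */
	walk_page_range(mm, 0, ULONG_MAX, &example_walk_ops, NULL);
	mmap_read_unlock(mm);
}
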
> > > > > I don't see the code below in mglru-mapletree (62dd11ea8d). Am I on the
> > > > > right branch/commit?
> > > >
> > > > oops, sorry, sleepy guy failed to include all the mglru patches!  It
> > > > should be fixed now (4e03b8e70232).
> > >
> > > Hi Liam,
> > >
> > > Mind taking a look? Thanks.
> > >
> > > I used
> > > 1fe4e0d45c05 (HEAD) mm/vmscan: remove obsolete comment in get_scan_count
> > >
> > > On aarch64:
> > > arch/arm64/kernel/elfcore.c:120:2: error: no member named 'mmap' in 'struct mm_struct'
> > > arch/arm64/kernel/elfcore.c:120:2: error: no member named 'vm_next' in 'struct vm_area_struct'
> > > arch/arm64/kernel/elfcore.c:130:2: error: no member named 'mmap' in 'struct mm_struct'
> > > arch/arm64/kernel/elfcore.c:130:2: error: no member named 'vm_next' in 'struct vm_area_struct'
> > > arch/arm64/kernel/elfcore.c:13:23: note: expanded from macro 'for_each_mte_vma'
> > > arch/arm64/kernel/elfcore.c:13:45: note: expanded from macro 'for_each_mte_vma'
> > > arch/arm64/kernel/elfcore.c:85:2: error: no member named 'mmap' in 'struct mm_struct'
> > > arch/arm64/kernel/elfcore.c:85:2: error: no member named 'vm_next' in 'struct vm_area_struct'
> > > arch/arm64/kernel/elfcore.c:95:2: error: no member named 'mmap' in 'struct mm_struct'
> > > arch/arm64/kernel/elfcore.c:95:2: error: no member named 'vm_next' in 'struct vm_area_struct'
> >
> > This was fixed in linux-next by commit 3a4f7ef4bed5 [1]. Applying the
> > same patch resolves the build errors here, although I will clean up
> > the define before it goes into the patch series.
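
(Expanding on "clean up the define": the for_each_mte_vma() macro in
arch/arm64/kernel/elfcore.c is what trips over the removed mm->mmap and
vm_next fields. The conversion is roughly the shape below; a sketch of
the direction, not necessarily the committed patch.)

/* Old form, broken by the series: walks the removed VMA linked list. */
#define for_each_mte_vma(tsk, vma)					\
	if (system_supports_mte())					\
		for (vma = tsk->mm->mmap; vma; vma = vma->vm_next)	\
			if (vma->vm_flags & VM_MTE)

/* Sketched replacement: the same walk via the series' VMA iterator,
 * with a VMA_ITERATOR(vmi, mm, 0) declared at each call site.
 */
#define for_each_mte_vma(vmi, vma)					\
	if (system_supports_mte())					\
		for_each_vma(vmi, vma)					\
			if (vma->vm_flags & VM_MTE)
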
>
> Thanks. With that commit, I was able to build and test on aarch64:

How did you hit this issue? Just on boot?

>
> ==================================================================
> BUG: KASAN: invalid-access in mas_destroy+0x10a4/0x126c
> Read of size 8 at addr 7bffff8015c1a110 by task CompositorTileW/9966
> Pointer tag: [7b], memory tag: [fe]
>
> CPU: 1 PID: 9966 Comm: CompositorTileW Not tainted 5.18.0-rc2-mm1-lockdep+ #2
> Call trace:
> dump_backtrace+0x1a0/0x200
> show_stack+0x24/0x30
> dump_stack_lvl+0x7c/0xa0
> print_report+0x15c/0x524
> kasan_report+0x84/0xb4
> kasan_tag_mismatch+0x28/0x3c
> __hwasan_tag_mismatch+0x30/0x60
> mas_destroy+0x10a4/0x126c
> mas_nomem+0x40/0xf4
> mas_store_gfp+0x9c/0xfc
> do_mas_align_munmap+0x344/0x688
> do_mas_munmap+0xf8/0x118
> __vm_munmap+0x154/0x1e0
> __arm64_sys_munmap+0x44/0x54
> el0_svc_common+0xfc/0x1cc
> do_el0_svc_compat+0x38/0x5c
> el0_svc_compat+0x68/0x118
> el0t_32_sync_handler+0xc0/0xf0
> el0t_32_sync+0x190/0x194
>
> Allocated by task 9966:
> kasan_set_track+0x4c/0x7c
> __kasan_slab_alloc+0x84/0xa8
> kmem_cache_alloc_bulk+0x300/0x408
> mas_alloc_nodes+0x188/0x268
> mas_nomem+0x88/0xf4
> mas_store_gfp+0x9c/0xfc
> do_mas_align_munmap+0x344/0x688
> do_mas_munmap+0xf8/0x118
> __vm_munmap+0x154/0x1e0
> __arm64_sys_munmap+0x44/0x54
> el0_svc_common+0xfc/0x1cc
> do_el0_svc_compat+0x38/0x5c
> el0_svc_compat+0x68/0x118
> el0t_32_sync_handler+0xc0/0xf0
> el0t_32_sync+0x190/0x194
>
> Freed by task 9966:
> kasan_set_track+0x4c/0x7c
> kasan_set_free_info+0x2c/0x38
> ____kasan_slab_free+0x13c/0x184
> __kasan_slab_free+0x14/0x24
> slab_free_freelist_hook+0x100/0x1ac
> kmem_cache_free_bulk+0x230/0x3b0
> mas_destroy+0x10d4/0x126c
> mas_nomem+0x40/0xf4
> mas_store_gfp+0x9c/0xfc
> do_mas_align_munmap+0x344/0x688
> do_mas_munmap+0xf8/0x118
> __vm_munmap+0x154/0x1e0
> __arm64_sys_munmap+0x44/0x54
> el0_svc_common+0xfc/0x1cc
> do_el0_svc_compat+0x38/0x5c
> el0_svc_compat+0x68/0x118
> el0t_32_sync_handler+0xc0/0xf0
> el0t_32_sync+0x190/0x194
>
> The buggy address belongs to the object at ffffff8015c1a100
> which belongs to the cache maple_node of size 256
> The buggy address is located 16 bytes inside of
> 256-byte region [ffffff8015c1a100, ffffff8015c1a200)
>
> The buggy address belongs to the physical page:
> page:fffffffe00570600 refcount:1 mapcount:0 mapping:0000000000000000
> index:0xa8ffff8015c1ad00 pfn:0x95c18
> head:fffffffe00570600 order:3 compound_mapcount:0 compound_pincount:0
> flags: 0x10200(slab|head|zone=0|kasantag=0x0)
> raw: 0000000000010200 6cffff8080030850 fffffffe003ec608 dbffff8080016280
> raw: a8ffff8015c1ad00 000000000020001e 00000001ffffffff 0000000000000000
> page dumped because: kasan: bad access detected
>
> Memory state around the buggy address:
> ffffff8015c19f00: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> ffffff8015c1a000: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >ffffff8015c1a100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> ^
> ffffff8015c1a200: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> ffffff8015c1a300: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> ==================================================================
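
For anyone mapping the trace back to the tree API: all three stacks
funnel through mas_store_gfp(), which retries through mas_nomem() when
a store runs out of nodes (the "Allocated by" stack) and releases
unused preallocations in mas_destroy() (the "Freed by" stack, and
where the tag mismatch fires). A minimal caller-side sketch of that
pattern, assuming the series' MA_STATE()/mm_mt interface; the function
name and range values here are illustrative only:

#include <linux/maple_tree.h>
#include <linux/mm.h>

/* Illustrative only: erase the entries covering [start, end - 1] from
 * the mm's maple tree, the same store-with-retry path the report hits.
 */
static int example_erase_range(struct mm_struct *mm,
			       unsigned long start, unsigned long end)
{
	int ret;

	MA_STATE(mas, &mm->mm_mt, start, end - 1);

	mmap_write_lock(mm);
	/* Storing NULL over the range drops the entries.  On -ENOMEM,
	 * mas_store_gfp() allocates nodes via mas_nomem() and retries
	 * internally before giving up.
	 */
	ret = mas_store_gfp(&mas, NULL, GFP_KERNEL);
	mmap_write_unlock(mm);
	return ret;
}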