Message-ID: <3d44f029-7870-f560-786f-f3a55b3ec0c6@huawei.com>
Date: Thu, 7 Dec 2017 09:46:59 +0800
From: Yisheng Xie <xieyisheng1@...wei.com>
To: Will Deacon <will.deacon@....com>,
chenjiankang <chenjiankang1@...wei.com>
CC: <catalin.marinas@....com>, <linux-kernel@...r.kernel.org>,
<wangkefeng.wang@...wei.com>
Subject: Re: a racy access flag clearing warning when calling mmap system call
Hi Will,
On 2017/12/1 21:18, Will Deacon wrote:
> On Fri, Dec 01, 2017 at 03:38:04PM +0800, chenjiankang wrote:
>> ------------[ cut here ]------------
>> WARNING: at ../../../../../kernel/linux-4.1/arch/arm64/include/asm/pgtable.h:211
>
> Given that this is a fairly old 4.1 kernel, could you try to reproduce the
> failure with something more recent, please? We've fixed many bugs since
> then, some of them involving huge pages.
Yeah, this is an old kernel, but I have found a scenario that can trigger this WARN_ON:
during fork(), dup_mmap() ends up calling copy_huge_pmd(), which clears the Access Flag
via pmd_mkold():
dup_mmap
 -> copy_page_range
  -> copy_pud_range
   -> copy_pmd_range
    -> copy_huge_pmd
     -> pmd_mkold
If the THP is never accessed again after dup_mmap and we then start to split it,
that will produce this call trace on the old kernel, right? (A sketch of the path
I mean is below.)
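
For reference, the relevant lines of copy_huge_pmd() as I read them in the
v4.1 sources (quoted from memory, so please double check against your tree)
are roughly:

	/* mm/huge_memory.c: copy_huge_pmd(), ~v4.1 */
	pmdp_set_wrprotect(src_mm, addr, src_pmd);
	pmd = pmd_mkold(pmd_wrprotect(pmd));    /* Access Flag cleared here */
	set_pmd_at(dst_mm, addr, dst_pmd, pmd); /* child inherits an "old" pmd */

So after fork the child's huge pmd is valid but has the Access Flag clear,
and a later split rewrites that valid entry with the flag still clear.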
It seems this is a normal scenario that nevertheless triggers the call trace on
this old kernel, so for this old kernel we should just remove the WARN_ON_ONCE,
right?
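
For comparison, the check that fires here looks, in its current mainline form
(the exact wording in this 4.1-based kernel may differ, so treat this as a
sketch rather than the precise code), like the following, from arm64's
set_pte_at()/set_pmd_at() path in arch/arm64/include/asm/pgtable.h:

	/*
	 * Warn when a valid entry is overwritten with the Access Flag
	 * cleared, since that can race with hardware AF updates.
	 */
	VM_WARN_ONCE(!pte_young(pte),
		     "%s: racy access flag clearing: 0x%016llx -> 0x%016llx",
		     __func__, pte_val(old_pte), pte_val(pte));

Decoding x8-x14 in the dump below gives exactly this message text, and x20/x21
appear to hold the old/new pmd values.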
Thanks
Yisheng Xie
>
> Thanks,
>
> Will
>
>> Modules linked in:
>> CPU: 0 PID: 3665 Comm: syz-executor7 Not tainted 4.1.44+ #1
>> Hardware name: linux,dummy-virt (DT)
>> task: ffffffc06a873fc0 ti: ffffffc05aefc000 task.ti: ffffffc05aefc000
>> PC is at pmdp_splitting_flush+0x194/0x1b0
>> LR is at pmdp_splitting_flush+0x194/0x1b0
>> pc : [<ffffffc0000a5244>] lr : [<ffffffc0000a5244>] pstate: 80000145
>> sp : ffffffc05aeff770
>> x29: ffffffc05aeff770 x28: ffffffc05ae45800
>> x27: 0000000020200000 x26: ffffffc061fdf450
>> x25: 0000000000020000 x24: 0000000000000001
>> x23: ffffffc06333d9c8 x22: ffffffc0014ba000
>> x21: 01e000009d200bd1 x20: 00e000009d200bd1
>> x19: ffffffc05ae45800 x18: 0000000000000000
>> x17: 00000000004b4000 x16: ffffffc00017fdc0
>> x15: 00000000038ee280 x14: 3030653130203e2d
>> x13: 2031646230303264 x12: 3930303030306530
>> x11: 30203a676e697261 x10: 656c632067616c66
>> x9 : 2073736563636120 x8 : 79636172203a7461
>> x7 : ffffffc05aeff430 x6 : ffffffc00015f38c
>> x5 : 0000000000000003 x4 : 0000000000000000
>> x3 : 0000000000000003 x2 : 0000000000010000
>> x1 : ffffff9005a03000 x0 : 000000000000004b
>> Call trace:
>> [<ffffffc0000a5244>] pmdp_splitting_flush+0x194/0x1b0
>> [<ffffffc0002784c0>] split_huge_page_to_list+0x168/0xdb0
>> [<ffffffc00027a0d8>] __split_huge_page_pmd+0x1b0/0x510
>> [<ffffffc00027b5ac>] split_huge_page_pmd_mm+0x84/0x88
>> [<ffffffc00027b674>] split_huge_page_address+0xc4/0xe8
>> [<ffffffc00027b7f4>] __vma_adjust_trans_huge+0x15c/0x190
>> [<ffffffc000238d5c>] vma_adjust+0x884/0x9f0
>> [<ffffffc0002390c8>] __split_vma.isra.5+0x200/0x310
>> [<ffffffc00023b7b8>] do_munmap+0x5e0/0x608
>> [<ffffffc00023c0d4>] mmap_region+0x12c/0x900
>> [<ffffffc00023cd2c>] do_mmap_pgoff+0x484/0x540
>> [<ffffffc000218118>] vm_mmap_pgoff+0x128/0x158
>> [<ffffffc000239920>] SyS_mmap_pgoff+0x188/0x300
>> [<ffffffc00008cce0>] sys_mmap+0x58/0x80
>>
>
> .
>