Message-ID: <fef9f0e9-33d6-4e64-80b5-095b4794d7c8@vivo.com>
Date: Fri, 26 Apr 2024 10:59:51 +0800
From: Huan Yang <link@...o.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>, "kernel@...o.com"
<kernel@...o.com>,
"syzbot+b07d8440edb5f8988eea@...kaller.appspotmail.com"
<syzbot+b07d8440edb5f8988eea@...kaller.appspotmail.com>,
Wang Qing <wangqing@...o.com>
Subject: Re: [PATCH] mm/page_alloc: fix alloc_pages_bulk/set_page_owner panic
on irq disabled
Hi Andrew,

On 2024/4/25 17:52, Huan Yang wrote:
>>> The problem is caused by set_page_owner allocating memory to save the
>>> stack with GFP_KERNEL while local irqs are disabled.
>>> So we can't assume that the alloc flags should be the same as the new
>>> page's; let's split them. In most situations the same flags are fine,
>>> but in alloc_pages_bulk, pass GFP_ATOMIC when calling prep_new_pages.
>> Please more fully describe the bug which is being fixed. A link to the
>> syzbot report would be helpful. I assume there was a stack backtrace
>> available? Seeing that will help others to understand the bug.
> Sorry, here is the backtrace:
> __dump_stack lib/dump_stack.c:79 [inline]
> dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:96
> ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9153
> prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5179
> __alloc_pages+0x12f/0x500 mm/page_alloc.c:5375
> alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2272
> stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
> save_stack+0x15e/0x1e0 mm/page_owner.c:120
> __set_page_owner+0x50/0x290 mm/page_owner.c:181
> prep_new_page mm/page_alloc.c:2445 [inline]
> __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5313
Thanks for your reply, but this patch was submitted back in 2021. At that
time it was believed that bypassing the bulk allocator when page_owner is
enabled would suffice, so this patch was not taken.

Have you recently encountered any issues related to this?

Thanks.
>> And if you are able to identify the patch which introduced the bug, a
>> Fixes: tag would be helpful as well.
>> Thanks.
>>