Message-ID: <ca66ecce-b8eb-ad22-2b25-bad8552ea5a4@linux.alibaba.com>
Date: Tue, 13 Jun 2023 14:42:13 +0800
From: Shuai Xue <xueshuai@...ux.alibaba.com>
To: Leo Yan <leo.yan@...aro.org>, James Clark <james.clark@....com>
Cc: alexander.shishkin@...ux.intel.com, peterz@...radead.org,
kirill@...temov.name, mingo@...hat.com, acme@...nel.org,
mark.rutland@....com, jolsa@...nel.org, namhyung@...nel.org,
irogers@...gle.com, adrian.hunter@...el.com,
linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org,
bpf@...r.kernel.org
Subject: Re: [PATCH 2/2] perf/ring_buffer: Fix high-order allocations for AUX
space with correct MAX_ORDER limit
On 2023/6/12 17:09, Leo Yan wrote:
> On Mon, Jun 12, 2023 at 09:45:38AM +0100, James Clark wrote:
>
> [...]
>
>>> @@ -609,8 +609,8 @@ static struct page *rb_alloc_aux_page(int node, int order)
>>> {
>>> struct page *page;
>>>
>>> - if (order > MAX_ORDER)
>>> - order = MAX_ORDER;
>>> + if (order >= MAX_ORDER)
>>> + order = MAX_ORDER - 1;
>>>
>>> do {
>>> page = alloc_pages_node(node, PERF_AUX_GFP, order);
>>
>>
>> It seems like this was only just recently changed with this as the
>> commit message (23baf83):
>>
>> mm, treewide: redefine MAX_ORDER sanely
>>
>> MAX_ORDER currently defined as number of orders page allocator
>> supports: user can ask buddy allocator for page order between 0 and
>> MAX_ORDER-1.
>>
>> This definition is counter-intuitive and lead to number of bugs all
>> over the kernel.
>>
>> Change the definition of MAX_ORDER to be inclusive: the range of
>> orders user can ask from buddy allocator is 0..MAX_ORDER now.
>>
>> It might be worth referring to this in the commit message or adding a
>> fixes: reference. Or maybe this new change isn't quite right?
>
> Good point. If so, we don't need this patch anymore.
>
> Thanks for reminding, James.
>
> Leo
Hi Leo and James,

I tested on the Linus master tree: the mentioned commit 23baf83 ("mm, treewide: redefine MAX_ORDER sanely")
has already fixed this oops.

I will drop this patch, thank you :)
Cheers,
Shuai