Message-Id: <4a66eea2-c23b-4c34-a5c6-508bf2a6fc47@app.fastmail.com>
Date: Thu, 22 Aug 2024 16:39:51 +0000
From: "Arnd Bergmann" <arnd@...db.de>
To: "Tomas Winkler" <tomas.winkler@...el.com>,
"Rohit Agarwal" <rohiagar@...omium.org>,
"Greg Kroah-Hartman" <gregkh@...uxfoundation.org>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Lubart, Vitaly" <vitaly.lubart@...el.com>,
"Alexander Usyskin" <alexander.usyskin@...el.com>
Subject: Re: [RFC] Order 4 allocation failures in the MEI client driver
On Thu, Aug 22, 2024, at 13:27, Winkler, Tomas wrote:
>> On Wed, Aug 21, 2024, at 05:20, Rohit Agarwal wrote:
>> > On 19/08/24 6:45 PM, Arnd Bergmann wrote:
>> >> On Tue, Aug 13, 2024, at 10:45, Rohit Agarwal wrote:
>> >>
>> >> What is the call chain you see in the kernel messages? Is it always
>> >> the same?
>> > Yes, the call stack is the same every time. This is the call stack:
>> >
>> > <4>[ 2019.101352] dump_stack_lvl+0x69/0xa0
>> > <4>[ 2019.101359] warn_alloc+0x10d/0x180
>> > <4>[ 2019.101363] __alloc_pages_slowpath+0xe3d/0xe80
>> > <4>[ 2019.101366] __alloc_pages+0x22f/0x2b0
>> > <4>[ 2019.101369] __kmalloc_large_node+0x9d/0x120
>> > <4>[ 2019.101373] ? mei_cl_alloc_cb+0x34/0xa0
>> > <4>[ 2019.101377] ? mei_cl_alloc_cb+0x74/0xa0
>> > <4>[ 2019.101379] __kmalloc+0x86/0x130
>> > <4>[ 2019.101382] mei_cl_alloc_cb+0x74/0xa0
>> > <4>[ 2019.101385] mei_cl_enqueue_ctrl_wr_cb+0x38/0x90
>>
>> Ok, so this might be a result of mei_cl_enqueue_ctrl_wr_cb() doing
>>
>> /* for RX always allocate at least client's mtu */
>> if (length)
>> length = max_t(size_t, length, mei_cl_mtu(cl));
>>
>> which was added in 3030dc056459 ("mei: add wrapper for queuing control
>> commands."). All the callers seem to be passing a short "length" of just a few
>> bytes, but this would always extend it to
>> cl->me_cl->props.max_msg_length in mei_cl_mtu().
>>
>> Not sure where that part is set.
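Side note: mei_cl_mtu() itself looks like a trivial accessor, roughly
(a simplified sketch of what I believe drivers/misc/mei/client.h has):

	static inline size_t mei_cl_mtu(const struct mei_cl *cl)
	{
		return cl->me_cl->props.max_msg_length;
	}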
>
> It's allocating the maximum-size receive buffer so it can
> accommodate any response.
> Looks like this part could be optimized with a pre-allocated buffer pool.
I understand that it's always trying to allocate the maximum; the
question is whether there is ever a need to set the maximum to more
than a page. Pre-allocating a buffer at probe time would also
address the issue, but if it's possible to just make that buffer
smaller, the pre-allocation wouldn't be needed.
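
To spell out where the order-4 request comes from, this is the RX
path as I read it, as a rough sketch rather than the literal driver
code (cl and cb stand in for the real client and callback objects):

	/* caller asks for a few bytes */
	size_t length = 4;

	/* mei_cl_enqueue_ctrl_wr_cb() bumps it to the client MTU */
	if (length)
		length = max_t(size_t, length, mei_cl_mtu(cl)); /* -> 64KB */

	/* mei_cl_alloc_cb() then allocates one contiguous buffer */
	cb->buf.data = kmalloc(length, GFP_KERNEL);

	/*
	 * With 4KB pages, 64KB means 16 contiguous pages, i.e. an
	 * order-4 allocation, which starts failing once memory is
	 * fragmented.
	 */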
Is the 64KB buffer size part of the Chrome-specific interface as
well, or is that part of the upstream kernel implementation?
Arnd