Message-ID: <bdc89d09-ce50-43a4-9043-3ca6a9245eb4@collabora.com>
Date: Fri, 2 May 2025 09:15:10 +0500
From: Muhammad Usama Anjum <usama.anjum@...labora.com>
To: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: usama.anjum@...labora.com,
Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>,
Jeff Johnson <jjohnson@...nel.org>, Jeff Hugo <jeff.hugo@....qualcomm.com>,
Youssef Samir <quic_yabdulra@...cinc.com>,
Matthew Leung <quic_mattleun@...cinc.com>, Yan Zhen <yanzhen@...o.com>,
Alex Elder <elder@...nel.org>,
Jacek Lawrynowicz <jacek.lawrynowicz@...ux.intel.com>,
Kunwu Chan <chentao@...inos.cn>, Troy Hanson <quic_thanson@...cinc.com>,
"Dr. David Alan Gilbert" <linux@...blig.org>, kernel@...labora.com,
mhi@...ts.linux.dev, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-wireless@...r.kernel.org,
ath11k@...ts.infradead.org, ath12k@...ts.infradead.org
Subject: Re: [PATCH v3] bus: mhi: host: don't free bhie tables during
suspend/hibernation
Hi Greg,
On 5/1/25 9:00 PM, Greg Kroah-Hartman wrote:
> On Tue, Apr 29, 2025 at 05:20:56PM +0500, Muhammad Usama Anjum wrote:
>> Fix dma_direct_alloc() failure at resume time during bhie_table
>> allocation. There is a crash report where, at resume time, the DMA
>> memory allocation fails under fragmentation/memory pressure and MHI
>> fails to re-initialize.
>>
>> To fix it, don't free the memory at power down during suspend /
>> hibernation. Instead, reuse the same allocated memory after every
>> resume / hibernation. This patch has been tested with both resume
>> and hibernation.
>>
>> The rddm is of constant size for a given hardware, while the
>> fbc_image size depends on the firmware. If the firmware changes,
>> we'll free and allocate new memory for it.
>>
>> Here are the crash logs:
>>
>> [ 3029.338587] mhi mhi0: Requested to power ON
>> [ 3029.338621] mhi mhi0: Power on setup success
>> [ 3029.668654] kworker/u33:8: page allocation failure: order:7, mode:0xc04(GFP_NOIO|GFP_DMA32), nodemask=(null),cpuset=/,mems_allowed=0
>> [ 3029.668682] CPU: 4 UID: 0 PID: 2744 Comm: kworker/u33:8 Not tainted 6.11.11-valve10-1-neptune-611-gb69e902b4338 #1ed779c892334112fb968aaa3facf9686b5ff0bd7
>> [ 3029.668690] Hardware name: Valve Galileo/Galileo, BIOS F7G0112 08/01/2024
>> [ 3029.668694] Workqueue: mhi_hiprio_wq mhi_pm_st_worker [mhi]
>> [ 3029.668717] Call Trace:
>> [ 3029.668722] <TASK>
>> [ 3029.668728] dump_stack_lvl+0x4e/0x70
>> [ 3029.668738] warn_alloc+0x164/0x190
>> [ 3029.668747] ? srso_return_thunk+0x5/0x5f
>> [ 3029.668754] ? __alloc_pages_direct_compact+0xaf/0x360
>> [ 3029.668761] __alloc_pages_slowpath.constprop.0+0xc75/0xd70
>> [ 3029.668774] __alloc_pages_noprof+0x321/0x350
>> [ 3029.668782] __dma_direct_alloc_pages.isra.0+0x14a/0x290
>> [ 3029.668790] dma_direct_alloc+0x70/0x270
>> [ 3029.668796] mhi_alloc_bhie_table+0xe8/0x190 [mhi faa917c5aa23a5f5b12d6a2c597067e16d2fedc0]
>> [ 3029.668814] mhi_fw_load_handler+0x1bc/0x310 [mhi faa917c5aa23a5f5b12d6a2c597067e16d2fedc0]
>> [ 3029.668830] mhi_pm_st_worker+0x5c8/0xaa0 [mhi faa917c5aa23a5f5b12d6a2c597067e16d2fedc0]
>> [ 3029.668844] ? srso_return_thunk+0x5/0x5f
>> [ 3029.668853] process_one_work+0x17e/0x330
>> [ 3029.668861] worker_thread+0x2ce/0x3f0
>> [ 3029.668868] ? __pfx_worker_thread+0x10/0x10
>> [ 3029.668873] kthread+0xd2/0x100
>> [ 3029.668879] ? __pfx_kthread+0x10/0x10
>> [ 3029.668885] ret_from_fork+0x34/0x50
>> [ 3029.668892] ? __pfx_kthread+0x10/0x10
>> [ 3029.668898] ret_from_fork_asm+0x1a/0x30
>> [ 3029.668910] </TASK>
>>
>> Tested-on: WCN6855 WLAN.HSP.1.1-03926.13-QCAHSPSWPL_V2_SILICONZ_CE-2.52297.6
>>
>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@...labora.com>
>
> What commit id does this fix? Should it go to stable kernel(s)? If so,
> how far back?
This patch fixes the dma_direct_alloc() failure that happens when there
is memory pressure and the allocator is unable to satisfy the request.
It's not a bug in the allocation API or the driver, so I think it should
be considered an improvement rather than a fix. Please correct me if
I'm wrong.
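
The retain-and-reuse idea can be sketched in plain C. This is a
userspace model only, not the actual MHI driver code: the struct and
function names here are hypothetical, and malloc() stands in for the
DMA-coherent allocation.

```c
#include <stdlib.h>
#include <string.h>

/* Userspace model of the patch's idea: keep the image-table buffer
 * alive across suspend/resume and reallocate only when the required
 * size changes (e.g. a new firmware image). All names are
 * hypothetical, not the real MHI API. */
struct img_info {
	void   *buf;
	size_t  size;
};

/* Return 0 on success. Reuse the existing buffer when the size still
 * matches; otherwise free it and allocate a fresh one. */
static int alloc_or_reuse(struct img_info *img, size_t needed)
{
	if (img->buf && img->size == needed)
		return 0;		/* resume path: reuse as-is */

	free(img->buf);			/* first boot, or firmware size changed */
	img->buf = malloc(needed);
	if (!img->buf) {
		img->size = 0;
		return -1;
	}
	img->size = needed;
	memset(img->buf, 0, needed);
	return 0;
}
```

The key property is that the common resume path never touches the
allocator at all, so it cannot fail under the fragmentation seen in
the crash log above.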
--
Regards,
Usama