Message-Id: <CZF1Z7XP7TZD.3IY7CMWHUYZNC@bootlin.com>
Date: Mon, 26 Feb 2024 14:42:51 +0100
From: Théo Lebrun <theo.lebrun@...tlin.com>
To: "Mark Brown" <broonie@...nel.org>, "Dhruva Gole" <d-gole@...com>
Cc: "Apurva Nandan" <a-nandan@...com>, <linux-spi@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, "Gregory CLEMENT"
<gregory.clement@...tlin.com>, "Vladimir Kondratiev"
<vladimir.kondratiev@...ileye.com>, "Thomas Petazzoni"
<thomas.petazzoni@...tlin.com>, "Tawfik Bayouk"
<tawfik.bayouk@...ileye.com>, "Nishanth" <nm@...com>, "Vignesh"
<vigneshr@...com>
Subject: Re: [PATCH v4 0/4] spi: cadence-qspi: Fix runtime PM and
system-wide suspend
Hello,
On Mon Feb 26, 2024 at 2:40 PM CET, Mark Brown wrote:
> On Mon, Feb 26, 2024 at 01:27:57PM +0000, Mark Brown wrote:
> > On Mon, Feb 26, 2024 at 05:48:03PM +0530, Dhruva Gole wrote:
> > > On Feb 22, 2024 at 19:13:29 +0000, Mark Brown wrote:
>
> > [ 1.709414] Call trace:
> > [ 1.711852] __mutex_lock.constprop.0+0x84/0x540
> > [ 1.716460] __mutex_lock_slowpath+0x14/0x20
> > [ 1.720719] mutex_lock+0x48/0x54
> > [ 1.724026] spi_controller_suspend+0x30/0x7c
> > [ 1.728377] cqspi_suspend+0x1c/0x6c
> > [ 1.731944] pm_generic_runtime_suspend+0x2c/0x44
> > [ 1.736640] genpd_runtime_suspend+0xa8/0x254
>
> > (it's generally helpful to provide the most relevant section directly.)
>
> > The issue here appears to be that we've registered for runtime suspend
> > prior to registering the controller...
>
> Actually, no - after this series cqspi_suspend() is the system not
> runtime PM operation and should not be called from runtime suspend. How
> is that happening?
You might have seen my answer by now: this series is not in the tags
you quoted. I believe the memory corruption this series fixes is being
hit for the first time on TI hardware; they probably avoided it
previously by luck.
Regards,
--
Théo Lebrun, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com