Message-ID: <20220726145430.bfwidmw6xmeppbfb@bogus>
Date: Tue, 26 Jul 2022 15:54:30 +0100
From: Sudeep Holla <sudeep.holla@....com>
To: Mark Rutland <mark.rutland@....com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
Sudeep Holla <sudeep.holla@....com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [-next] Lockdep warnings
On Tue, Jul 26, 2022 at 03:44:31PM +0100, Mark Rutland wrote:
> On Tue, Jul 26, 2022 at 11:41:34AM +0100, Sudeep Holla wrote:
> > I was seeing the lockdep warnings below on my arm64 Juno development
> > platform almost two weeks ago with -next. I wanted to check for similar
> > reports before posting and forgot.
>
> [...]
>
> > However, I don't see the above warning with the latest -next. When I tried
> > yesterday's -next just now, I saw a different warning. I'm not sure if they
> > are related. I haven't tried to bisect.
> >
> > --->8
> > =============================
> > [ BUG: Invalid wait context ]
> > 5.19.0-rc8-next-20220725 #38 Not tainted
> > -----------------------------
> > swapper/0/0 is trying to lock:
> > (&drvdata->spinlock){....}-{3:3}, at: cti_cpu_pm_notify+0x54/0x114
>
> Hmmm... do you have CONFIG_PROVE_RAW_LOCK_NESTING enabled?
>
Yes.
> IIUC that should be {2:2} otherwise...
>
> > other info that might help us debug this:
> > context-{5:5}
> > 1 lock held by swapper/0/0:
> > #0: (cpu_pm_notifier.lock){....}-{2:2}, at: cpu_pm_enter+0x2c/0x80
>
> ... and this is telling us that we're trying to take a regular spinlock under a
> raw spinlock, which is not as intended.
>
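If I'm reading the splat right, cti_cpu_pm_notify() takes drvdata->spinlock
(a regular spinlock_t) while cpu_pm_enter() already holds
cpu_pm_notifier.lock (a raw_spinlock_t). A minimal, purely hypothetical
sketch of that shape (not the actual cpu_pm/CTI code) would be something
like:

// SPDX-License-Identifier: GPL-2.0
/*
 * Hypothetical demo module: acquire a spinlock_t while a raw_spinlock_t
 * is held, which is the nesting CONFIG_PROVE_RAW_LOCK_NESTING flags as
 * "Invalid wait context".
 */
#include <linux/module.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(outer_raw_lock);	/* stands in for cpu_pm_notifier.lock */
static DEFINE_SPINLOCK(inner_lock);		/* stands in for drvdata->spinlock */

static int __init nesting_demo_init(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&outer_raw_lock, flags);

	/*
	 * On PREEMPT_RT a spinlock_t becomes a sleeping lock, so taking it
	 * inside a raw spinlock critical section is invalid; lockdep
	 * reports it even on non-RT builds when
	 * CONFIG_PROVE_RAW_LOCK_NESTING is enabled.
	 */
	spin_lock(&inner_lock);
	spin_unlock(&inner_lock);

	raw_spin_unlock_irqrestore(&outer_raw_lock, flags);

	return 0;
}
module_init(nesting_demo_init);

MODULE_LICENSE("GPL");

Loading something like that with CONFIG_PROVE_RAW_LOCK_NESTING=y should
trigger the same kind of splat as the one above.
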
> The Kconfig text notes:
>
> NOTE: There are known nesting problems. So if you enable this
> option expect lockdep splats until these problems have been fully
> addressed which is work in progress. This config switch allows to
> identify and analyze these problems. It will be removed and the
> check permanently enabled once the main issues have been fixed.
>
Ah, I hadn't seen or read this. Thanks for digging this up and sharing.
Sorry for the noise. It's good that I now know about this limitation;
I'll try to remember it.
Thanks again for your time, Mark.
--
Regards,
Sudeep