Message-ID: <20211025113007.6d09bfed.john@metanate.com>
Date: Mon, 25 Oct 2021 11:30:07 +0100
From: John Keeping <john@...anate.com>
To: "Rafael J. Wysocki" <rafael@...nel.org>
Cc: linux-rt-users@...r.kernel.org, Pavel Machek <pavel@....cz>,
Len Brown <len.brown@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Linux PM <linux-pm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [RFC PATCH RT] PM: runtime: avoid retry loops on RT
On Thu, 21 Oct 2021 12:37:05 +0200
"Rafael J. Wysocki" <rafael@...nel.org> wrote:
> The initial motivation for adding irq_safe was to allow interrupt
> handlers of some devices to use PM-runtime, but in RT kernels that's
> possible regardless IIUC, so I don't see a reason for having irq_safe
> at all in that case.
I coded up the "no irq_safe" version, but the sleeping-function debug
check complains loudly about it:
BUG: sleeping function called from invalid context at drivers/base/power/runtime.c:1111
in_atomic(): 0, irqs_disabled(): 0, non_block: 0, pid: 237, name: pm-runtime-prio
preempt_count: 0, expected: 0
RCU nest depth: 2, expected: 0
INFO: lockdep is turned off.
CPU: 3 PID: 237 Comm: pm-runtime-prio Tainted: G W 5.15.0-rc6-rt13 #1
Hardware name: Rockchip (Device Tree)
[<c010f9d0>] (unwind_backtrace) from [<c010afc8>] (show_stack+0x10/0x14)
[<c010afc8>] (show_stack) from [<c090ec30>] (dump_stack_lvl+0x58/0x70)
[<c090ec30>] (dump_stack_lvl) from [<c014bee0>] (__might_resched+0x1dc/0x270)
[<c014bee0>] (__might_resched) from [<c059a1a0>] (__pm_runtime_resume+0x2c/0x6c)
[<c059a1a0>] (__pm_runtime_resume) from [<c04b8a44>] (pl330_issue_pending+0x60/0x84)
[<c04b8a44>] (pl330_issue_pending) from [<c07306b8>] (snd_dmaengine_pcm_trigger+0xec/0x14c)
[<c07306b8>] (snd_dmaengine_pcm_trigger) from [<c0767528>] (soc_component_trigger+0x20/0x38)
[<c0767528>] (soc_component_trigger) from [<c0768440>] (snd_soc_pcm_component_trigger+0xd8/0xf4)
[<c0768440>] (snd_soc_pcm_component_trigger) from [<c0768e34>] (soc_pcm_trigger+0x48/0x154)
[<c0768e34>] (soc_pcm_trigger) from [<c0725f74>] (snd_pcm_action_single+0x38/0x64)
[<c0725f74>] (snd_pcm_action_single) from [<c0727f28>] (snd_pcm_action+0x5c/0x60)
[<c0727f28>] (snd_pcm_action) from [<c0727f68>] (snd_pcm_action_lock_irq+0x28/0x3c)
[<c0727f68>] (snd_pcm_action_lock_irq) from [<c027f474>] (vfs_ioctl+0x20/0x38)
[<c027f474>] (vfs_ioctl) from [<c027fe54>] (sys_ioctl+0xc0/0x96c)
[<c027fe54>] (sys_ioctl) from [<c0100060>] (ret_fast_syscall+0x0/0x1c)
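(For reference, the check that fires at runtime.c:1111 is the
might-sleep annotation at the top of __pm_runtime_resume().  Quoting
roughly from memory, so the exact condition may differ slightly:

	int __pm_runtime_resume(struct device *dev, int rpmflags)
	{
		unsigned long flags;
		int retval;

		/* A synchronous resume may sleep unless the device is
		 * irq_safe, hence the splat once irq_safe is dropped. */
		might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe);
		...

With irq_safe gone, resuming the pl330 from the PCM trigger path trips
this, presumably because the substream lock is held there; on RT,
taking a spinlock_t enters an RCU read-side critical section, which
would also explain the "RCU nest depth: 2" above.)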
Now that I have a reliable reproducer, it turns out that the original
patch in this thread also has problems: it triggers a WARN from RCU.
The version I have now, which seems to work and doesn't produce any
dmesg complaints, is below, but I'm really not sure whether this is
considered an acceptable use of schedule_rtlock().  (I suspect it also
fails to compile without CONFIG_PREEMPT_RT, since schedule_rtlock()
isn't declared in that case.)
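If the missing declaration is the blocker, one option would be a local
fallback helper, something like this (helper name made up, untested):

	/* schedule_rtlock() is only declared on PREEMPT_RT; elsewhere the
	 * irq_safe case never reaches this wait loop (it spins with
	 * cpu_relax() instead), so plain schedule() is fine. */
	static void rpm_schedule(void)
	{
	#ifdef CONFIG_PREEMPT_RT
		schedule_rtlock();
	#else
		schedule();
	#endif
	}

but I've left the patch below calling schedule_rtlock() directly for
now.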
-- >8 --
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index ec94049442b9..79cf9997f71b 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -596,7 +596,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 			goto out;
 		}
 
-		if (dev->power.irq_safe) {
+		if (dev->power.irq_safe && !IS_ENABLED(CONFIG_PREEMPT_RT)) {
 			spin_unlock(&dev->power.lock);
 
 			cpu_relax();
@@ -614,7 +614,10 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 
 			spin_unlock_irq(&dev->power.lock);
 
-			schedule();
+			if (dev->power.irq_safe)
+				schedule_rtlock();
+			else
+				schedule();
 
 			spin_lock_irq(&dev->power.lock);
 		}
@@ -777,7 +780,7 @@ static int rpm_resume(struct device *dev, int rpmflags)
 			goto out;
 		}
 
-		if (dev->power.irq_safe) {
+		if (dev->power.irq_safe && !IS_ENABLED(CONFIG_PREEMPT_RT)) {
 			spin_unlock(&dev->power.lock);
 
 			cpu_relax();
@@ -796,7 +799,10 @@ static int rpm_resume(struct device *dev, int rpmflags)
 
 			spin_unlock_irq(&dev->power.lock);
 
-			schedule();
+			if (dev->power.irq_safe)
+				schedule_rtlock();
+			else
+				schedule();
 
 			spin_lock_irq(&dev->power.lock);
 		}
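For anyone else reproducing this: the wait path patched above is only
reached for devices whose drivers opted in with pm_runtime_irq_safe(),
which pl330 does in its probe IIRC.  A minimal sketch of such an
opt-in (driver name hypothetical):

	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>

	static int foo_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;

		pm_runtime_enable(dev);
		/* Declare the runtime PM callbacks safe to run with
		 * interrupts disabled; this selects the irq_safe wait
		 * path in rpm_suspend()/rpm_resume() patched above. */
		pm_runtime_irq_safe(dev);

		return 0;
	}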