Message-ID: <87k09r45ww.wl-maz@kernel.org>
Date:   Wed, 08 Jun 2022 14:54:23 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Lucas Stach <l.stach@...gutronix.de>
Cc:     Liu Ying <victor.liu@....com>, linux-kernel@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org,
        Thomas Gleixner <tglx@...utronix.de>,
        Shawn Guo <shawnguo@...nel.org>,
        Sascha Hauer <s.hauer@...gutronix.de>,
        Pengutronix Kernel Team <kernel@...gutronix.de>,
        Fabio Estevam <festevam@...il.com>,
        NXP Linux Team <linux-imx@....com>
Subject: Re: [PATCH] irqchip/irq-imx-irqsteer: Get/put PM runtime in ->irq_unmask()/irq_mask()

On Wed, 08 Jun 2022 13:02:46 +0100,
Lucas Stach <l.stach@...gutronix.de> wrote:
> 
> > On Wednesday, 08.06.2022 at 19:29 +0800, Liu Ying wrote:
> > On Wed, 2022-06-08 at 12:56 +0200, Lucas Stach wrote:
> > > > On Wednesday, 08.06.2022 at 18:50 +0800, Liu Ying wrote:
> > > > Now that runtime PM support has been added to this driver, we have
> > > > to enable power before accessing irqchip registers and disable it
> > > > again once the access is done.  This patch calls pm_runtime_get_sync()
> > > > in ->irq_unmask() and pm_runtime_put() in ->irq_mask() to make sure
> > > > power is managed around the register access.
> > > > 
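(For reference, the driver-side approach described in the quoted commit
message boils down to something like the sketch below; the structure, field
names and register layout are illustrative placeholders, not the actual
irq-imx-irqsteer code.)

	/* Illustrative sketch only -- names and register layout are placeholders. */
	#include <linux/bits.h>
	#include <linux/io.h>
	#include <linux/irq.h>
	#include <linux/pm_runtime.h>

	struct example_irqsteer {
		struct device *dev;	/* device with runtime PM enabled */
		void __iomem *regs;	/* channel mask register */
	};

	static void example_irq_unmask(struct irq_data *d)
	{
		struct example_irqsteer *steer = irq_data_get_irq_chip_data(d);
		u32 val;

		/* Power the block up before touching its registers. */
		pm_runtime_get_sync(steer->dev);

		val = readl_relaxed(steer->regs);
		writel_relaxed(val | BIT(d->hwirq % 32), steer->regs);
	}

	static void example_irq_mask(struct irq_data *d)
	{
		struct example_irqsteer *steer = irq_data_get_irq_chip_data(d);
		u32 val;

		val = readl_relaxed(steer->regs);
		writel_relaxed(val & ~BIT(d->hwirq % 32), steer->regs);

		/* Drop the reference once the register access is done. */
		pm_runtime_put(steer->dev);
	}
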
> > > 
> > > Can you tell me in which case this is necessary? IIRC the IRQ core
> > 
> > With the i.MX8qxp DPU driver[1], I see the synchronous external abort
> > below:
> > 
> > [    1.207270] Internal error: synchronous external abort: 96000210 [#1] PREEMPT SMP
> > [    1.207287] Modules linked in:
> > [    1.207299] CPU: 1 PID: 64 Comm: kworker/u8:2 Not tainted 5.18.0-rc6-next-20220509-00053-gf01f74ee1c18 #272
> > [    1.207311] Hardware name: Freescale i.MX8QXP MEK (DT)
> > [    1.207319] Workqueue: events_unbound deferred_probe_work_func
> > [    1.207339] pstate: 400000c5 (nZcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> > [    1.207349] pc : imx_irqsteer_irq_unmask+0x48/0x80
> > [    1.207360] lr : imx_irqsteer_irq_unmask+0x38/0x80
> > [    1.207368] sp : ffff80000a88b900
> > [    1.207372] x29: ffff80000a88b900 x28: ffff8000080fed90 x27: ffff8000080fefe0
> > [    1.207388] x26: ffff8000080fef40 x25: ffff0008012538d4 x24: ffff8000092fe388
> > [    1.207407] x23: 0000000000000001 x22: ffff0008013295b4 x21: ffff000801329580
> > [    1.207425] x20: ffff0008003faa60 x19: 000000000000000e x18: 0000000000000000
> > [    1.207443] x17: 0000000000000003 x16: 0000000000000162 x15: 0000000000000001
> > [    1.207459] x14: 0000000000000002 x13: 0000000000000018 x12: 0000000000000040
> > [    1.207477] x11: ffff000800682480 x10: ffff000800682482 x9 : ffff80000a072678
> > [    1.207495] x8 : ffff0008006a64a8 x7 : 0000000000000000 x6 : ffff0008006a6608
> > [    1.207513] x5 : ffff800009070a18 x4 : 0000000000000000 x3 : ffff80000b240000
> > [    1.207529] x2 : ffff80000b240038 x1 : 00000000000000c0 x0 : 00000000000000c0
> > [    1.207549] Call trace:
> > [    1.207553]  imx_irqsteer_irq_unmask+0x48/0x80
> > [    1.207562]  irq_enable+0x40/0x8c
> > [    1.207575]  __irq_startup+0x78/0xa4
> > [    1.207588]  irq_startup+0x78/0x16c
> > [    1.207601]  irq_activate_and_startup+0x38/0x70
> > [    1.207612]  __irq_do_set_handler+0xcc/0x1e0
> > [    1.207626]  irq_set_chained_handler_and_data+0x58/0xa0
> 
> Ooh, I think this is the problem. The IRQ is not requested in the usual
> way when a chained handler is added, so this might bypass the runtime
> PM handling normally done in the IRQ core. In that case this is a core
> issue and should not be worked around in the driver, but the core
> should take the RPM reference for the chained handler, just like it
> does for normal IRQs.
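(To illustrate the distinction: from a driver's point of view the two
registration paths look roughly like the sketch below; the handler and
variable names are placeholders.)

	/* Illustrative sketch only -- handlers and data are placeholders. */
	#include <linux/interrupt.h>
	#include <linux/irq.h>

	static irqreturn_t example_handler(int irq, void *data)
	{
		return IRQ_HANDLED;
	}

	static void example_chained_handler(struct irq_desc *desc)
	{
		/* demux child interrupts here */
	}

	static int example_setup(unsigned int child_irq, unsigned int parent_irq,
				 void *drv_data)
	{
		int ret;

		/*
		 * Normal path: request_irq() ends up in __setup_irq(), which
		 * calls irq_chip_pm_get() and so takes a runtime PM reference
		 * on the irqchip before the line is started up.
		 */
		ret = request_irq(child_irq, example_handler, 0, "example", drv_data);
		if (ret)
			return ret;

		/*
		 * Chained mux path: installed directly via __irq_do_set_handler(),
		 * which (without the change below) never goes through
		 * irq_chip_pm_get(), so the unmask done at startup can hit an
		 * unpowered block.
		 */
		irq_set_chained_handler_and_data(parent_irq, example_chained_handler,
						 drv_data);
		return 0;
	}
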

Well spotted. Could you please give the hack below (compile-tested
only) a go?

Thanks,

	M.

From 1426cadd87717f1d876c7563f2a29b00283a847e Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@...nel.org>
Date: Wed, 8 Jun 2022 14:45:35 +0100
Subject: [PATCH] genirq: PM: Use runtime PM for chained interrupts

When requesting an interrupt, we correctly call into the runtime
PM framework to guarantee that the underlying interrupt controller
is up and running.

However, we fail to do so for chained interrupt controllers, as
the mux interrupt is not requested along the same path.

Augment __irq_do_set_handler() to call into the runtime PM code
in this case, making sure the PM flow is the same for all interrupts.

Reported-by: Lucas Stach <l.stach@...gutronix.de>
Signed-off-by: Marc Zyngier <maz@...nel.org>
Link: https://lore.kernel.org/r/26973cddee5f527ea17184c0f3fccb70bc8969a0.camel@pengutronix.de
---
 kernel/irq/chip.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index e6b8e564b37f..886789dcee43 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -1006,8 +1006,10 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
 		if (desc->irq_data.chip != &no_irq_chip)
 			mask_ack_irq(desc);
 		irq_state_set_disabled(desc);
-		if (is_chained)
+		if (is_chained) {
 			desc->action = NULL;
+			WARN_ON(irq_chip_pm_put(irq_desc_get_irq_data(desc)));
+		}
 		desc->depth = 1;
 	}
 	desc->handle_irq = handle;
@@ -1033,6 +1035,7 @@ __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle,
 		irq_settings_set_norequest(desc);
 		irq_settings_set_nothread(desc);
 		desc->action = &chained_action;
+		WARN_ON(irq_chip_pm_get(irq_desc_get_irq_data(desc)));
 		irq_activate_and_startup(desc, IRQ_RESEND);
 	}
 }
-- 
2.34.1
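
For reference, irq_chip_pm_get()/irq_chip_pm_put() are the same helpers used
on the request_irq() path: when a PM device is associated with the irqchip,
they take and drop a runtime PM reference on it, so with the hunks above the
mux line is powered up before irq_activate_and_startup() touches the
hardware. Roughly, as an approximation rather than the actual
kernel/irq/chip.c code:

	/* Approximation of the idea only -- not the actual implementation. */
	#include <linux/pm_runtime.h>

	static int example_irq_chip_pm_get(struct device *pm_dev)
	{
		/* Nothing to do if no PM device sits behind the irqchip. */
		if (!IS_ENABLED(CONFIG_PM) || !pm_dev)
			return 0;

		/* Resume the irqchip provider before its registers are touched. */
		return pm_runtime_resume_and_get(pm_dev);
	}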


-- 
Without deviation from the norm, progress is not possible.
