Message-ID: <87wnkh26ar.wl-maz@kernel.org>
Date:   Mon, 06 Dec 2021 13:48:12 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Shawn Guo <shawn.guo@...aro.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Maulik Shah <quic_mkshah@...cinc.com>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Loic Poulain <loic.poulain@...aro.org>,
        linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/3] irqchip: Add Qualcomm MPM controller driver

On Mon, 06 Dec 2021 13:15:31 +0000,
Shawn Guo <shawn.guo@...aro.org> wrote:
> 
> On Mon, Dec 06, 2021 at 09:46:29AM +0000, Marc Zyngier wrote:
> [...]
> > > +/* Triggered by RPM when system resumes from deep sleep */
> > > +static irqreturn_t qcom_mpm_handler(int irq, void *dev_id)
> > > +{
> > > +	struct qcom_mpm_priv *priv = dev_id;
> > > +	unsigned long enable, pending;
> > > +	int i, j;
> > > +
> > > +	for (i = 0; i < priv->reg_stride; i++) {
> > > +		enable = qcom_mpm_read(priv, MPM_REG_ENABLE, i);
> > > +		pending = qcom_mpm_read(priv, MPM_REG_STATUS, i);
> > > +		pending &= enable;
> > > +
> > > +		for_each_set_bit(j, &pending, 32) {
> > > +			unsigned int pin = 32 * i + j;
> > > +			struct irq_desc *desc =
> > > +					irq_resolve_mapping(priv->domain, pin);
> > 
> > Assignments on a single line, please.
> 
> This is to avoid the over-80-columns checkpatch warning.

I don't care about what checkpatch says. This is unreadable and
ungrepable.

> 
> > 
> > > +			struct irq_data *d = &desc->irq_data;
> > > +
> > > +			if (!irqd_is_level_type(d))
> > > +				irq_set_irqchip_state(d->irq,
> > > +						IRQCHIP_STATE_PENDING, true);
> > > +
> > > +		}
> > > +	}
> > > +
> > > +	return IRQ_HANDLED;
> > > +}
> > > +
> > > +static int qcom_mpm_enter_sleep(struct qcom_mpm_priv *priv)
> > > +{
> > > +	int i, ret;
> > > +
> > > +	for (i = 0; i < priv->reg_stride; i++)
> > > +		qcom_mpm_write(priv, MPM_REG_STATUS, i, 0);
> > > +
> > > +	/* Notify RPM to write vMPM into HW */
> > 
> > What do you mean by 'into HW'? We just did that, right? or are these
> > registers just fake and most of the stuff is in the RPM?
> 
> I have a note about this in the commit log.
> 
> - All the register settings are done by APSS on an internal memory
>   region called vMPM, and RPM will flush them into hardware after it
>   receives a mailbox/IPC notification from APSS.
> 
> So yes, these registers are fake/virtual in memory, and RPM will
> actually flush the values into the MPM hardware block.

Then why are you using MMIO accessors all over the place if this is
just RAM? Who *owns* this memory? Is it normal DRAM? Or some flops
exposed by a device? Why isn't the state simply communicated over the
mailbox instead?

> 
> > 
> > > +	ret = mbox_send_message(priv->mbox_chan, NULL);
> > > +	if (ret < 0)
> > > +		return ret;
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static int qcom_mpm_cpu_pm_callback(struct notifier_block *nb,
> > > +				    unsigned long action, void *data)
> > > +{
> > > +	struct qcom_mpm_priv *priv = container_of(nb, struct qcom_mpm_priv,
> > > +						  pm_nb);
> > > +	int ret = NOTIFY_OK;
> > > +	int cpus_in_pm;
> > > +
> > > +	switch (action) {
> > > +	case CPU_PM_ENTER:
> > > +		cpus_in_pm = atomic_inc_return(&priv->cpus_in_pm);
> > > +		/*
> > > +		 * NOTE: comments for num_online_cpus() point out that it's
> > > +		 * only a snapshot so we need to be careful. It should be OK
> > > +		 * for us to use, though.  It's important for us not to miss
> > > +		 * if we're the last CPU going down so it would only be a
> > > +		 * problem if a CPU went offline right after we did the check
> > > +		 * AND that CPU was not idle AND that CPU was the last non-idle
> > > +		 * CPU. That can't happen. CPUs would have to come out of idle
> > > +		 * before the CPU could go offline.
> > > +		 */
> > > +		if (cpus_in_pm < num_online_cpus())
> > > +			return NOTIFY_OK;
> > > +		break;
> > 
> > I'm really not keen on this sort of tracking in an irqchip driver.
> > Nobody else needs this. This is also copy-pasted (without even
> > mentioning it) from rpmh_rsc_cpu_pm_callback(). Why is that logic
> > needed twice, given that the RPM is also involved is this sequence?
> 
> Yes, this is copy-pasted from rpmh-rsc and I should have mentioned that
> in some way.  But rpmh-rsc is a driver specific to RPM-Hardened (RPMH),
> which is found on Qualcomm high-end SoCs like SDM845 and SC7180.  On
> these RPMH based SoCs, PDC works as the wake-up interrupt controller.
> However, mid-range SoCs like SDM660 and QCM2290 run a different
> combination: RPM + MPM.  Note - RPMH and RPM are two similar but
> separate blocks.
> 
> What we are implementing is for RPM + MPM based SoCs, and rpmh-rsc is
> irrelevant here.  The same cpu_pm tracking logic can be reused though.

No, I really don't think this is the right place for this.

This is generic CPU power management code (finding out who is the last
man standing), and I don't want it buried into some random drivers. If
you have a need for this kind of logic, build it *outside* of the
driver infrastructure, and into the CPU PM code. Interrupt controllers
have no business with this stuff.

> > 
> > > +	case CPU_PM_ENTER_FAILED:
> > > +	case CPU_PM_EXIT:
> > > +		atomic_dec(&priv->cpus_in_pm);
> > > +		return NOTIFY_OK;
> > > +	default:
> > > +		return NOTIFY_DONE;
> > > +	}
> > > +
> > > +	/*
> > > +	 * It's likely we're on the last CPU. Grab the lock and write MPM for
> > > +	 * sleep. Grabbing the lock means that if we race with another CPU
> > > +	 * coming up we are still guaranteed to be safe.
> > > +	 */
> > > +	if (raw_spin_trylock(&priv->lock)) {
> > > +		if (qcom_mpm_enter_sleep(priv))
> > > +			ret = NOTIFY_BAD;
> > > +		raw_spin_unlock(&priv->lock);
> > > +	} else {
> > > +		/* Another CPU must be up */
> > > +		return NOTIFY_OK;
> > > +	}
> > > +
> > > +	if (ret == NOTIFY_BAD) {
> > > +		/* Double-check if we're here because someone else is up */
> > > +		if (cpus_in_pm < num_online_cpus())
> > > +			ret = NOTIFY_OK;
> > > +		else
> > > +			/* We won't be called w/ CPU_PM_ENTER_FAILED */
> > > +			atomic_dec(&priv->cpus_in_pm);
> > > +	}
> > > +
> > > +	return ret;
> > > +}
> 
> [...]
> 
> > > +/*
> > > + * The mapping between MPM_GIC pin and GIC SPI number on QCM2290.  It's taken
> > > + * from downstream qcom-mpm-scuba.c with a small transform on the GIC
> > > + * SPI numbers (the second column).  The downstream binding numbers GIC
> > > + * SPIs starting from 32, while the binding here starts from 0, which
> > > + * is why each number is 32 less than its downstream counterpart.
> > 
> > This doesn't answer my earlier question: is this list complete? Or is
> > it likely to change?
> 
> Yes, it's complete for QCM2290.
> 
> > Also, why is that in not in the device tree,
> > given that it is likely to change from one SoC to another?
> 
> Yes, this is SoC-specific data, and it will differ from one SoC to
> another.  Different subsystem maintainers have different views on such
> SoC data.  Some think it belongs in the kernel/driver and can be matched
> using a SoC-specific compatible, while others prefer to have it in DT.
> 
> As I mentioned in the commit log, this SoC data actually consists of two
> parts:
> 
>  1) Map of MPM_GIC pin <--> GIC interrupt number
>  2) Map of MPM_GPIO pin <--> GPIO number
> 
> Since we already have map 2) defined in pinctrl driver rather than DT,
> I put map 1) in MPM driver.

That's a pinctrl decision. I don't want more static tables in random
drivers on the irqchip side. You are just reinventing the board files
we got rid of 10 years ago, only spread all over the various drivers
instead of arch/arm/mach-*/... 

	M.

-- 
Without deviation from the norm, progress is not possible.
