Date:   Mon, 6 Dec 2021 21:15:31 +0800
From:   Shawn Guo <shawn.guo@...aro.org>
To:     Marc Zyngier <maz@...nel.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Maulik Shah <quic_mkshah@...cinc.com>,
        Bjorn Andersson <bjorn.andersson@...aro.org>,
        Loic Poulain <loic.poulain@...aro.org>,
        linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 3/3] irqchip: Add Qualcomm MPM controller driver

On Mon, Dec 06, 2021 at 09:46:29AM +0000, Marc Zyngier wrote:
[...]
> > +/* Triggered by RPM when system resumes from deep sleep */
> > +static irqreturn_t qcom_mpm_handler(int irq, void *dev_id)
> > +{
> > +	struct qcom_mpm_priv *priv = dev_id;
> > +	unsigned long enable, pending;
> > +	int i, j;
> > +
> > +	for (i = 0; i < priv->reg_stride; i++) {
> > +		enable = qcom_mpm_read(priv, MPM_REG_ENABLE, i);
> > +		pending = qcom_mpm_read(priv, MPM_REG_STATUS, i);
> > +		pending &= enable;
> > +
> > +		for_each_set_bit(j, &pending, 32) {
> > +			unsigned int pin = 32 * i + j;
> > +			struct irq_desc *desc =
> > +					irq_resolve_mapping(priv->domain, pin);
> 
> Assignments on a single line, please.

This is to avoid the over-80-columns checkpatch warning.
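
One option that keeps the assignment on a single line without tripping
checkpatch would be to split the declaration from the assignment, e.g.
(untested, just to illustrate):

		for_each_set_bit(j, &pending, 32) {
			unsigned int pin = 32 * i + j;
			struct irq_desc *desc;

			desc = irq_resolve_mapping(priv->domain, pin);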

> 
> > +			struct irq_data *d = &desc->irq_data;
> > +
> > +			if (!irqd_is_level_type(d))
> > +				irq_set_irqchip_state(d->irq,
> > +						IRQCHIP_STATE_PENDING, true);
> > +
> > +		}
> > +	}
> > +
> > +	return IRQ_HANDLED;
> > +}
> > +
> > +static int qcom_mpm_enter_sleep(struct qcom_mpm_priv *priv)
> > +{
> > +	int i, ret;
> > +
> > +	for (i = 0; i < priv->reg_stride; i++)
> > +		qcom_mpm_write(priv, MPM_REG_STATUS, i, 0);
> > +
> > +	/* Notify RPM to write vMPM into HW */
> 
> What do you mean by 'into HW'? We just did that, right? or are these
> registers just fake and most of the stuff is in the RPM?

I have a note about this in the commit log.

- All the register settings are done by APSS on an internal memory
  region called vMPM, and RPM will flush them into hardware after it
  receives a mailbox/IPC notification from APSS.

So yes, these registers are fake/virtual in memory, and RPM will
actually flush the values into the MPM hardware block.
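
In rough terms, the vMPM accessors just read/write that shared memory
image; something along these lines (the vbase field name and the offset
calculation below are illustrative, not the exact driver code):

/*
 * Illustrative sketch only: APSS updates the vMPM image in a shared
 * memory region; RPM flushes it into the real MPM block once it gets
 * the mailbox/IPC notification.
 */
static void qcom_mpm_write(struct qcom_mpm_priv *priv, unsigned int reg,
			   unsigned int index, u32 val)
{
	unsigned int offset = (reg * priv->reg_stride + index) * sizeof(u32);

	writel_relaxed(val, priv->vbase + offset);
	/* Make sure the vMPM image is updated before RPM is notified */
	wmb();
}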

> 
> > +	ret = mbox_send_message(priv->mbox_chan, NULL);
> > +	if (ret < 0)
> > +		return ret;
> > +
> > +	return 0;
> > +}
> > +
> > +static int qcom_mpm_cpu_pm_callback(struct notifier_block *nb,
> > +				    unsigned long action, void *data)
> > +{
> > +	struct qcom_mpm_priv *priv = container_of(nb, struct qcom_mpm_priv,
> > +						  pm_nb);
> > +	int ret = NOTIFY_OK;
> > +	int cpus_in_pm;
> > +
> > +	switch (action) {
> > +	case CPU_PM_ENTER:
> > +		cpus_in_pm = atomic_inc_return(&priv->cpus_in_pm);
> > +		/*
> > +		 * NOTE: comments for num_online_cpus() point out that it's
> > +		 * only a snapshot so we need to be careful. It should be OK
> > +		 * for us to use, though.  It's important for us not to miss
> > +		 * if we're the last CPU going down so it would only be a
> > +		 * problem if a CPU went offline right after we did the check
> > +		 * AND that CPU was not idle AND that CPU was the last non-idle
> > +		 * CPU. That can't happen. CPUs would have to come out of idle
> > +		 * before the CPU could go offline.
> > +		 */
> > +		if (cpus_in_pm < num_online_cpus())
> > +			return NOTIFY_OK;
> > +		break;
> 
> I'm really not keen on this sort of tracking in an irqchip driver.
> Nobody else needs this. This is also copy-pasted (without even
> mentioning it) from rpmh_rsc_cpu_pm_callback(). Why is that logic
> needed twice, given that the RPM is also involved is this sequence?

Yes, this is copy-pasted from rpmh-rsc, and I should have mentioned
that.  But rpmh-rsc is a driver specific to RPM-Hardened (RPMH), which
is found on Qualcomm high-end SoCs like SDM845 and SC7180, and on those
RPMH-based SoCs the PDC acts as the wake-up interrupt controller.
Mid-range SoCs like SDM660 and QCM2290 run a different combination,
namely RPM + MPM.  Note that RPMH and RPM are two similar but separate
blocks.

What we are implementing is for RPM + MPM based SoCs, and rpmh-rsc is
irrelevant here.  The same cpu_pm tracking logic can be reused though.
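
For context, the driver just registers this callback through the
standard cpu_pm API at probe time, roughly like this (sketch, error
handling trimmed):

	/*
	 * Hook the notifier into the cpu_pm chain so that the last CPU
	 * entering idle can flush vMPM and notify RPM.
	 */
	priv->pm_nb.notifier_call = qcom_mpm_cpu_pm_callback;
	ret = cpu_pm_register_notifier(&priv->pm_nb);
	if (ret)
		return ret;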

> 
> > +	case CPU_PM_ENTER_FAILED:
> > +	case CPU_PM_EXIT:
> > +		atomic_dec(&priv->cpus_in_pm);
> > +		return NOTIFY_OK;
> > +	default:
> > +		return NOTIFY_DONE;
> > +	}
> > +
> > +	/*
> > +	 * It's likely we're on the last CPU. Grab the lock and write MPM for
> > +	 * sleep. Grabbing the lock means that if we race with another CPU
> > +	 * coming up we are still guaranteed to be safe.
> > +	 */
> > +	if (raw_spin_trylock(&priv->lock)) {
> > +		if (qcom_mpm_enter_sleep(priv))
> > +			ret = NOTIFY_BAD;
> > +		raw_spin_unlock(&priv->lock);
> > +	} else {
> > +		/* Another CPU must be up */
> > +		return NOTIFY_OK;
> > +	}
> > +
> > +	if (ret == NOTIFY_BAD) {
> > +		/* Double-check if we're here because someone else is up */
> > +		if (cpus_in_pm < num_online_cpus())
> > +			ret = NOTIFY_OK;
> > +		else
> > +			/* We won't be called w/ CPU_PM_ENTER_FAILED */
> > +			atomic_dec(&priv->cpus_in_pm);
> > +	}
> > +
> > +	return ret;
> > +}

[...]

> > +/*
> > + * The mapping between MPM_GIC pin and GIC SPI number on QCM2290.  It's taken
> > + * from the downstream qcom-mpm-scuba.c, with a small transform on the GIC
> > + * SPI numbers (the second column).  Due to the binding difference from
> > + * downstream, where GIC SPI numbering starts from 32, the numbering here
> > + * starts from 0, which is why each number is 32 less than its downstream
> > + * counterpart.
> 
> This doesn't answer my earlier question: is this list complete? Or is
> it likely to change?

Yes, it's complete for QCM2290.

> Also, why is that in not in the device tree,
> given that it is likely to change from one SoC to another?

Yes, this is SoC-specific data and will differ from one SoC to
another.  Different subsystem maintainers have different views on such
SoC data: some think it belongs in the kernel/driver and can be matched
using a SoC-specific compatible, while others prefer to have it in DT.

As I mentioned in the commit log, this SoC data actually consists of two
parts:

 1) Map of MPM_GIC pin <--> GIC interrupt number
 2) Map of MPM_GPIO pin <--> GPIO number

Since we already have map 2) defined in the pinctrl driver rather than
in DT, I put map 1) in the MPM driver.
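
To make it concrete, map 1) is just consumed as a simple table lookup
when translating an MPM pin to its GIC SPI, roughly as below (the
struct field names are assumed for illustration):

/*
 * Illustrative only: walk the per-SoC pin map (terminated by { -1 })
 * and return the GIC SPI number for a given MPM pin, or -ENOENT if
 * the pin has no GIC mapping.
 */
static int qcom_mpm_pin_to_spi(const struct mpm_pin *map, unsigned int pin)
{
	int i;

	for (i = 0; map[i].pin >= 0; i++)
		if (map[i].pin == pin)
			return map[i].hwirq;

	return -ENOENT;
}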

Shawn

> 
> > + */
> > +const struct mpm_pin qcm2290_gic_pins[] = {
> > +	{ 2, 275 },	/* tsens0_tsens_upper_lower_int */
> > +	{ 5, 296 },	/* lpass_irq_out_sdc */
> > +	{ 12, 422 },	/* b3_lfps_rxterm_irq */
> > +	{ 24, 79 },	/* bi_px_lpi_1_aoss_mx */
> > +	{ 86, 183 },	/* mpm_wake,spmi_m */
> > +	{ 90, 260 },	/* eud_p0_dpse_int_mx */
> > +	{ 91, 260 },	/* eud_p0_dmse_int_mx */
> > +	{ -1 },
> > +};
> > +
> > +const struct mpm_data qcm2290_data = {
> > +	.pin_num = 96,
> > +	.gic_pins = qcm2290_gic_pins,
> > +};
> > +
> > +static int qcm2290_mpm_init(struct platform_device *pdev,
> > +			    struct device_node *parent)
> > +{
> > +	return qcom_mpm_init(pdev, parent, &qcm2290_data);
> > +}
> > +
> > +IRQCHIP_PLATFORM_DRIVER_BEGIN(qcom_mpm)
> > +IRQCHIP_MATCH("qcom,qcm2290-mpm", qcm2290_mpm_init)
> > +IRQCHIP_PLATFORM_DRIVER_END(qcom_mpm)
> > +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. MSM Power Manager");
> > +MODULE_LICENSE("GPL v2");
