Message-ID: <20250224131215.slcrh3czyl27zhya@thinkpad>
Date: Mon, 24 Feb 2025 18:42:15 +0530
From: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
To: "Havalige, Thippeswamy" <thippeswamy.havalige@....com>
Cc: "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
	"lpieralisi@...nel.org" <lpieralisi@...nel.org>,
	"kw@...ux.com" <kw@...ux.com>, "robh@...nel.org" <robh@...nel.org>,
	"krzk+dt@...nel.org" <krzk+dt@...nel.org>,
	"conor+dt@...nel.org" <conor+dt@...nel.org>,
	"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"devicetree@...r.kernel.org" <devicetree@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"Simek, Michal" <michal.simek@....com>,
	"Gogada, Bharat Kumar" <bharat.kumar.gogada@....com>,
	"jingoohan1@...il.com" <jingoohan1@...il.com>
Subject: Re: [PATCH v14 3/3] PCI: amd-mdb: Add AMD MDB Root Port driver

On Mon, Feb 24, 2025 at 11:05:19AM +0000, Havalige, Thippeswamy wrote:

[...]

> > +#define AMD_MDB_TLP_IR_STATUS_MISC		0x4C0
> > +#define AMD_MDB_TLP_IR_MASK_MISC		0x4C4
> > +#define AMD_MDB_TLP_IR_ENABLE_MISC		0x4C8
> > +#define AMD_MDB_TLP_IR_DISABLE_MISC		0x4CC
> > +
> > +#define AMD_MDB_TLP_PCIE_INTX_MASK	GENMASK(23, 16)
> > +
> > +#define AMD_MDB_PCIE_INTR_INTX_ASSERT(x)	BIT((x) * 2)
> 
> How do these values correspond to AMD_MDB_TLP_PCIE_INTX_MASK? These values
> could be 0, 2, 4 and 6, corresponding to 0b01010101? Looks weird.

I don't know whether this is a mailer issue or your workflow, but my review
comments appear to be copy-pasted into your reply, which makes it hard to
distinguish my previous comments from your responses.

Please fix it.

> Thank you for reviewing. Yes, the bits in the status/enable/disable
> registers are laid out this way.
> 
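
Just to spell out my reading of the macros above (my own decoding of the bit
layout, so please correct me if it is wrong):

	int_status = FIELD_GET(AMD_MDB_TLP_PCIE_INTX_MASK, val);  /* bits 23:16 */

	INTA asserted -> int_status bit 0	/* BIT(0 * 2) */
	INTB asserted -> int_status bit 2	/* BIT(1 * 2) */
	INTC asserted -> int_status bit 4	/* BIT(2 * 2) */
	INTD asserted -> int_status bit 6	/* BIT(3 * 2) */

	all four asserted -> int_status == 0b01010101
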
> > +
> > +/* Interrupt registers definitions */
> > +#define AMD_MDB_PCIE_INTR_CMPL_TIMEOUT		15
> > +#define AMD_MDB_PCIE_INTR_INTX			16
> > +#define AMD_MDB_PCIE_INTR_PM_PME_RCVD		24
> 
> 
> > +static inline u32 pcie_read(struct amd_mdb_pcie *pcie, u32 reg)
> > +{
> > +	return readl_relaxed(pcie->slcr + reg);
> > +}
> 
> I think I already commented on these helpers. Please get rid of them; I
> don't see any value in them for this new driver. Moreover, the 'inline'
> keyword is not required.
> Thanks for the review. While I agree to remove 'inline', I would like to
> keep the pcie_read/pcie_write APIs. Could you please clarify the reason for
> removing pcie_read/pcie_write here? If I remove pcie_read/pcie_write, it
> will require changes in multiple places throughout the code.

What value do the helpers really add? They just wrap the {readl/writel}_relaxed
calls, and on top of that they hide which kind of I/O accessor is being used.
So I don't see any value in keeping them.
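
For example, just open-code the accessors in the callers, roughly (a sketch
only, reusing the register names from your patch):

	val = readl_relaxed(pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);
	val &= ~BIT(d->hwirq);
	writel_relaxed(val, pcie->slcr + AMD_MDB_TLP_IR_STATUS_MISC);

That keeps the accessor type visible at the call site.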

> 
> > +
> > +static inline void pcie_write(struct amd_mdb_pcie *pcie,
> > +			      u32 val, u32 reg)
> > +{
> > +	writel_relaxed(val, pcie->slcr + reg);
> > +}
> > +
> > +static const struct irq_domain_ops amd_intx_domain_ops = {
> > +	.map = amd_mdb_pcie_intx_map,
> > +};
> > +
> > +static irqreturn_t dw_pcie_rp_intx_flow(int irq, void *args)
> 
> What does the _flow suffix mean?
> Thanks for reviewing. The _flow suffix in dw_pcie_rp_intx_flow indicates
> that the function handles the flow of INTx interrupt processing for the
> PCIe Root Port, dispatching through generic_handle_domain_irq.
> 

(Please wrap your replies to 80 columns.)

So it is just a regular interrupt handler. I don't see the need for the _flow
suffix.

> > +{
> > +	struct amd_mdb_pcie *pcie = args;
> > +	unsigned long val;
> > +	int i, int_status;
> > +
> > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> 
> You don't need port->lock here?
> Thank you for reviewing. In this case, I'm simply reading the status of the
> INTx bits without modifying any registers. Since no shared resources are
> being updated or accessed concurrently, there's no need for a lock here.
> 

What if the handler and the mask/unmask functions are executed on different
CPUs? Sharing the same register without a lock makes me nervous. The usual
locking principle is that you protect both the read and the write.
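
For example (a sketch only, reusing the names from your patch), the handler
could take the same lock you already use in amd_mdb_event_irq_mask():

	struct dw_pcie_rp *port = &pcie->pci.pp;

	raw_spin_lock(&port->lock);
	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
	raw_spin_unlock(&port->lock);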

> 
> > +	int_status = FIELD_GET(AMD_MDB_TLP_PCIE_INTX_MASK, val);
> 
> You don't need to ack/clear the IRQ?
> Thank you for reviewing. In this case, I am using IRQ domains, and
> generic_handle_domain_irq() will invoke the necessary irq_mask and
> irq_unmask operations internally, which take care of clearing the IRQ.
> 

Ok.

> > +
> > +	for (i = 0; i < PCI_NUM_INTX; i++) {
> > +		if (int_status & AMD_MDB_PCIE_INTR_INTX_ASSERT(i))
> > +			generic_handle_domain_irq(pcie->intx_domain, i);
> > +	}
> > +
> > +	return IRQ_HANDLED;
> > +}
> > +
> > +static void amd_mdb_event_irq_mask(struct irq_data *d)
> > +{
> > +	struct amd_mdb_pcie *pcie = irq_data_get_irq_chip_data(d);
> > +	struct dw_pcie *pci = &pcie->pci;
> > +	struct dw_pcie_rp *port = &pci->pp;
> > +	u32 val;
> > +
> > +	raw_spin_lock(&port->lock);
> > +	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
> 
> This register is accessed in the IRQ handler also. So don't you need raw_spin_lock_irq{save/restore}? 
> Thank you for reviewing. In the handler I am just reading the status and
> calling the irq_mask/irq_unmask APIs, so I don't need the irqsave/irqrestore
> variants here.
> 

Same as above.
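
i.e. since the same register is also touched in the handler, I would expect
something like this (a sketch only, based on the code above):

	unsigned long flags;

	raw_spin_lock_irqsave(&port->lock, flags);
	val = pcie_read(pcie, AMD_MDB_TLP_IR_STATUS_MISC);
	val &= ~BIT(d->hwirq);
	pcie_write(pcie, val, AMD_MDB_TLP_IR_STATUS_MISC);
	raw_spin_unlock_irqrestore(&port->lock, flags);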

> > +	val &= ~BIT(d->hwirq);
> > +	pcie_write(pcie, val, AMD_MDB_TLP_IR_STATUS_MISC);
> > +	raw_spin_unlock(&port->lock);
> > +}
> > +

[...]

> > +	for (i = 0; i < ARRAY_SIZE(intr_cause); i++) {
> > +		if (!intr_cause[i].str)
> > +			continue;
> > +		irq = irq_create_mapping(pcie->mdb_domain, i);
> > +		if (!irq) {
> > +			dev_err(dev, "Failed to map mdb domain interrupt\n");
> > +			return -ENOMEM;
> > +		}
> > +		err = devm_request_irq(dev, irq, amd_mdb_pcie_intr_handler,
> > +				       IRQF_SHARED | IRQF_NO_THREAD,
> > +				       intr_cause[i].sym, pcie);
> 
> Aren't these IRQs just part of a single IRQ? I'm wondering why you need to
> represent them individually instead of having a single IRQ handler.
> 
> Btw, you are not disposing these IRQs anywhere. Especially in error paths.
> Thank you for reviewing. This code is based on the work authored by Marc
> Zyngier and Bjorn during the development of our CPM drivers, and it follows
> the same design principles. The individual IRQ handling is consistent with
> that approach. For reference, you can review the driver here:
> pcie-xilinx-cpm.c. All of your comments have been incorporated into this
> driver as requested.
> 

Ok for the separate IRQ question. But you still need to dispose of the IRQ
mappings in the error path.
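
For example (a sketch only, not a complete unwind), the failure branch above
would at least need something like:

		if (err) {
			dev_err(dev, "Failed to request IRQ %d\n", irq);
			irq_dispose_mapping(irq);
			return err;
		}

and the mappings created in earlier loop iterations also need to be disposed
of before returning.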

> > +		if (err) {
> > +			dev_err(dev, "Failed to request IRQ %d\n", irq);
> > +			return err;
> > +		}
> > +	}
> > +
> > +	pcie->intx_irq = irq_create_mapping(pcie->mdb_domain,
> > +					    AMD_MDB_PCIE_INTR_INTX);
> > +	if (!pcie->intx_irq) {
> > +		dev_err(dev, "Failed to map INTx interrupt\n");
> > +		return -ENXIO;
> > +	}
> > +
> > +	err = devm_request_irq(dev, pcie->intx_irq,
> > +			       dw_pcie_rp_intx_flow,
> > +			       IRQF_SHARED | IRQF_NO_THREAD, NULL, pcie);
> > +	if (err) {
> > +		dev_err(dev, "Failed to request INTx IRQ %d\n", irq);
> > +		return err;
> > +	}
> > +
> > +	/* Plug the main event handler */
> > +	err = devm_request_irq(dev, pp->irq, amd_mdb_pcie_event_flow,
> > +			       IRQF_SHARED | IRQF_NO_THREAD, "amd_mdb pcie_irq",
> 
> Why is this a SHARED IRQ?
> Thank you for reviewing. The IRQ is shared because all the events, errors,
> and INTx interrupts are routed through the same IRQ line, so multiple
> handlers need to be able to respond to the same interrupt.

IIUC, you have a single handler for this IRQ, and that handler invokes the
other handlers (for events, INTx, etc.). So I don't see how this IRQ is
shared.

IRQF_SHARED is only required when multiple handlers share the same IRQ line,
and that is not the case here afaics.
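
i.e. unless some other driver is expected to request the same line, this could
simply be (a sketch only, assuming the data argument is 'pcie' as in the other
requests):

	err = devm_request_irq(dev, pp->irq, amd_mdb_pcie_event_flow,
			       IRQF_NO_THREAD, "amd_mdb pcie_irq", pcie);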

- Mani

-- 
மணிவண்ணன் சதாசிவம்
