Message-ID: <BYAPR02MB5559496A131BB88E98D9C3C3A5D80@BYAPR02MB5559.namprd02.prod.outlook.com>
Date: Thu, 16 Apr 2020 07:07:28 +0000
From: Bharat Kumar Gogada <bharatku@...inx.com>
To: "lorenzo.pieralisi@....com" <lorenzo.pieralisi@....com>,
"maz@...nel.org" <maz@...nel.org>
CC: "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
Ravikiran Gummaluri <rgummal@...inx.com>
Subject: RE: [PATCH v5 2/2] PCI: xilinx-cpm: Add Versal CPM Root Port driver
> Subject: RE: [PATCH v5 2/2] PCI: xilinx-cpm: Add Versal CPM Root Port driver
>
> > Subject: RE: [PATCH v5 2/2] PCI: xilinx-cpm: Add Versal CPM Root Port driver
> >
> > > Subject: Re: [PATCH v5 2/2] PCI: xilinx-cpm: Add Versal CPM Root Port driver
> > >
> > > [+MarcZ, FYI]
> > >
> > > On Tue, Feb 25, 2020 at 02:39:56PM +0000, Bharat Kumar Gogada wrote:
> > >
> > > [...]
> > >
> > > > > > +/* ECAM definitions */
> > > > > > +#define ECAM_BUS_NUM_SHIFT 20
> > > > > > +#define ECAM_DEV_NUM_SHIFT 12
> > > > >
> > > > > You don't need these ECAM_* defines, you can use
> > > > > pci_generic_ecam_ops.
> > > > Does this need a separate ranges region for the ECAM space?
> > > > We have the ECAM and the controller space in the same region.
> > >
> > > You can create an ECAM window with pci_ecam_create, where *cfgres
> > > represents the ECAM area. I don't get what you mean by "same region".
> > >
> > > Do you mean "contiguous" ? Or something else ?
> > >
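
For reference, a rough sketch of what moving the config accessors to pci_generic_ecam_ops could look like (illustrative only: the helper name, the way cfgres is carved out of our single register region, and the probe flow are my assumptions, not final driver code):

#include <linux/err.h>
#include <linux/pci.h>
#include <linux/pci-ecam.h>

/*
 * Sketch: "cfgres" would be carved out of the existing controller "reg"
 * region at the ECAM offset, "busres" comes from the bus-range property.
 */
static int xilinx_cpm_pcie_setup_ecam(struct device *dev,
				      struct resource *cfgres,
				      struct resource *busres,
				      struct pci_host_bridge *bridge)
{
	struct pci_config_window *cfg;

	cfg = pci_ecam_create(dev, cfgres, busres, &pci_generic_ecam_ops);
	if (IS_ERR(cfg))
		return PTR_ERR(cfg);

	bridge->sysdata = cfg;
	bridge->ops = (struct pci_ops *)&pci_generic_ecam_ops.pci_ops;

	return 0;
}

That would drop the ECAM_*_SHIFT defines and the open-coded config-space mapping entirely.
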
> > > > > > +
> > > > > > +/**
> > > > > > + * struct xilinx_cpm_pcie_port - PCIe port information
> > > > > > + * @reg_base: Bridge Register Base
> > > > > > + * @cpm_base: CPM System Level Control and Status Register(SLCR) Base
> > > > > > + * @irq: Interrupt number
> > > > > > + * @root_busno: Root Bus number
> > > > > > + * @dev: Device pointer
> > > > > > + * @leg_domain: Legacy IRQ domain pointer
> > > > > > + * @irq_misc: Legacy and error interrupt number
> > > > > > + */
> > > > > > +struct xilinx_cpm_pcie_port {
> > > > > > +	void __iomem *reg_base;
> > > > > > +	void __iomem *cpm_base;
> > > > > > +	u32 irq;
> > > > > > +	u8 root_busno;
> > > > > > +	struct device *dev;
> > > > > > +	struct irq_domain *leg_domain;
> > > > > > +	int irq_misc;
> > > > > > +};
> > > > > > +
> > > > > > +static inline u32 pcie_read(struct xilinx_cpm_pcie_port *port, u32 reg)
> > > > > > +{
> > > > > > +	return readl(port->reg_base + reg);
> > > > > > +}
> > > > > > +
> > > > > > +static inline void pcie_write(struct xilinx_cpm_pcie_port *port,
> > > > > > +			      u32 val, u32 reg)
> > > > > > +{
> > > > > > +	writel(val, port->reg_base + reg);
> > > > > > +}
> > > > > > +
> > > > > > +static inline bool cpm_pcie_link_up(struct xilinx_cpm_pcie_port *port)
> > > > > > +{
> > > > > > +	return (pcie_read(port, XILINX_CPM_PCIE_REG_PSCR) &
> > > > > > +		XILINX_CPM_PCIE_REG_PSCR_LNKUP) ? 1 : 0;
> > > > >
> > > > > u32 val = pcie_read(port, XILINX_CPM_PCIE_REG_PSCR);
> > > > >
> > > > > return val & XILINX_CPM_PCIE_REG_PSCR_LNKUP;
> > > > >
> > > > > And this function call is not that informative anyway - it is
> > > > > used just to print a log whose usefulness is questionable.
> > > > We need this logging information; customers use it when debugging
> > > > link-down failures.
> > >
> > > Out of curiosity, to do what ?
> > >
> > > [...]
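
Just to confirm I read the simplification right, the link-up helper (same XILINX_CPM_PCIE_REG_PSCR* defines as in the patch) would reduce to:

static inline bool cpm_pcie_link_up(struct xilinx_cpm_pcie_port *port)
{
	u32 val = pcie_read(port, XILINX_CPM_PCIE_REG_PSCR);

	return val & XILINX_CPM_PCIE_REG_PSCR_LNKUP;
}

and its only caller would keep printing the link state at probe time.
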
> > >
> > > > > > +/**
> > > > > > + * xilinx_cpm_pcie_intx_map - Set the handler for the INTx and mark IRQ as valid
> > > > > > + * @domain: IRQ domain
> > > > > > + * @irq: Virtual IRQ number
> > > > > > + * @hwirq: HW interrupt number
> > > > > > + *
> > > > > > + * Return: Always returns 0.
> > > > > > + */
> > > > > > +static int xilinx_cpm_pcie_intx_map(struct irq_domain *domain,
> > > > > > +				    unsigned int irq, irq_hw_number_t hwirq)
> > > > > > +{
> > > > > > +	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_simple_irq);
> > > > >
> > > > > INTX are level IRQs, the flow handler must be handle_level_irq.
> > > > Accepted, will change.
> > > > >
> > > > > > +	irq_set_chip_data(irq, domain->host_data);
> > > > > > +	irq_set_status_flags(irq, IRQ_LEVEL);
> > > > >
> > > > > The way INTX are handled in this patch is wrong. You must set up
> > > > > a chained IRQ with the appropriate flow handler. The current code
> > > > > uses an IRQ action; that's an IRQ layer violation, and it goes
> > > > > without saying that it is almost certainly broken.
> > > > In our controller the same IRQ line is used for controller errors and
> > > > legacy (INTx) interrupts. We have two cases here: error interrupts are
> > > > self-consumed by the controller, while legacy interrupts are flow
> > > > handled. It is not INTx handling alone on this IRQ line. So can a
> > > > chained IRQ be used for self-consumed interrupts too?
> > >
> > > No. In this specific case neither solution is satisfying; we need
> > > to give it some thought. I will talk to Marc (CC'ed) to find the
> > > best option here going forward.
> > >
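
While waiting for input, here is a rough sketch of what a chained INTx setup could look like on our side. The XILINX_CPM_PCIE_REG_IDRN register name and XILINX_CPM_PCIE_IDRN_SHIFT below are placeholders, not the actual CPM register layout, and the error interrupts that the controller self-consumes on the same line are exactly the part this does not cover:

#include <linux/bitops.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/pci.h>

static int xilinx_cpm_pcie_intx_map(struct irq_domain *domain,
				    unsigned int irq, irq_hw_number_t hwirq)
{
	/* Level flow handler, as requested in the review */
	irq_set_chip_and_handler(irq, &dummy_irq_chip, handle_level_irq);
	irq_set_chip_data(irq, domain->host_data);
	irq_set_status_flags(irq, IRQ_LEVEL);

	return 0;
}

static void xilinx_cpm_pcie_intx_flow(struct irq_desc *desc)
{
	struct xilinx_cpm_pcie_port *port = irq_desc_get_handler_data(desc);
	struct irq_chip *chip = irq_desc_get_chip(desc);
	unsigned long status;
	int bit;

	chained_irq_enter(chip, desc);

	/* Placeholder register/shift for the pending INTx bits */
	status = pcie_read(port, XILINX_CPM_PCIE_REG_IDRN) >>
		 XILINX_CPM_PCIE_IDRN_SHIFT;

	for_each_set_bit(bit, &status, PCI_NUM_INTX)
		generic_handle_irq(irq_find_mapping(port->leg_domain, bit));

	chained_irq_exit(chip, desc);
}

static void xilinx_cpm_pcie_init_intx_chain(struct xilinx_cpm_pcie_port *port)
{
	/* port->irq is the shared legacy/error line from the DT */
	irq_set_chained_handler_and_data(port->irq,
					 xilinx_cpm_pcie_intx_flow, port);
}

The open question above remains how the self-consumed error interrupts on the same line fit into this model.
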
> > Hi Marc,
> >
> > Can you please provide your inputs on this case?
> >
> Hi Marc,
>
> Can you please provide the required inputs on this?
>
Hi Lorenzo,
Since Marc hasn't responded, do you have any inputs on this?
Shall I proceed with addressing your other comments?
Regards,
Bharat