Message-ID: <YrOjQ3IkGZpe1lpN@iweiny-desk3>
Date:   Wed, 22 Jun 2022 16:18:27 -0700
From:   Ira Weiny <ira.weiny@...el.com>
To:     Dan Williams <dan.j.williams@...el.com>
CC:     Bjorn Helgaas <bhelgaas@...gle.com>,
        Jonathan Cameron <Jonathan.Cameron@...wei.com>,
        Ben Widawsky <bwidawsk@...nel.org>,
        "Alison Schofield" <alison.schofield@...el.com>,
        Vishal Verma <vishal.l.verma@...el.com>,
        Dave Jiang <dave.jiang@...el.com>,
        <linux-kernel@...r.kernel.org>, <linux-cxl@...r.kernel.org>,
        <linux-pci@...r.kernel.org>
Subject: Re: [PATCH V11 4/8] cxl/pci: Create PCI DOE mailbox's for memory
 devices

On Tue, Jun 21, 2022 at 11:29:35AM -0700, Ira wrote:
> On Fri, Jun 17, 2022 at 04:44:27PM -0700, Dan Williams wrote:
> > ira.weiny@ wrote:
> 
> [snip]
> 
> > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > > index 60d10ee1e7fc..4d2764b865ab 100644
> > > --- a/drivers/cxl/cxlmem.h
> > > +++ b/drivers/cxl/cxlmem.h
> > > @@ -191,6 +191,8 @@ struct cxl_endpoint_dvsec_info {
> > >   * @component_reg_phys: register base of component registers
> > >   * @info: Cached DVSEC information about the device.
> > >   * @serial: PCIe Device Serial Number
> > > + * @doe_mbs: PCI DOE mailbox array
> > > + * @num_mbs: Number of DOE mailboxes
> > >   * @mbox_send: @dev specific transport for transmitting mailbox commands
> > >   *
> > >   * See section 8.2.9.5.2 Capacity Configuration and Label Storage for
> > > @@ -224,6 +226,10 @@ struct cxl_dev_state {
> > >  	resource_size_t component_reg_phys;
> > >  	u64 serial;
> > >  
> > > +	bool doe_use_irq;
> > 
> > Don't pass temporary state through a long lived data structure. Just
> > pass flag by reference between the functions that want to coordinate
> > this.
> 
> Done.
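
FWIW, the reworked shape comes out roughly like the below.  This is just a
sketch rather than the final patch, and cxl_doe_count_irqs() /
cxl_doe_create_mbs() are made-up names:

static int cxl_pci_setup_doe(struct cxl_dev_state *cxlds)
{
	bool use_irq = false;
	int rc;

	/* first pass decides whether interrupts are usable at all */
	rc = cxl_doe_count_irqs(cxlds, &use_irq);
	if (rc)
		return rc;

	/* second pass consumes the decision; nothing lands in cxlds */
	return cxl_doe_create_mbs(cxlds, use_irq);
}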
> 
> [snip]
> 
> > > +
> > > +static void cxl_alloc_irq_vectors(struct cxl_dev_state *cxlds)
> > > +{
> > > +	struct device *dev = cxlds->dev;
> > > +	struct pci_dev *pdev = to_pci_dev(dev);
> > > +	int max_irqs = 0;
> > > +	int off = 0;
> > > +	int rc;
> > > +
> > > +	/* Account for all the DOE vectors needed */
> > > +	pci_doe_for_each_off(pdev, off) {
> > > +		int irq = pci_doe_get_irq_num(pdev, off);
> > > +
> > > +		if (irq < 0)
> > > +			continue;
> > > +		max_irqs = max(max_irqs, irq + 1);
> > 
> > This seems to assume that different DOEs will get independent vectors.
> > The driver needs to be prepared for DOE instances, Event notifications,
> > and mailbox commands to share a single MSI vector in the worst case.
> > Lets focus on polled mode DOE, or explicitly only support interrupt
> > based operation when no vector sharing is detected.
> > 
> 
> Ok I see now.  I was under the impression they had to be unique.
> 
> Do you think it is sufficient to check in this loop for duplicates and bail if
> any are shared?
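
(For the archive, the duplicate check I had in mind was something like the
below -- untested, and PCI_DOE_MAX_IRQS is a made-up bound:)

	DECLARE_BITMAP(seen, PCI_DOE_MAX_IRQS);

	bitmap_zero(seen, PCI_DOE_MAX_IRQS);
	pci_doe_for_each_off(pdev, off) {
		int irq = pci_doe_get_irq_num(pdev, off);

		if (irq < 0)
			continue;
		/* two DOEs sharing a vector: bail and stay in polled mode */
		if (test_and_set_bit(irq, seen))
			return;
		max_irqs = max(max_irqs, irq + 1);
	}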

I'm still removing the IRQ code from the CXL layer, but I had to look a bit
deeper at this for my own knowledge.

I don't think shared interrupt numbers are a problem, because the
pci_request_irq() used within pci_doe_create_mb() specifies IRQF_SHARED.

drivers/pci/irq.c:

int pci_request_irq(struct pci_dev *dev, unsigned int nr, irq_handler_t handler,
                irq_handler_t thread_fn, void *dev_id, const char *fmt, ...)
{
...
        unsigned long irqflags = IRQF_SHARED;
...

So I think this would work even with shared vectors, right?
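
(The usual caveat with IRQF_SHARED being that every handler on the vector
has to check whether its own device asserted the interrupt -- sketched
below, with doe_mb_irq_pending() standing in for whatever the real status
check would be:)

static irqreturn_t doe_irq_handler(int irq, void *data)
{
	struct pci_doe_mb *doe_mb = data;

	/* doe_mb_irq_pending() is a made-up status check */
	if (!doe_mb_irq_pending(doe_mb))
		return IRQ_NONE;	/* not ours, let the next handler run */

	/* otherwise ack it and kick the DOE state machine */
	return IRQ_HANDLED;
}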

Regardless, setting up the CXL/PCI IRQs is a bit of a mess, so I'm still
going to remove the IRQ code from the CXL layer.  But I think it is safe to
leave the IRQ code in pci/doe.c for others to use.

Ira
