Message-ID: <20150518135348.GA14810@peter-bsd.cuba.int>
Date: Mon, 18 May 2015 16:53:48 +0300
From: "'p.kosyh@...il.com'" <p.kosyh@...il.com>
To: David Laight <David.Laight@...LAB.COM>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: __assign_irq_vector (x86) and irq vectors exhaust
Yes, the allocation of vectors during probing is not a problem by itself. The
problem is described in the bottom part of my message: we fill CPUs one
after another, one by one.
For example, say we have 200 irqs and the irq domain mask is 0xffffffff (no numa, or
1 numa node with 32 cpus). While probing devices, cpu0 will get all 200 irq slots.
A better solution would be to fill the cpus randomly (or round-robin).
Sorry for my ugly English.
> > For example, we have a 32 cpu system with a lot of 10Gb cards (each of
> > them has 32 msi-x irqs). Even if a card is not used, it allocates its irq
> > vectors after probing (pci_enable_msix()). We have a limit of about 200
> > vectors per cpu (on x86), and __assign_irq_vector allocates them filling
> > cpus one by one (see cpumask_first_and()):
> ...
>
> It might help if the kernel APIs allowed a driver to request additional
> MSI-X interrupts after probe time.
>
> If a device supports 32 interrupts the driver can say that it only
> needs (say) interrupts 0, 1 and 16 (and only these MSIX table slots
> get filled with interrupt 'info') - but can't later allocate the
> MSIX info for other interrupts.
>
> I can't see anything in the MSIX spec that stops things working
> that way.
>
> David
>
--
Peter Kosyh