Message-ID: <959997ee-f393-bab0-45c0-4144c37b9185@redhat.com>
Date: Mon, 26 Oct 2020 18:22:29 -0400
From: Nitesh Narayan Lal <nitesh@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Jacob Keller <jacob.e.keller@...el.com>,
Marcelo Tosatti <mtosatti@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>, helgaas@...nel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-pci@...r.kernel.org, intel-wired-lan@...ts.osuosl.org,
frederic@...nel.org, sassmann@...hat.com,
jesse.brandeburg@...el.com, lihong.yang@...el.com,
jeffrey.t.kirsher@...el.com, jlelli@...hat.com, hch@...radead.org,
bhelgaas@...gle.com, mike.marciniszyn@...el.com,
dennis.dalessandro@...el.com, thomas.lendacky@....com,
jiri@...dia.com, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, lgoncalv@...hat.com,
Jakub Kicinski <kuba@...nel.org>
Subject: Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to housekeeping
CPUs
On 10/26/20 5:50 PM, Thomas Gleixner wrote:
> On Mon, Oct 26 2020 at 14:11, Jacob Keller wrote:
>> On 10/26/2020 1:11 PM, Thomas Gleixner wrote:
>>> On Mon, Oct 26 2020 at 12:21, Jacob Keller wrote:
>>>> Are there drivers which use more than one interrupt per queue? I know
>>>> drivers have multiple management interrupts.. and I guess some drivers
>>>> do combined 1 interrupt per pair of Tx/Rx.. It's also plausible to
>>>> have multiple queues for one interrupt .. I'm not sure how a single
>>>> queue with multiple interrupts would work though.
>>> For block there is always one interrupt per queue. Some Network drivers
>>> seem to have separate RX and TX interrupts per queue.
>> That's true when thinking of Tx and Rx as a single queue. Another way to
>> think about it is "one rx queue" and "one tx queue" each with their own
>> interrupt...
>>
>> Even if there are devices which force there to be exactly queue pairs,
>> you could still think of them as separate entities?
> Interesting thought.
>
> But as Jakub explained networking queues are fundamentally different
> from block queues on the RX side. For block the request issued on queue
> X will raise the complete interrupt on queue X.
>
> For networking the TX side will raise the TX interrupt on the queue on
> which the packet was queued, obviously, or should I say hopefully. :)
This is my impression as well.
> But incoming packets will be directed to some receive queue based on a
> hash or whatever crystallball logic the firmware decided to implement.
>
> Which makes this not really suitable for the managed interrupt and
> spreading approach which is used by block-mq. Hrm...
>
> But I still think that for curing that isolation stuff we want at least
> some information from the driver. An alternative solution would be to grant
> the allocation of interrupts and queues and have some sysfs knob to shut
> down queues at runtime. If that shutdown results in releasing the queue
> interrupt (via free_irq()) then the vector exhaustion problem goes away.
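
For illustration, the per-queue release/restore that such a shutdown would
trigger could look roughly like this. The struct and helper names below are
made up just to keep the sketch self-contained; only free_irq(),
request_irq(), pci_irq_vector() and the napi calls are existing kernel APIs,
so please read it as a sketch of the idea rather than a proposed interface:

#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

/* Made-up per-queue/per-device state, only here to keep the sketch complete. */
struct nic_queue {
	struct napi_struct napi;
	char irq_name[32];
	unsigned int vector_idx;	/* index into the MSI-X vector table */
	bool irq_active;
};

struct nic_priv {
	struct pci_dev *pdev;
	struct nic_queue *queues;
	unsigned int num_queues;
};

/* Hypothetical per-queue interrupt handler. */
static irqreturn_t nic_msix_handler(int irq, void *data)
{
	struct nic_queue *q = data;

	napi_schedule(&q->napi);
	return IRQ_HANDLED;
}

/* Shut a queue down and release its vector via free_irq(). */
static void nic_queue_irq_shutdown(struct nic_priv *nic, unsigned int qidx)
{
	struct nic_queue *q = &nic->queues[qidx];

	napi_disable(&q->napi);
	free_irq(pci_irq_vector(nic->pdev, q->vector_idx), q);
	q->irq_active = false;
}

/* Re-request the vector and bring the queue back online. */
static int nic_queue_irq_restore(struct nic_priv *nic, unsigned int qidx)
{
	struct nic_queue *q = &nic->queues[qidx];
	int ret;

	ret = request_irq(pci_irq_vector(nic->pdev, q->vector_idx),
			  nic_msix_handler, 0, q->irq_name, q);
	if (ret)
		return ret;

	q->irq_active = true;
	napi_enable(&q->napi);
	return 0;
}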
I think this is close to what Marcelo and I were discussing privately earlier
today.

I don't think there is currently a way to enable or disable a device's
interrupts from userspace.

In terms of the idea, I think we need something similar to what i40e does,
that is, shut down all IRQs when a CPU is suspended and restore the interrupt
scheme when the CPU comes back online.

The two key differences would be that this API needs to be generic and also
needs to be exposed to userspace through something like sysfs, as you have
mentioned.
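
To sketch what such a knob could look like (again with made-up names and a
made-up write format, nothing that exists today), a per-device sysfs attribute
could take a queue index and an on/off flag and call the hypothetical helpers
above:

#include <linux/device.h>
#include <linux/sysfs.h>

/*
 * Made-up sysfs knob: writing "<queue index> <0|1>" disables or re-enables
 * that queue's interrupt, releasing or re-requesting its vector.
 */
static ssize_t queue_irq_enable_store(struct device *dev,
				      struct device_attribute *attr,
				      const char *buf, size_t count)
{
	struct nic_priv *nic = dev_get_drvdata(dev);
	unsigned int qidx, enable;
	int ret;

	if (sscanf(buf, "%u %u", &qidx, &enable) != 2)
		return -EINVAL;
	if (qidx >= nic->num_queues)
		return -EINVAL;

	if (enable) {
		ret = nic_queue_irq_restore(nic, qidx);
		if (ret)
			return ret;
	} else {
		nic_queue_irq_shutdown(nic, qidx);
	}

	return count;
}
static DEVICE_ATTR_WO(queue_irq_enable);

Userspace (or an isolation-aware tool) could then write, say, "3 0" to a
hypothetical /sys/devices/.../queue_irq_enable file to shut queue 3 down and
release its vector, and "3 1" to bring it back.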
--
Thanks
Nitesh