Message-ID: <2db49a5f-5846-206c-d2bc-cfac96725946@caviumnetworks.com>
Date: Fri, 13 Jan 2017 09:37:53 -0800
From: David Daney <ddaney@...iumnetworks.com>
To: Marc Zyngier <marc.zyngier@....com>,
Thomas Gleixner <tglx@...utronix.de>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linus Walleij <linus.walleij@...aro.org>
Subject: Re: irq domain hierarchy vs. chaining w/ PCI MSI-X...
On 01/13/2017 08:15 AM, Marc Zyngier wrote:
> Thanks Linus for looping me in.
>
> On 12/01/17 22:35, David Daney wrote:
>> Hi Thomas,
>>
>> I am trying to figure out how to handle this situation:
>>
>>                   handle_level_irq()
>>                  +---------------+                handle_fasteoi_irq()
>>                  | PCIe hosted   |                  +-----------+    +-----+
>> --level_gpio---->| GPIO to MSI-X |--MSI_message--+->| gicv3-ITS |--->| CPU |
>>                  | widget        |               |  +-----------+    +-----+
>>                  +---------------+               |
>>                                                  |
>> +-------------------+                            |
>> | other PCIe device |---MSI_message--------------+
>> +-------------------+
>>
>>
>> The question is how to structure the interrupt handling. My initial
>> attempt was a chaining arrangement where the GPIO driver does
>> request_irq() for the appropriate MSI-X vector, and the handler calls
>> back into the irq system like this:
>>
>>
>> static irqreturn_t thunderx_gpio_chain_handler(int irq, void *dev)
>> {
>>         struct thunderx_irqdev *irqdev = dev;
>>         int chained_irq;
>>         int ret;
>>
>>         chained_irq = irq_find_mapping(irqdev->gpio->chip.irqdomain,
>>                                        irqdev->line);
>>         if (!chained_irq)
>>                 return IRQ_NONE;
>>
>>         ret = generic_handle_irq(chained_irq);
>>
>>         return ret ? IRQ_NONE : IRQ_HANDLED;
>> }
>>
>> Thus getting the proper GPIO irq_chip functions called to manage the
>> level triggering semantics.
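
For reference, a minimal sketch of how such a chain handler might be
registered against the MSI-X vector.  pci_irq_vector() and
request_irq() are the standard kernel interfaces; the
thunderx_gpio_setup_chain() name, the msix_index parameter and the
omission of error handling are assumptions made purely for
illustration:

#include <linux/interrupt.h>
#include <linux/pci.h>

static int thunderx_gpio_setup_chain(struct pci_dev *pdev,
                                     struct thunderx_irqdev *irqdev,
                                     unsigned int msix_index)
{
        /* MSI-X vector backing this GPIO line. */
        int msi_irq = pci_irq_vector(pdev, msix_index);

        if (msi_irq < 0)
                return msi_irq;

        /*
         * The MSI-X irq is the "parent"; the per-line GPIO irq is
         * looked up at handler time via irq_find_mapping() as shown
         * above.
         */
        return request_irq(msi_irq, thunderx_gpio_chain_handler, 0,
                           "thunderx-gpio", irqdev);
}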
>>
>> The drawback of this approach is that two irqs are then associated
>> with each GPIO line (the base MSI-X irq and the chained GPIO irq).
>> Also, there can be up to 80-100 of these widgets, so we potentially
>> consume twice that many irq numbers.
>>
>> It was suggested by Linus Walleij that using an irq domain hierarchy
>> might be a better idea. However, I cannot figure out how this might
>> work. The gicv3-ITS needs to use handle_fasteoi_irq(), and we need
>> handle_level_irq() for the level-triggered GPIO lines. Getting the
>> proper irq_chip functions called in a hierarchical configuration
>> doesn't seem doable given the heterogeneous flow handlers.
>
> My main worry here is that you're trying to handle something that is
> apparently a level interrupt using a transport that only handles edge
> interrupts. It means that each time you EOI an interrupt, you need to be
> able to resample the level and retrigger it if still high. Do you have a
> HW facility to do so? Or do you emulate this in SW?
Yes, the HW handles this:

The first thing the handle_level_irq() flow handler does is mask the
source.  After calling the handler, it then unmasks the source.

The act of unmasking in the HW causes another MSI to be emitted if the
level signal is still active.  This is what we want, as it ensures that
interrupts keep firing for as long as the signal is active, even though
the underlying mechanism uses edge-semantics MSIs.
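
To make that concrete, a minimal sketch of the mask/unmask irq_chip
callbacks involved.  The thunderx_gpio structure, its base pointer and
the GPIO_INTR_MASK_SET/GPIO_INTR_MASK_CLR register names are made-up
placeholders; the point is only that clearing the mask bit makes the
widget send a fresh MSI if the level is still asserted:

#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/irq.h>

static void thunderx_gpio_irq_mask(struct irq_data *d)
{
        struct thunderx_gpio *gpio = irq_data_get_irq_chip_data(d);

        /* Hypothetical per-line "mask set" register. */
        writeq(BIT(d->hwirq), gpio->base + GPIO_INTR_MASK_SET);
}

static void thunderx_gpio_irq_unmask(struct irq_data *d)
{
        struct thunderx_gpio *gpio = irq_data_get_irq_chip_data(d);

        /*
         * Hypothetical "mask clear" register.  If the GPIO level is
         * still active when this bit is cleared, the widget emits
         * another MSI, preserving level semantics on top of the
         * edge-based MSI transport.
         */
        writeq(BIT(d->hwirq), gpio->base + GPIO_INTR_MASK_CLR);
}

static struct irq_chip thunderx_gpio_irq_chip = {
        .name           = "thunderx-gpio",
        .irq_mask       = thunderx_gpio_irq_mask,
        .irq_unmask     = thunderx_gpio_irq_unmask,
};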
>
>> Can you think of a better way of structuring this than chaining from the
>> MSI-X handler as I outlined above?
>
> We already have similar horrors - see irq-mbigen.c which does exactly
> that. It creates IRQ domains spanning a bunch of MSIs allocated to that
> platform device. Once you have that, the rest is pretty easy.
>
> In your case, it probably requires adding the same kind of surgery so
> that we can create an IRQ domain from the MSIs associated with a PCIe
> device. Not too hard, just not there yet, and we can probably reuse some
> of the code that lives in platform-msi.c
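
For reference, the mbigen pattern referred to above looks roughly like
this.  platform_msi_create_device_domain() is the interface
irq-mbigen.c actually uses for its platform device; the thunderx_*
names, the empty write_msg body and the domain ops are placeholders,
and as noted above a PCIe-device equivalent of this does not exist yet:

#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/platform_device.h>

/* Program the widget so the given line fires the given MSI. */
static void thunderx_gpio_write_msi_msg(struct msi_desc *desc,
                                        struct msi_msg *msg)
{
        /*
         * Register writes for msg->address_lo/address_hi and
         * msg->data would go here; the layout is hardware specific.
         */
}

static const struct irq_domain_ops thunderx_gpio_domain_ops = {
        /* .translate, .alloc and .free as in irq-mbigen.c */
};

static struct irq_domain *
thunderx_gpio_create_domain(struct platform_device *pdev,
                            unsigned int num_lines, void *host_data)
{
        return platform_msi_create_device_domain(&pdev->dev, num_lines,
                                                 thunderx_gpio_write_msi_msg,
                                                 &thunderx_gpio_domain_ops,
                                                 host_data);
}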
That seems too ugly.

I would propose one of the following:

1) Just keep the crappy chaining more or less as I originally
implemented it, as it works well enough to get useful work done.

2) Add an optional hierarchical flow handler function to struct
irq_data.  If populated, this flow handler would be called from
handle_fasteoi_irq() instead of handle_irq_event().  This would allow
each irq_chip to have its own flow handler, instead of a single flow
handler shared by all of the hierarchically nested irq_chips (see the
sketch below).
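
To illustrate option 2, roughly (purely hypothetical; neither the
field nor the dispatch below exists today):

/* struct irq_data would grow an optional, chip-supplied flow hook: */

        irq_flow_handler_t      nested_flow_handler;    /* new, optional */

/* ...and the tail of handle_fasteoi_irq() would become, roughly: */

        if (desc->irq_data.nested_flow_handler)
                desc->irq_data.nested_flow_handler(desc);
        else
                handle_irq_event(desc);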
David Daney
>
> Thanks,
>
> M.
>