Message-ID: <dd246f88-0e27-b27e-fc42-6e193a91da3e@caviumnetworks.com>
Date:   Thu, 12 Jan 2017 14:35:58 -0800
From:   David Daney <ddaney@...iumnetworks.com>
To:     Thomas Gleixner <tglx@...utronix.de>
CC:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linus Walleij <linus.walleij@...aro.org>
Subject: irq domain hierarchy vs. chaining w/ PCI MSI-X...

Hi Thomas,

I am trying to figure out how to handle this situation:

                   handle_level_irq()
                   +---------------+                 handle_fasteoi_irq()
                   | PCIe hosted   |                 +-----------+      +-----+
  --level_gpio---->| GPIO to MSI-X |--MSI_message--+>| gicv3-ITS |---> | CPU |
                   | widget        |               | +-----------+      +-----+
                   +---------------+               |
                                                   |
           +-------------------+                   |
           | other PCIe device |---MSI_message-----+
           +-------------------+


The question is how to structure the interrupt handling.  My initial
attempt was a chaining arrangement where the GPIO driver does
request_irq() for the appropriate MSI-X vector, and the handler calls
back into the irq system like this:


static irqreturn_t thunderx_gpio_chain_handler(int irq, void *dev)
{
	struct thunderx_irqdev *irqdev = dev;
	unsigned int chained_irq;
	int ret;

	/* Map the GPIO line back to its virq in the gpiochip's domain. */
	chained_irq = irq_find_mapping(irqdev->gpio->chip.irqdomain,
				       irqdev->line);
	if (!chained_irq)
		return IRQ_NONE;

	/* Re-enter the irq core so the GPIO flow handler runs. */
	ret = generic_handle_irq(chained_irq);

	return ret ? IRQ_NONE : IRQ_HANDLED;
}

This gets the proper GPIO irq_chip functions called to manage the
level-triggering semantics.
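
For completeness, the setup side looks roughly like the following.
This is only a sketch: the msix_entries[] lookup and the irqdev/gpio
fields are simplified stand-ins for the real driver structures, not
the actual code.

static int thunderx_gpio_setup_chain(struct thunderx_gpio *gpio,
				     unsigned int line)
{
	struct thunderx_irqdev *irqdev;

	irqdev = devm_kzalloc(gpio->dev, sizeof(*irqdev), GFP_KERNEL);
	if (!irqdev)
		return -ENOMEM;

	irqdev->gpio = gpio;
	irqdev->line = line;

	/* Chain off the MSI-X vector backing this GPIO line. */
	return request_irq(gpio->msix_entries[line].vector,
			   thunderx_gpio_chain_handler, 0,
			   "thunderx-gpio", irqdev);
}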

The drawback of this approach is that two irqs are then associated
with each GPIO line (the base MSI-X and the chained GPIO).  Since
there can be 80-100 of these widgets, we can potentially consume
twice that many irq numbers.

Linus Walleij suggested that an irq domain hierarchy might be a
better idea.  However, I cannot figure out how it would work here.
The gicv3-ITS needs to use handle_fasteoi_irq(), while the
level-triggered GPIO lines need handle_level_irq().  Given such
heterogeneous flow handlers, getting the proper irq_chip functions
called in a hierarchical configuration doesn't seem doable.
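
For concreteness, here is roughly what I imagine the .alloc callback
of a stacked GPIO domain would look like (sketch only, untested; the
fwspec encoding and the chip/struct names are made up):

static int thunderx_gpio_domain_alloc(struct irq_domain *domain,
				      unsigned int virq,
				      unsigned int nr_irqs, void *arg)
{
	struct thunderx_gpio *gpio = domain->host_data;
	struct irq_fwspec *fwspec = arg;	/* assumed encoding */
	irq_hw_number_t hwirq = fwspec->param[0];
	int ret;

	/* Allocate the backing MSI-X vector from the parent domain. */
	ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
	if (ret)
		return ret;

	/*
	 * Install our irq_chip with handle_level_irq() for this virq,
	 * but the gicv3-ITS expects handle_fasteoi_irq() for its
	 * interrupts, which is exactly the conflict described above.
	 */
	irq_domain_set_info(domain, virq, hwirq, &thunderx_gpio_irq_chip,
			    gpio, handle_level_irq, NULL, NULL);
	return 0;
}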

Can you think of a better way of structuring this than chaining from the 
MSI-X handler as I outlined above?

Thanks in advance for any insight,
David Daney
