Message-ID: <576032D3.5050108@laposte.net>
Date:	Tue, 14 Jun 2016 18:37:39 +0200
From:	Sebastian Frias <sf84@...oste.net>
To:	Marc Zyngier <marc.zyngier@....com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Grygorii Strashko <grygorii.strashko@...com>,
	Sricharan R <r.sricharan@...com>, Mason <slash.tmp@...e.fr>,
	Måns Rullgård <mans@...sr.com>
Subject: Re: Using irq-crossbar.c

Hi Marc,

On 06/13/2016 06:24 PM, Marc Zyngier wrote:
>> My understanding of "hierarchical irq domains" is that it is useful
>> when there are multiple stacked interrupt controllers. Also, the
>> documentation says "the majority of drivers should use the linear
>> map" (as opposed to the hierarchical one).
> 
> The "linear map" to be opposed to the "tree", not to the hierarchy.
> Hierarchies themselves can be built out most domain type (only the
> underlying data structure changes).

Thanks for the clarification.

> 
>> Maybe the definition of "interrupt controller" could benefit from
>> some clarification, but my understanding is that, in our case, the
>> GIC is the only interrupt controller (that's where the interrupt type
>> must be set to active_high/active_low/edge, etc.); in front of it there
>> happens to be a crossbar, which happens to be programmable, and which
>> can be used to map any of 128 lines onto any of the 24 lines of the GIC
>> (actually it is a many-to-many router, without any latching or edge
>> detection functionality).
> 
> An interrupt controller is absolutely *anything* that is on the
> interrupt path, unless it is absolutely transparent, invisible to
> software, and doesn't require any form of configuration. 

Well, one could imagine that this many-to-many router is pre-configured, and thus does not require any form of configuration from Linux; yet the resulting mapping would somehow still need to be communicated to Linux, right?
Which API calls should be used for that?
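
Just to check my understanding of the direction: even with a fixed routing, I suppose the crossbar would still be exposed as a domain stacked on top of the GIC, with an .alloc callback that merely translates the crossbar input into the GIC input it was wired to. A minimal sketch of what I have in mind (the fixed_xbar_* names and the route[] table are made up, loosely modelled on drivers/irqchip/irq-crossbar.c):

#include <linux/irq.h>
#include <linux/irqdomain.h>

static struct irq_chip fixed_xbar_chip;	/* hypothetical pass-through chip */
static u32 route[128];			/* hypothetical input -> GIC line table */

static int fixed_xbar_alloc(struct irq_domain *d, unsigned int virq,
			    unsigned int nr_irqs, void *data)
{
	struct irq_fwspec *fwspec = data;	/* from the device's DT node */
	struct irq_fwspec parent_fwspec;
	irq_hw_number_t hwirq = fwspec->param[1];	/* crossbar input */
	int ret;

	/* nr_irqs == 1 assumed, for brevity */
	ret = irq_domain_set_hwirq_and_chip(d, virq, hwirq,
					    &fixed_xbar_chip, NULL);
	if (ret)
		return ret;

	/* Forward the allocation to the GIC, using the pre-configured
	   route[] table instead of programming any hardware. */
	parent_fwspec.fwnode = d->parent->fwnode;
	parent_fwspec.param_count = 3;
	parent_fwspec.param[0] = 0;			/* GIC_SPI */
	parent_fwspec.param[1] = route[hwirq];		/* GIC input */
	parent_fwspec.param[2] = fwspec->param[2];	/* trigger type */

	return irq_domain_alloc_irqs_parent(d, virq, nr_irqs, &parent_fwspec);
}

Would that be the right shape, or is there a simpler API for a routing that never changes?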

> Your own
> definition of an interrupt controller is way too restrictive.

I see, thanks for the clarification.

> 
>> Obviously, when the DT says that the ethernet driver uses IRQ=120 (for
>> example), the crossbar must be setup to route line 120 to one of the
>> 24 lines going to the GIC. So a minimum of interaction between the
>> GIC driver (and/or the Linux IRQ framework) and the driver
>> programming the crossbar is required, and that's what we are trying
>> to achieve.
>>
>> Does that make sense?
> 
> Maybe you and Mason should get together and decide what you want to
> support. Because you seem to have diverging requirements (Mason
> suggesting the exact opposite over the weekend).

IIUC, when Mason said that we could route all 128 lines to a single GIC line, he was talking about a first-order approximation.

>>
>> That's the last log (it's stuck there) and I was asking how/where to enable more logs to be able to debug this.
>>
>> Or are there no standardised logs, so that everyone has to come up
>> with their own debug output?
> 
> We have basic tracing in the irqdomain layer, 

I followed https://www.kernel.org/doc/local/pr_debug.txt to enable the pr_debug() output in kernel/irq/irqdomain.c.
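
Concretely, assuming CONFIG_DYNAMIC_DEBUG=y, I added this to the kernel command line:

	dyndbg="file irqdomain.c +p"

(the same query can be written at run time to /sys/kernel/debug/dynamic_debug/control).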

Do you want the whole boot log? Or just the snippets from irqdomain.c and those from our driver?

> and you can then
> instrument your own driver.
> 
>>
>> In the meantime, and in case irq-crossbar is not a good example for our case, would it be possible to get some guidance, examples, or tips on how to write a driver like the one described? I.e.:
>>
>> [0..127] IRQ inputs => BIG_MUX => [0..23] outputs => [0..23] GIC inputs
>>
>> BIG_MUX is a many-to-many router:
>> - 128x32bit registers to set up a route between any of the 128 inputs
>> (IRQ_dev) and any of the 24 outputs (IRQ_gic)
>> -   4x32bit registers that read the RAW status of each of the 128
>> lines (no latching nor edge detection). I am not sure how useful it is
>> to read such RAW status, because (naive hypothesis follows) Linux's IRQ
>> framework could remember which IRQ_dev lines are routed to which
>> IRQ_gic, and thus when handling IRQ_gic(x), Linux could just ask the
>> drivers tied to it to check for interrupts.
>>
> 
> OK, so this is definitely a pure router, and the lack of latch makes it
> completely unsuitable for a cascaded interrupt controller. At least,
> we've managed to establish that this thing will never be able to handle
> more than 24 devices in a sane way. So let's forget about Mason's idea
> of cascading everything to a single output line, and let's focus on your
> initial idea of having something similar to TI's crossbar, which is a
> much saner approach.

Ok, so after discussing with some HW engineers: even though this is a pure router and cannot latch by itself, the devices themselves latch their IRQ output, so reading the 4x32bit RAW status registers could work as well. That means that if there are more than 24 devices, some of them could share IRQs, right?

Two questions then:
a) let's say we need to share some of the IRQs; which APIs should be used? (see the sketch below)

b) some people have been asking about IRQ affinity; they have not been clear about it, but I suppose they want to redistribute IRQs across CPUs.
In that case, when using IRQ sharing, a device may go from sharing an IRQ line to having an exclusive one, or vice versa, right?
Does Linux handle that on its own, or is there some API to call as well?
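
Regarding (a), to make sure we are talking about the same mechanism, here is the kind of sharing I have in mind (a minimal sketch; the mydev names and the pending-bit check are made up):

#include <linux/interrupt.h>

struct mydev { void __iomem *regs; };		/* hypothetical driver state */
static bool mydev_irq_pending(struct mydev *priv);	/* hypothetical status read */

static irqreturn_t mydev_isr(int irq, void *dev_id)
{
	struct mydev *priv = dev_id;

	/* On a shared line every registered handler runs, so each one
	   must check whether its own device raised the interrupt. */
	if (!mydev_irq_pending(priv))
		return IRQ_NONE;	/* not ours, let the next handler try */

	/* ... acknowledge and handle the interrupt ... */
	return IRQ_HANDLED;
}

/* At probe time: IRQF_SHARED allows several devices on one line; the
   dev_id cookie must be unique so free_irq() can tell handlers apart. */
ret = request_irq(irq, mydev_isr, IRQF_SHARED, "mydev", priv);

Regarding (b), my understanding is that affinity is usually steered from user space by writing a CPU mask to /proc/irq/<n>/smp_affinity, and that drivers do not need to do anything special for that, but please correct me if I am missing something.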

> 
>>
>>> Also, without seeing the code,
>>> it is pretty difficult to make any meaningful comment...
>>
>> Base code is either 4.7-rc1 or 4.4.
>> The irq-crossbar code is not much different from TI's, but you can find
>> it attached.
> 
> Please post it separately (and inline), the email client I have here
> makes it hard to review attached patches.

Ok, I'll post it in a separate email and inline.

Best regards,

Sebastian
