Open Source and information security mailing list archives
 
Date:	Tue, 21 Jun 2016 13:41:11 +0100
From:	Marc Zyngier <marc.zyngier@....com>
To:	Sebastian Frias <sf84@...oste.net>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Grygorii Strashko <grygorii.strashko@...com>,
	Mason <slash.tmp@...e.fr>,
	Måns Rullgård <mans@...sr.com>
Subject: Re: Using irq-crossbar.c

On 21/06/16 12:03, Sebastian Frias wrote:
> Hi Marc,
> 
> On 06/21/2016 12:18 PM, Marc Zyngier wrote:
>>> Since irq-tango_v2.c is similar to irq-crossbar.c from TI (since it
>>> is based on it), I was wondering what is the policy or recommendation
>>> in such cases?
>>> Should I attempt to merge the code (mainly the way to set up the
>>> registers) on irq-crossbar.c or should we consider irq-tango_v2.c to
>>> live its own life?
>>
>> If the HW is significantly different, I'd rather have a separate driver.
>> We can always share some things later on by having a small library of
>> "stuff".
> 
> I'd say it is very similar. Most of the changes I did were done to understand how it worked.
> However, it may end up being different if we use cascaded interrupts.
> 
>>
>>> NOTE: IMHO, irq-crossbar.c could benefit from clearer names for some
>>> DT properties, because "max_irqs" and "max-crossbar-sources" are not
>>> straight forward names for "mux_outputs" and "mux_inputs"
>>> (respectively)
>>
>> Maybe, but this ship has sailed a long time ago. This is an ABI now, and
>> it is not going to change unless proven to be broken. On the other hand,
>> you can name your own properties as you see fit.
> 
> Ok.
> 
>>
>>> NOTE2: current irq-tango_v2.c code still has a 24 IRQ limitation
>>> since it is not using any API that would allow it to setup IRQ
>>> sharing.
>>
Unless you limit your mux to level interrupts only, I cannot see how you
>> could deal with cascaded interrupts. By the time you receive an edge,
>> the line will have dropped, and you won't be able to identify the source
>> interrupt.
> 
> Yes, cascaded interrupts would be limited to level only.
> 
> By the way, did you see my other questions? (copy/pasted here for convenience):
> 
> ----
> Ok, so after discussing with some HW engineers: even though this is a
> pure router and cannot latch by itself, the devices themselves latch
> their IRQ output, so reading the 4x32-bit RAW status registers could
> work as well. That means that if there are more than 24 devices, some
> could share IRQs, right?

As mentioned earlier, this only works for level interrupts. If you
enforce this, this is OK. I assume that you also have a way to mask
these interrupts, right?

> Two questions then:
> a) let's say we need to share some of the IRQs, which APIs should be used?

The usual
irq_set_chained_handler_and_data()/chained_irq_enter()/chained_irq_exit().

> b) some people have been asking about IRQ affinity, they have not
> been clear about it, but I suppose maybe they want to redistribute
> the IRQs. In this case, when using IRQ sharing a device may go from
> sharing an IRQ line to an exclusive line or vice versa, right? Does
> Linux handle that on its own, or is there some API to call as well?

You need to implement the .irq_set_affinity in the irqchip. But it
hardly makes sense in your particular case:
- If you're using it as a pure router (no multiplexing), the affinity is
controlled by the GIC itself (i.e. nothing to do here, except
forwarding the request to the underlying irqchip).
- If you're using it as a chained irqchip, then you can't easily migrate
a bunch of interrupts, because this is not what userspace expects.

What you could do would be to dedicate an irq line per CPU, and
reconfigure the mux when changing the affinity.

> About a) I did not find any driver that uses irq_domain_add_linear()
> and irq_domain_add_hierarchy() but maybe I'm not approaching the
> problem from the right angle.

A chained interrupt controller cannot be hierarchical. That's pretty
fundamental.

Overall, I think you need to settle for a use case (either pure router
or chained controller) and implement that. To support both, you'll need
two different implementations (basically two drivers in one).

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
