Message-ID: <57695D67.1000301@laposte.net>
Date:	Tue, 21 Jun 2016 17:29:43 +0200
From:	Sebastian Frias <sf84@...oste.net>
To:	Marc Zyngier <marc.zyngier@....com>
Cc:	Thomas Gleixner <tglx@...utronix.de>,
	LKML <linux-kernel@...r.kernel.org>,
	Grygorii Strashko <grygorii.strashko@...com>,
	Mason <slash.tmp@...e.fr>,
	Måns Rullgård <mans@...sr.com>
Subject: Re: Using irq-crossbar.c

Hi Marc,

On 06/21/2016 02:41 PM, Marc Zyngier wrote:
>> Ok, so after discussing with some HW engineers: even though this is a
>> pure router and cannot latch by itself, the devices themselves latch
>> their IRQ output, so reading the 4x32bit RAW status registers could
>> work as well. That means that if there are more than 24 devices, some
>> could share IRQs, right?
> 
> As mentioned earlier, this only works for level interrupts. If you
> enforce this, this is OK. I assume that you also have a way to mask
> these interrupts, right?

Yes, that's what bit 31 does (see the tangox_setup_irq_route() function in the code I sent).

For reference, here's the documentation of the mux:

----
The CPU block interrupt interface is now 32 bits wide.
The first 24 interrupt bits are generated from the system interrupts, and the 8 MSB interrupts are CPU-local interrupts:

    IRQs [23:0] tango system irqs.
    IRQs [27:24] CPU core cross trigger interface interrupt (1 per core).
    IRQs [31:28] CPU core PMU (performance unit) interrupt (1 per core). 

The 24 LSB interrupts are generated through a new interrupt map module that maps the 128 tango interrupts onto those 24 interrupts.
For each of the 128 input system interrupts, one register is dedicated to programming the destination interrupt among the 24 available.
The mapper is configured as follows, starting at address 0x6f800:

offset name            description
0x000  irq_in_0_cfg    "en"=bit[31]; "inv"=bit[16]; "dest"=bits[4:0]
0x004  irq_in_1_cfg    "en"=bit[31]; "inv"=bit[16]; "dest"=bits[4:0]
.
.
.
0x1FC  irq_in_127_cfg  "en"=bit[31]; "inv"=bit[16]; "dest"=bits[4:0]
0x200  soft_irq_cfg    "enable"=bits[15:0]
0x204  soft_irq_map0   "map3"=bits[28:24]; "map2"=bits[20:16]; "map1"=bits[12:8]; "map0"=bits[4:0]
0x208  soft_irq_map1   "map3"=bits[28:24]; "map2"=bits[20:16]; "map1"=bits[12:8]; "map0"=bits[4:0]
0x20C  soft_irq_map2   "map3"=bits[28:24]; "map2"=bits[20:16]; "map1"=bits[12:8]; "map0"=bits[4:0]
0x210  soft_irq_map3   "map3"=bits[28:24]; "map2"=bits[20:16]; "map1"=bits[12:8]; "map0"=bits[4:0]
0x214  soft_irq_set    "set"=bits[15:0]
0x218  soft_irq_clear  "clear"=bits[15:0]
0x21C  read_cpu_irq    "cpu_block_irq"=bits[23:0]
0x220  read_sys_irq0   "system_irq"=bits[31:0]; (irqs: 0->31)
0x224  read_sys_irq1   "system_irq"=bits[31:0]; (irqs: 32->63)
0x228  read_sys_irq2   "system_irq"=bits[31:0]; (irqs: 64->95)
0x22C  read_sys_irq3   "system_irq"=bits[31:0]; (irqs: 96->127)

irq_in_N_cfg   : input N mapping:
- dest bits[4:0]    => selects the destination interrupt among the 24 output interrupts (if multiple inputs are mapped to the same output, the result is the OR of the inputs).
- inv bit[16]       => if set, inverts the input interrupt polarity (active at 0).
- en bit[31]        => enables the interrupt; acts like a mask on the input interrupt.
soft_irq       : this module supports up to 16 software interrupts.
- enable bits[15:0] => enables use of the software IRQs (SIRQ), 1 bit per SIRQ.
soft_irq_mapN  : maps each of the 16 soft IRQs (SIRQ) into the output IRQ[23:0] vector.
- mapN              => 5 bits selecting where to connect the SIRQ within the 24-bit output IRQ vector (if multiple SIRQs are mapped to the same output IRQ, the result is the OR of those signals).
soft_irq_set   : 16 bits; writing a 1 to a bit sets the corresponding SIRQ. Reads return the software SIRQ vector value.
soft_irq_clear : 16 bits; writing a 1 to a bit clears the corresponding SIRQ. Reads return the software SIRQ vector value.
read_cpu_irq   : 24 bits; returns the output IRQ value (the IRQs connected to the ARM cluster).
read_sys_irqN  : 32 bits; returns the input system IRQ value before mapping.
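----

For what it's worth, here is a minimal sketch of how one such route register could be programmed (the function name and the "base" pointer are hypothetical; only the bit layout comes from the documentation above):

#include <linux/bitops.h>
#include <linux/io.h>

#define IRQ_EN   BIT(31)        /* enable (unmask) the input      */
#define IRQ_INV  BIT(16)        /* invert polarity (active low)   */
#define IRQ_DEST GENMASK(4, 0)  /* destination output line, 0..23 */

static void __iomem *base;      /* mux block, ioremap()ed at 0x6f800 */

static void route_irq(unsigned int hwirq, unsigned int dest, bool invert)
{
	u32 cfg = IRQ_EN | (invert ? IRQ_INV : 0) | (dest & IRQ_DEST);

	/* one 32-bit config register per input, offsets 0x000..0x1FC */
	writel_relaxed(cfg, base + 4 * hwirq);
}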


> 
>> Two questions then:
>> a) let's say we need to share some of the IRQs; which APIs should be used?
> 
> The usual
> irq_set_chained_handler_and_data()/chained_irq_enter()/chained_irq_exit().

Ok, thanks.

At first I thought that I could modify tangox_allocate_gic_irq() so that, when running out of IRQ lines going to the GIC, it:
- reserves the last line of the GIC with:

irq = fwspec.param[1];
err = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);

- then adds an irqchip to the domain with:

err = irq_alloc_domain_generic_chips(domain, 32, 1, node->name,
                                     handle_level_irq, 0, 0, 0);
/* one generic chip per 32-interrupt bank (4 x 32 = 128 inputs) */
for (i = 0; i < 4; i++) {
   gc = irq_get_domain_generic_chip(domain, i * 32);
   tangox_irq_init_chip(gc, i * IRQ_CTL_HI, i * EDGE_CTL_HI);
}

irq_set_chained_handler(irq, tangox_irq_handler);
irq_set_handler_data(irq, domain);

Not sure if that makes sense (it's hard to get a clear understanding of all these APIs and their possible interactions).
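
To make the sketch concrete, here is what I imagine the chained flow handler itself could look like (again only a sketch, reusing "base" from above and assuming the read_sys_irqN registers at 0x220 reflect the latched device outputs, per the HW engineers' description):

#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/irqchip/chained_irq.h>

static void tangox_irq_handler(struct irq_desc *desc)
{
	struct irq_chip *host_chip = irq_desc_get_chip(desc);
	struct irq_domain *dom = irq_desc_get_handler_data(desc);
	int i, bit;

	chained_irq_enter(host_chip, desc);

	/* scan the four RAW status words (read_sys_irq0..3) */
	for (i = 0; i < 4; i++) {
		unsigned long status = readl_relaxed(base + 0x220 + i * 4);

		for_each_set_bit(bit, &status, 32)
			generic_handle_irq(irq_find_mapping(dom, i * 32 + bit));
	}

	chained_irq_exit(host_chip, desc);
}

(I suppose the irq_set_chained_handler()/irq_set_handler_data() pair above could also be collapsed into the single irq_set_chained_handler_and_data() call you mentioned.)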

> 
>> b) some people have been asking about IRQ affinity; they have not
>> been clear about it, but I suppose they want to redistribute the
>> IRQs. In this case, when using IRQ sharing, a device may go from
>> sharing an IRQ line to an exclusive line or vice versa, right? Does
>> Linux handle that on its own, or is there some API to call as well?
> 
> You need to implement the .irq_set_affinity in the irqchip. But it
> hardly makes sense in your particular case:
> - if you're using it as a pure router (no multiplexing), the affinity is
> controlled by the GIC itself (i.e. nothing to do here, except
> forwarding the request to the underlying irqchip).

Ok, that confirms my guess from seeing that the .irq_set_affinity callback is set to a framework callback ('irq_chip_set_affinity_parent') in irq-crossbar.c.
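
For my own reference, I take it the router-mode chip then just forwards everything to the parent, along the lines of what irq-crossbar.c does:

static struct irq_chip crossbar_chip = {
	.name             = "CBAR",
	.irq_eoi          = irq_chip_eoi_parent,
	.irq_mask         = irq_chip_mask_parent,
	.irq_unmask       = irq_chip_unmask_parent,
	.irq_set_affinity = irq_chip_set_affinity_parent,
};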

> - If you're using it as a chained irqchip, then you can't easily migrate
> a bunch of interrupts, because this is not what userspace expects.
> 

Ok, so IIUC, when chaining irqchips (like irq-tango.c does), it's hard to migrate the interrupts.

> What you could do would be to dedicate an irq line per CPU, and
> reconfigure the mux when changing the affinity.

Ok, so basically we'd use only N lines of the GIC (N = number of CPUs, instead of the 24 available) and multiplex the M > N interrupt lines onto those N GIC inputs, right?
I think this is similar to what Mason had proposed as "first order approximation".
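
If we went that way, I imagine .irq_set_affinity would boil down to rewriting the "dest" field of the input's config register (purely a sketch, reusing IRQ_DEST and "base" from above, and assuming output line n is routed to CPU n):

static int tangox_set_affinity(struct irq_data *d,
			       const struct cpumask *mask, bool force)
{
	unsigned int cpu = cpumask_any_and(mask, cpu_online_mask);
	u32 cfg;

	if (cpu >= nr_cpu_ids)
		return -EINVAL;

	/* retarget input d->hwirq to the per-CPU output line */
	cfg = readl_relaxed(base + 4 * d->hwirq);
	cfg &= ~IRQ_DEST;
	writel_relaxed(cfg | cpu, base + 4 * d->hwirq);

	return IRQ_SET_MASK_OK;
}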

> 
>> About a) I did not find any driver that uses irq_domain_add_linear()
>> and irq_domain_add_hierarchy(), but maybe I'm not approaching the
>> problem from the right angle.
> 
> A chained interrupt controller cannot be hierarchical. That's pretty
> fundamental.

Oh, sorry, I must have misunderstood the terminology.
I mean, at the end of IRQ-domain.txt it says:

"3) Optionally implement an irq_chip to manage the interrupt controller
   hardware."

which I understood as calls to irq_alloc_domain_generic_chips() with a domain allocated with irq_domain_add_linear().

> 
> Overall, I think you need to settle for a use case (either pure router
> or chained controller) and implement that. To support both, you'll need
> two different implementations (basically two drivers in one).
> 

Ok, thanks for the advice.
The pure router, while already implemented, seems to have the issue of not being able to handle more than 24 interrupts, and I'm not sure how to handle the sharing. Also, I don't see how the sharing would be described in the DT.

With the old driver (irq-tango.c) I can see how the sharing is accomplished: there were several calls to irq_domain_add_linear()/irq_alloc_domain_generic_chips()/irq_set_chained_handler()/irq_set_handler_data() (one set for each IRQ line going to the GIC), and the DT node for a device would then specify interrupt-parent + interrupts properties. But I don't see how to do that with a domain hierarchy, or how to describe it in the DT.

Best regards,

Sebastian



