Date:   Tue, 11 Apr 2017 18:26:23 +0200
From:   Mason <slash.tmp@...e.fr>
To:     Marc Zyngier <marc.zyngier@....com>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     Bjorn Helgaas <helgaas@...nel.org>,
        Robin Murphy <robin.murphy@....com>,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Liviu Dudau <liviu.dudau@....com>,
        David Laight <david.laight@...lab.com>,
        linux-pci <linux-pci@...r.kernel.org>,
        Linux ARM <linux-arm-kernel@...ts.infradead.org>,
        Thibaud Cornic <thibaud_cornic@...madesigns.com>,
        Phuong Nguyen <phuong_nguyen@...madesigns.com>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH v0.2] PCI: Add support for tango PCIe host bridge

On 11/04/2017 17:49, Marc Zyngier wrote:
> On 11/04/17 16:13, Mason wrote:
>> On 27/03/2017 19:09, Marc Zyngier wrote:
>>
>>> Here's what your system looks like:
>>>
>>> PCI-EP -------> MSI Controller ------> INTC
>>>          MSI                    IRQ
>>>
>>> A PCI MSI is always edge. No ifs, no buts. That's what it is, and nothing
>>> else. Now, your MSI controller signals its output using a level interrupt,
>>> since you need to whack it on the head so that it lowers its line.
>>>
>>> There is not a single trigger, because there is not a single interrupt.
>>
>> Hello Marc,
>>
>> I was hoping you or Thomas might help clear some confusion
>> in my mind around IRQ domains (struct irq_domain).
>>
>> I have read https://www.kernel.org/doc/Documentation/IRQ-domain.txt
>>
>> IIUC, there should be one IRQ domain per IRQ controller.
>>
>> I have this MSI controller handling 256 interrupts, so I should
>> have *one* domain for all possible MSIs. Yet the Altera driver
>> registers *two* domains (msi_domain and inner_domain).
>>
>> Could I make everything work with a single IRQ domain?
> 
> No, because you have two irqchips. One that deals with the HW, and the
> other that deals with how the MSIs are presented to the kernel,
> depending on the bus (PCI or something else). The fact that it doesn't
> really drive any HW doesn't make it irrelevant.
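
If I understand correctly, the split would look roughly like this
(just a sketch to check my understanding; the tango_msi_* names and
the choice of callbacks are mine, and the callback bodies are omitted):

  /* Chip for the low-level domain: pokes the actual MSI controller
   * registers (ack/mask/unmask the per-vector status bits). */
  static struct irq_chip tango_msi_bottom_chip = {
          .name                = "MSI-HW",
          .irq_ack             = tango_msi_ack,     /* clear status bit */
          .irq_mask            = tango_msi_mask,
          .irq_unmask          = tango_msi_unmask,
          .irq_compose_msi_msg = tango_compose_msi_msg,
  };

  /* Chip for the PCI/MSI domain: only describes how the MSIs are
   * presented to the kernel/PCI core, no HW access of its own. */
  static struct irq_chip tango_msi_top_chip = {
          .name       = "MSI",
          .irq_mask   = pci_msi_mask_irq,
          .irq_unmask = pci_msi_unmask_irq,
  };

With that in mind, here is what confuses me when I compare it with
the documentation.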

The example given in IRQ-domain.txt is

  Device --> IOAPIC -> Interrupt remapping Controller -> Local APIC -> CPU

with an irq_domain for each interrupt controller.


On my system I have:

  PCI-EP -> MSI controller -> System INTC -> GIC -> CPU

The driver for the System INTC is drivers/irqchip/irq-tango.c;
I think it registers only one domain.

For the GIC (drivers/irqchip/irq-gic.c), I see a call to
irq_domain_create_linear().
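
For a plain irqchip like those, my mental model is a single linear
domain, along these lines (a simplified sketch, not the actual tango
or GIC code; the my_intc_* names, the size, and "node" -- the
controller's device_node -- are made up):

  static int my_intc_map(struct irq_domain *d, unsigned int virq,
                         irq_hw_number_t hwirq)
  {
          /* Attach the controller's irq_chip and a flow handler to
           * each virq as it gets mapped. */
          irq_set_chip_and_handler(virq, &my_intc_chip, handle_level_irq);
          return 0;
  }

  static const struct irq_domain_ops my_intc_domain_ops = {
          .map   = my_intc_map,
          .xlate = irq_domain_xlate_onecell,
  };

  /* One linear domain covering all 128 inputs: virq <-> hwirq. */
  dom = irq_domain_add_linear(node, 128, &my_intc_domain_ops, NULL);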

Is the handling of MSI different, and is that why we need
two domains? (Sorry, I did not understand that part well.)

When I looked at drivers/pci/host/pci-hyperv.c, it seems to make a
single pci_msi_create_irq_domain() call, with no call to any
irq_domain_add* or irq_domain_create* variant, and it defines a
single struct irq_chip.
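
For comparison, my reading of the Altera setup (pcie-altera-msi.c) is
roughly the following; this is only a sketch with error handling
removed, and the names are shortened/adapted by me:

  /* Presentation side: how the MSIs appear to the PCI core. */
  static struct msi_domain_info tango_msi_dom_info = {
          .flags = MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS |
                   MSI_FLAG_MULTI_PCI_MSI,
          .chip  = &tango_msi_top_chip,
  };

  /* In probe(): */

  /* Low-level domain: one entry per MSI vector the HW provides; its
   * .alloc/.free ops attach the "HW" chip from my earlier sketch. */
  inner_domain = irq_domain_add_linear(NULL, 256,
                                       &inner_domain_ops, priv);

  /* PCI/MSI domain stacked on top of the low-level one: this is what
   * the PCI core allocates from for MSI/MSI-X. */
  msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(node),
                                         &tango_msi_dom_info,
                                         inner_domain);

(IIUC, pci-hyperv can get away with the single call because it stacks
directly on an already-existing parent domain, whereas here the
low-level domain has to be created by the driver itself?)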

> You don't need to tell it anything about the number of interrupts you
> manage. As for your private structure, you've already given it to your
> low level domain, and there is no need to propagate it any further.

My main issue is that in the ack callback I was in the "wrong"
domain, in that d->hwirq was not the MSI number. That is why I
thought I needed a single irq_domain.

Is there a function to map a virq to its hwirq in a given domain?
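
What I have in mind is something along these lines (assuming I am
reading irq_domain_get_irq_data() correctly; please correct me if
that is not the right interface):

  /* Look up the irq_data of this virq in the low-level domain and
   * read the hwirq (i.e. the MSI number) from there. */
  struct irq_data *d = irq_domain_get_irq_data(inner_domain, virq);
  irq_hw_number_t hwirq = d ? d->hwirq : 0;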

Regards.
