Message-ID: <a0cd800b-f20f-64f3-c3b3-7f0c193c2b16@gmail.com>
Date: Wed, 21 Mar 2018 23:08:38 +0530
From: valmiki <valmikibow@...il.com>
To: Marc Zyngier <marc.zyngier@....com>,
linux-pci <linux-pci@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Cc: Bjorn Helgaas <helgaas@...nel.org>
Subject: Re: Why two irq chips for MSI
> On 21/03/18 17:12, valmiki wrote:
>> Hi,
>>
>> In most of the RP drivers, why are two irq chips used for MSI?
>>
>> One is set via irq_domain_set_info() (providing the irq_compose_msi_msg
>> and irq_set_affinity methods), and another is registered through struct
>> msi_domain_info (providing the irq_mask/irq_unmask methods).
>>
>> When is each chip used with respect to the virq?
>
> A simple way to think of it is that you have two pieces of HW involved:
> an end-point that generates an interrupt, and a controller that receives it.
>
> Transpose this to the kernel view of things: one chip implements the PCI
> MSI, with the PCI semantics attached to it (how to program the
> payload/doorbell into the end-point, for example). The other implements
> the MSI controller part of it, talking to the HW that deals with the
> interrupt.
>
> Does it make sense? Admittedly, this is not always that simple, but that
> is the general approach.
>
Thanks Marc. Yes, I have a good picture now.
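
For the archives, my understanding of that pattern in code is roughly the
sketch below. The foo_* names and FOO_DOORBELL_ADDR are made up for
illustration, not taken from any real driver:

#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/msi.h>

#define FOO_DOORBELL_ADDR	0x1000UL	/* made-up doorbell address */

static int foo_msi_set_affinity(struct irq_data *d,
				const struct cpumask *mask, bool force);

static void foo_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
	/* Tell the end-point where to write (the controller's fixed
	 * doorbell) and which payload identifies this interrupt. */
	msg->address_lo = lower_32_bits(FOO_DOORBELL_ADDR);
	msg->address_hi = upper_32_bits(FOO_DOORBELL_ADDR);
	msg->data = d->hwirq;
}

/* Bottom chip, installed with irq_domain_set_info() from the domain's
 * .alloc callback: the MSI-controller side, which knows the doorbell
 * and (in principle) how to move the interrupt between CPUs. */
static struct irq_chip foo_msi_bottom_chip = {
	.name			= "FOO-MSI",
	.irq_compose_msi_msg	= foo_compose_msi_msg,
	.irq_set_affinity	= foo_msi_set_affinity,
};

/* Top chip, registered through struct msi_domain_info: the PCI MSI
 * side, which masks/unmasks the vector in the end-point's MSI
 * capability via the generic PCI helpers. */
static struct irq_chip foo_msi_top_chip = {
	.name		= "FOO-PCI-MSI",
	.irq_mask	= pci_msi_mask_irq,
	.irq_unmask	= pci_msi_unmask_irq,
};

static struct msi_domain_info foo_msi_domain_info = {
	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
	.chip	= &foo_msi_top_chip,
};
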
So the chip that implements the PCI semantics has irq_set_affinity, which
gets invoked at request_irq time. Why do most drivers implement it as a
dummy that just returns 0?
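
Concretely, the foo_msi_set_affinity() from my sketch above would then be
nothing more than (again, illustrative only):

static int foo_msi_set_affinity(struct irq_data *d,
				const struct cpumask *mask, bool force)
{
	/* No per-MSI routing to program; the doorbell is a single
	 * fixed address. Some drivers return 0 here instead. */
	return -EINVAL;
}
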
Does setting the affinity of an MSI need any support from the GIC?
Can affinity only be achieved with hardware support?
Valmiki