Message-ID: <5f8b1616-a005-7658-978f-16466fcc3886@deltatee.com>
Date: Thu, 31 Jan 2019 16:41:32 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: Dave Jiang <dave.jiang@...el.com>, linux-kernel@...r.kernel.org,
linux-ntb@...glegroups.com, linux-pci@...r.kernel.org,
iommu@...ts.linux-foundation.org, linux-kselftest@...r.kernel.org,
Jon Mason <jdmason@...zu.us>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Joerg Roedel <joro@...tes.org>
Cc: Allen Hubbe <allenbh@...il.com>,
Serge Semin <fancer.lancer@...il.com>,
Eric Pilmore <epilmore@...aio.com>
Subject: Re: [PATCH 0/9] Support using MSI interrupts in ntb_transport
On 2019-01-31 3:46 p.m., Dave Jiang wrote:
> I believe irqbalance writes to the file /proc/irq/N/smp_affinity. So
> maybe take a look at the code that starts from there and see if it would
> have any impact on your stuff.
Ok, well on my system I can write to the smp_affinity file all day and
the MSI interrupts still work fine.
The MSI code is a bit difficult to trace and audit with all the
different chips and parent chips, which I don't have a good
understanding of. But I can definitely see that it would be possible
for some chips to change the address: writing smp_affinity can
eventually end up in msi_domain_set_affinity(), which does recompose
the message and write it back to the chip.
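For reference, this is roughly the shape of that path as I read it in
kernel/irq/msi.c -- paraphrased from memory, so the details may not
match the current tree exactly:

/* Rough paraphrase of msi_domain_set_affinity(), not a verbatim copy */
int msi_domain_set_affinity(struct irq_data *irq_data,
			    const struct cpumask *mask, bool force)
{
	struct irq_data *parent = irq_data->parent_data;
	struct msi_msg msg;
	int ret;

	/* Let the parent chip pick the new target CPU */
	ret = parent->chip->irq_set_affinity(parent, mask, force);
	if (ret >= 0 && ret != IRQ_SET_MASK_OK_DONE) {
		/* Recompose the MSI message (address/data may change)... */
		BUG_ON(irq_chip_compose_msi_msg(irq_data, &msg));
		/* ...and write it back to the device */
		irq_chip_write_msi_msg(irq_data, &msg);
	}

	return ret;
}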
So, I could relatively easily add a callback to msi_desc to catch this
and resend the MSI address/data. However, I'm not sure how this is ever
done atomically. It seems like there would be a race while the device
updates its address where old interrupts could be triggered. This race
would be much longer for us when sending this information over the NTB
link. Though, I guess if the only thing that changes is the CPU
information encoded in the address, then that would not be an issue.
However, I'm
not sure I can say that for certain without a comprehensive
understanding of all the IRQ chips.
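To make that concrete, the callback I have in mind would look
something like the below. This is a hypothetical sketch only -- the
hook in msi_desc, the ntb_transport callback and the helper it calls
are all made-up names, not existing code:

/* Hypothetical: a notification hook in struct msi_desc that the core
 * would call after rewriting the message, e.g. from
 * msi_domain_set_affinity(). */
	void (*write_msi_msg)(struct msi_desc *desc, void *data);
	void *write_msi_msg_data;

/* ntb_transport would then register something like this to forward the
 * updated address/data to the peer (ntb_transport_setup_peer_msi() is a
 * made-up name for whatever mechanism we use to push it across the
 * link): */
static void ntb_transport_msi_changed(struct msi_desc *desc, void *data)
{
	struct ntb_transport_qp *qp = data;

	/* Resend the new MSI address/data so the peer writes to the
	 * new location. */
	ntb_transport_setup_peer_msi(qp, desc->msg.addr_lo,
				     desc->msg.addr_hi, desc->msg.data);
}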
Any thoughts on this?
Logan