Message-ID: <59a4f188-0d37-bf40-baf6-80b4fa46df52@deltatee.com>
Date: Thu, 31 Jan 2019 16:52:10 -0700
From: Logan Gunthorpe <logang@...tatee.com>
To: Dave Jiang <dave.jiang@...el.com>, linux-kernel@...r.kernel.org,
linux-ntb@...glegroups.com, linux-pci@...r.kernel.org,
iommu@...ts.linux-foundation.org, linux-kselftest@...r.kernel.org,
Jon Mason <jdmason@...zu.us>,
Bjorn Helgaas <bhelgaas@...gle.com>,
Joerg Roedel <joro@...tes.org>
Cc: Allen Hubbe <allenbh@...il.com>,
Serge Semin <fancer.lancer@...il.com>,
Eric Pilmore <epilmore@...aio.com>
Subject: Re: [PATCH 0/9] Support using MSI interrupts in ntb_transport
On 2019-01-31 4:48 p.m., Dave Jiang wrote:
>
> On 1/31/2019 4:41 PM, Logan Gunthorpe wrote:
>>
>> On 2019-01-31 3:46 p.m., Dave Jiang wrote:
>>> I believe irqbalance writes to the file /proc/irq/N/smp_affinity. So
>>> maybe take a look at the code that starts from there and see if it would
>>> have any impact on your stuff.
Ok, well, on my system I can write to the smp_affinity file all day and
the MSI interrupts still work fine.
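For what it's worth, the address/data pair the peer has to write is easy
to inspect from the driver side. A minimal sketch using the in-kernel
get_cached_msi_msg() helper (dump_msi_msg() is just a made-up debug
helper, not something in the series):

#include <linux/kernel.h>
#include <linux/msi.h>

/*
 * Dump the MSI message currently programmed for an IRQ. If a write to
 * smp_affinity retargets the interrupt, the address/data printed here
 * can change, while the NTB peer keeps writing the old values.
 */
static void dump_msi_msg(unsigned int irq)
{
	struct msi_msg msg;

	get_cached_msi_msg(irq, &msg);
	pr_info("irq %u: MSI addr %#x%08x data %#x\n",
		irq, msg.address_hi, msg.address_lo, msg.data);
}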
>
> Maybe your code is ok then. If the stats show up in /proc/interrupts
> then you can see the interrupt moving to different cores.
Yes, I did check that the counts move between CPUs in /proc/interrupts.
> Yeah, I'm not sure what to do about it either, as I'm not super
> familiar with that area. Just making note of what I encountered. And
> you're right, the updated MSI address/data has to go over the NTB for
> the other side to write to the new location, so there's a lot of
> latency involved.
Ok, well, I'll implement the callback anyway for v2. Better safe than
sorry. We can operate on the assumption that someone thought of the race
condition, and if we ever see reports of lost interrupts we'll know
where to look.
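Roughly what I have in mind (a sketch only, not the actual v2 patch;
ntb_msi_update_peer() is a made-up stand-in for whatever pushes the new
message across the link):

#include <linux/interrupt.h>
#include <linux/msi.h>

/* Hypothetical: send the new MSI address/data to the NTB peer. */
void ntb_msi_update_peer(unsigned int irq, struct msi_msg *msg);

/*
 * Called from a workqueue whenever the IRQ's affinity changes, e.g.
 * after irqbalance writes /proc/irq/N/smp_affinity. Re-read the cached
 * MSI message and forward it to the peer.
 */
static void ntb_msi_affinity_notify(struct irq_affinity_notify *notify,
				    const cpumask_t *mask)
{
	struct msi_msg msg;

	get_cached_msi_msg(notify->irq, &msg);
	ntb_msi_update_peer(notify->irq, &msg);
}

static void ntb_msi_affinity_release(struct kref *ref)
{
	/* Nothing to free in this sketch. */
}

static struct irq_affinity_notify ntb_msi_notify = {
	.notify  = ntb_msi_affinity_notify,
	.release = ntb_msi_affinity_release,
};

static int ntb_msi_setup_notifier(unsigned int irq)
{
	/* The core fills in notify->irq and schedules the callback. */
	return irq_set_affinity_notifier(irq, &ntb_msi_notify);
}

Note the notifier runs out of a workqueue, so there's still a window
where the peer writes the stale address; that's exactly the latency you
mentioned.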
Logan