Message-ID: <cfa138e9-38e3-e566-8903-1d64024c917b@redhat.com>
Date: Thu, 4 Feb 2021 13:47:38 -0500
From: Nitesh Narayan Lal <nitesh@...hat.com>
To: Marcelo Tosatti <mtosatti@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Robin Murphy <robin.murphy@....com>, linux-kernel@...r.kernel.org,
linux-api@...r.kernel.org, frederic@...nel.org,
juri.lelli@...hat.com, abelits@...vell.com, bhelgaas@...gle.com,
linux-pci@...r.kernel.org, rostedt@...dmis.org, mingo@...nel.org,
peterz@...radead.org, davem@...emloft.net,
akpm@...ux-foundation.org, sfr@...b.auug.org.au,
stephen@...workplumber.org, rppt@...ux.vnet.ibm.com,
jinyuqi@...wei.com, zhangshaokun@...ilicon.com
Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to housekeeping
 CPUs

On 2/4/21 1:15 PM, Marcelo Tosatti wrote:
> On Thu, Jan 28, 2021 at 09:01:37PM +0100, Thomas Gleixner wrote:
>> On Thu, Jan 28 2021 at 13:59, Marcelo Tosatti wrote:
>>>> The whole pile wants to be reverted. It's simply broken in several ways.
>>> I was asking for your comments on interaction with CPU hotplug :-)
>> Which I answered in a separate mail :)
>>
>>> So housekeeping_cpumask has multiple meanings. In this case:
>> ...
>>
>>> So as long as the meaning of the flags is respected, this seems
>>> alright.
>> Yes. Stuff like the managed interrupts preference for housekeeping CPUs
>> when an affinity mask spans both housekeeping and isolated CPUs is
>> perfectly fine. It's well thought out and has no limitations.
>>
>>> Nitesh, is there anything preventing this from being fixed
>>> in userspace ? (as Thomas suggested previously).
>> Everything which is not managed can be steered by user space.
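
[ For reference: the managed-interrupt preference mentioned above lives
in irq_do_set_affinity() in kernel/irq/manage.c; what follows is a
simplified sketch of that clipping, not a verbatim copy, with locking
and error paths omitted:

        if (irqd_affinity_is_managed(data) &&
            housekeeping_enabled(HK_FLAG_MANAGED_IRQ)) {
                const struct cpumask *hk_mask;

                hk_mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
                cpumask_and(&tmp_mask, mask, hk_mask);
                /* No housekeeping CPU online? Keep the unclipped mask
                 * rather than leaving the interrupt without a target. */
                if (!cpumask_intersects(&tmp_mask, cpu_online_mask))
                        prog_mask = mask;
                else
                        prog_mask = &tmp_mask;
        } else {
                prog_mask = mask;
        }
]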
> Yes, but it seems to be racy (that is, there is a window where the
> interrupt can be delivered to an isolated CPU).
>
> ethtool ->
> xgbe_set_channels ->
> xgbe_full_restart_dev ->
> xgbe_alloc_memory ->
> xgbe_alloc_channels ->
> cpumask_local_spread
>
> Also, ifconfig eth0 down / ifconfig eth0 up leads
> to a cpumask_local_spread() call.
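
The pattern behind that chain looks roughly like the sketch below
(illustrative only, not the actual xgbe code; example_channel and
example_spread_channel_irqs are made-up names): the affinity hint is
derived from cpumask_local_spread() at allocation time, so an isolated
CPU can be picked before any userspace daemon gets a chance to re-pin
the IRQ.

        #include <linux/cpumask.h>
        #include <linux/interrupt.h>

        struct example_channel {
                unsigned int irq;
        };

        static void example_spread_channel_irqs(struct example_channel *chan,
                                                unsigned int count, int node)
        {
                unsigned int i;

                for (i = 0; i < count; i++) {
                        /* Picks the i-th CPU, preferring ones local to
                         * @node; nothing here knows about isolation. */
                        unsigned int cpu = cpumask_local_spread(i, node);

                        irq_set_affinity_hint(chan[i].irq, cpumask_of(cpu));
                }
        }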
There's always that possibility.
We have to ensure that the IRQs are moved by a tuned daemon or some
other userspace script every time there is a net-dev change (e.g. the
device comes up, creates VFs, etc.).
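
(For completeness: such a script ultimately just rewrites the mask
through procfs. A minimal sketch of the userspace side, with a
hypothetical helper name, and assuming a non-managed IRQ, since
managed-IRQ affinities cannot be overwritten from userspace:)

        #include <stdio.h>

        /* Hypothetical helper: re-pin an IRQ to the housekeeping CPUs
         * by writing a hex CPU mask to /proc/irq/<irq>/smp_affinity,
         * the way a tuned profile or udev-triggered script would after
         * a net-dev event. */
        static int pin_irq_to_housekeeping(unsigned int irq, const char *hexmask)
        {
                char path[64];
                FILE *f;

                snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
                f = fopen(path, "w");
                if (!f)
                        return -1;
                fprintf(f, "%s\n", hexmask);
                return fclose(f);
        }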
> How about adding a new flag for isolcpus instead?
>
Do you mean a flag based on which we could switch the affinity mask to
housekeeping CPUs for all devices at the time of IRQ distribution?
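
(If it helps: restricting the spread itself, which is roughly what the
patch in $SUBJECT does, would look like the simplified sketch below;
the NUMA-local pass is omitted, and which housekeeping flag to key on
is exactly the open question:)

        #include <linux/cpumask.h>
        #include <linux/sched/isolation.h>

        unsigned int cpumask_local_spread(unsigned int i, int node)
        {
                const struct cpumask *mask;
                int cpu;

                /* Pick only from the housekeeping set; @node is ignored
                 * here (the real code prefers NUMA-local CPUs first). */
                mask = housekeeping_cpumask(HK_FLAG_MANAGED_IRQ);
                i %= cpumask_weight(mask);
                for_each_cpu(cpu, mask) {
                        if (i-- == 0)
                                return cpu;
                }
                return cpumask_first(mask);
        }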
--
Thanks
Nitesh