Message-ID: <20210414091100.000033cf@intel.com>
Date: Wed, 14 Apr 2021 09:11:00 -0700
From: Jesse Brandeburg <jesse.brandeburg@...el.com>
To: Nitesh Narayan Lal <nitesh@...hat.com>
Cc: Marcelo Tosatti <mtosatti@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
"frederic@...nel.org" <frederic@...nel.org>,
Robin Murphy <robin.murphy@....com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
"juri.lelli@...hat.com" <juri.lelli@...hat.com>,
"abelits@...vell.com" <abelits@...vell.com>,
"bhelgaas@...gle.com" <bhelgaas@...gle.com>,
"linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
"mingo@...nel.org" <mingo@...nel.org>,
"peterz@...radead.org" <peterz@...radead.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"sfr@...b.auug.org.au" <sfr@...b.auug.org.au>,
"stephen@...workplumber.org" <stephen@...workplumber.org>,
"rppt@...ux.vnet.ibm.com" <rppt@...ux.vnet.ibm.com>,
"jinyuqi@...wei.com" <jinyuqi@...wei.com>,
"zhangshaokun@...ilicon.com" <zhangshaokun@...ilicon.com>,
netdev@...r.kernel.org, chris.friesen@...driver.com
Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to
housekeeping CPUs

Nitesh Narayan Lal wrote:
> > The original issue, as seen, was that if you rmmod/insmod a driver
> > *without* irqbalance running, the default irq mask is -1, which means
> > any CPU. Older kernels (this issue was patched in 2014) used to use
> > that affinity mask, but the "actual affinity" value programmed into all
> > the interrupt registers would end up delivering all interrupts to
> > CPU0,
>
> So does that mean the affinity mask for the IRQs was different from where
> the IRQs were actually delivered?
> Or that the affinity mask itself for the IRQs was changed to 0 instead
> of -1 after rmmod/insmod?
The smp_affinity was 0xfff, and the kernel chooses which CPU to place
the interrupt on, from among any of the bits set.
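
As an aside, on kernels new enough to expose it you can see this
distinction directly by comparing /proc/irq/N/smp_affinity (what was
requested) with /proc/irq/N/effective_affinity (what actually got
programmed). A quick throwaway checker, minimal error handling and all
(effective_affinity only exists with
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK), could look like:

#include <stdio.h>
#include <stdlib.h>

static void print_mask(int irq, const char *name)
{
	char path[64], buf[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/%s", irq, name);
	f = fopen(path, "r");
	if (f && fgets(buf, sizeof(buf), f))
		printf("%-18s %s", name, buf);	/* buf keeps its newline */
	else
		printf("%-18s <unavailable>\n", name);
	if (f)
		fclose(f);
}

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s <irq>\n", argv[0]);
		return 1;
	}
	print_mask(atoi(argv[1]), "smp_affinity");	 /* requested */
	print_mask(atoi(argv[1]), "effective_affinity"); /* programmed */
	return 0;
}

On the old kernels where the original bug lived there was no
effective_affinity file, so smp_affinity was the only visible mask even
though delivery had collapsed onto CPU0.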
> I did a quick test on top of 5.12.0-rc6 by comparing the i40e IRQ affinity
> mask before removing the kernel module and after doing rmmod+insmod
> and didn't find any difference.
With the patch in question removed? Sorry, I'm confused about what you tried.
>
> > and if the machine was under incoming traffic load when the
> > driver loaded, CPU0 would start to poll among all the different netdev
> > queues, all on CPU0.
> >
> > The above then leads to the condition that the device is stuck polling
> > even if the affinity gets updated from user space, and the polling will
> > continue until traffic stops.
> >
> >> The problem with the commit is that when we overwrite the affinity mask
> >> based on the hinting mask we completely ignore the default SMP affinity
> >> mask. If we do want to overwrite the affinity based on the hint mask we
> >> should at least consider the default SMP affinity.
>
> For the issue where the IRQs don't follow the default_smp_affinity mask
> because of this patch, here are the steps to reproduce it easily on the
> latest Linux kernel:
>
> # Kernel
> 5.12.0-rc6+
<snip>
> As we can see in the above trace, the initial affinity for IRQ 1478 was
> correctly set as per the default_smp_affinity mask, which includes CPU 42.
> However, it is later updated with CPU 3, which is returned from
> cpumask_local_spread().
>
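
For reference, the driver-side sequence behind that trace looks roughly
like the following (simplified from the i40e MSI-X setup path, not the
exact upstream code):

	/* Pick a CPU per queue vector and publish it as the affinity
	 * hint.  Since the 2014 change, irq_set_affinity_hint() also
	 * applies the hint as the initial affinity, so this value
	 * overrides whatever default_smp_affinity said. */
	cpu = cpumask_local_spread(q_vector->v_idx, -1);
	irq_set_affinity_hint(irq_num, get_cpu_mask(cpu));

So the update to CPU 3 in your trace is what that path would produce.
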
> > Maybe the right thing is to fix which CPUs are passed in as the valid
> > mask, or make sure the kernel cross-checks that what the driver asks
> > for is a "valid CPU"?
> >
>
> Sure, if we can still reproduce the problem that your patch was fixing, then
> maybe we can consider adding a new API like cpumask_local_spread_irq in
> which we also consider the default_smp_affinity mask before returning
> the CPU.
I'm sure I don't have a reproducer of the original problem any more; it
was lost somewhere 8 years ago. I'd like to be able to reproduce the
original issue, but I can't.
Your description of the problem makes it obvious there is an issue. It
appears as if cpumask_local_spread() is the wrong function to use here.
If you have any suggestions please let me know.
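
For the sake of discussion, an untested sketch of the kind of helper you
describe might look like the below. The name is the hypothetical one from
your mail, it assumes irq_default_affinity (the mask behind
/proc/irq/default_smp_affinity) is visible at this layer, and it drops
cpumask_local_spread()'s NUMA-local preference for brevity:

#include <linux/cpumask.h>

/* Not normally visible outside kernel/irq/; shown here only to make
 * the sketch self-contained. */
extern cpumask_var_t irq_default_affinity;

/* Hypothetical variant of cpumask_local_spread() that never returns a
 * CPU outside irq_default_affinity.  Untested. */
unsigned int cpumask_local_spread_irq(unsigned int i, int node)
{
	unsigned int weight = 0;
	int cpu;

	/* Count CPUs that are online and allowed by the default mask. */
	for_each_cpu_and(cpu, cpu_online_mask, irq_default_affinity)
		weight++;

	/* Nothing usable: fall back to the existing behavior. */
	if (!weight)
		return cpumask_local_spread(i, node);

	/* Wrap i so we always land on some allowed CPU. */
	i %= weight;
	for_each_cpu_and(cpu, cpu_online_mask, irq_default_affinity)
		if (i-- == 0)
			return cpu;

	return cpumask_first(cpu_online_mask);	/* not reached */
}

Whether the right mask to honor there is irq_default_affinity or the
housekeeping mask from the isolation code is probably the real design
question.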
We had one other report of this problem as well (I'm not sure if it's
the same as your report):
https://lkml.org/lkml/2021/3/28/206
https://lists.osuosl.org/pipermail/intel-wired-lan/Week-of-Mon-20210125/023120.html