Message-ID: <7b810c5a-c732-4df3-1034-c2959ae86e65@redhat.com>
Date: Wed, 27 Jan 2021 09:16:37 -0500
From: Nitesh Narayan Lal <nitesh@...hat.com>
To: Marcelo Tosatti <mtosatti@...hat.com>,
Robin Murphy <robin.murphy@....com>,
Thomas Gleixner <tglx@...utronix.de>,
"frederic@...nel.org" <frederic@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-api@...r.kernel.org,
juri.lelli@...hat.com, abelits@...vell.com, bhelgaas@...gle.com,
linux-pci@...r.kernel.org, rostedt@...dmis.org, mingo@...nel.org,
peterz@...radead.org, davem@...emloft.net,
akpm@...ux-foundation.org, sfr@...b.auug.org.au,
stephen@...workplumber.org, rppt@...ux.vnet.ibm.com,
jinyuqi@...wei.com, zhangshaokun@...ilicon.com
Subject: Re: [Patch v4 1/3] lib: Restrict cpumask_local_spread to housekeeping
CPUs
On 1/27/21 8:09 AM, Marcelo Tosatti wrote:
> On Wed, Jan 27, 2021 at 12:36:30PM +0000, Robin Murphy wrote:
>> On 2021-01-27 12:19, Marcelo Tosatti wrote:
>>> On Wed, Jan 27, 2021 at 11:57:16AM +0000, Robin Murphy wrote:
>>>> Hi,
>>>>
>>>> On 2020-06-25 23:34, Nitesh Narayan Lal wrote:
>>>>> From: Alex Belits <abelits@...vell.com>
>>>>>
>>>>> The current implementation of cpumask_local_spread() does not respect
>>>>> isolated CPUs, i.e., even if a CPU has been isolated for a real-time
>>>>> task, it will still be returned to the caller for pinning of its IRQ
>>>>> threads. Having these unwanted IRQ threads on an isolated CPU adds
>>>>> latency overhead.
>>>>>
>>>>> Restrict the CPUs that are returned for spreading IRQs only to the
>>>>> available housekeeping CPUs.
>>>>>
>>>>> Signed-off-by: Alex Belits <abelits@...vell.com>
>>>>> Signed-off-by: Nitesh Narayan Lal <nitesh@...hat.com>
>>>>> ---
>>>>> lib/cpumask.c | 16 +++++++++++-----
>>>>> 1 file changed, 11 insertions(+), 5 deletions(-)
>>>>>
>>>>> diff --git a/lib/cpumask.c b/lib/cpumask.c
>>>>> index fb22fb266f93..85da6ab4fbb5 100644
>>>>> --- a/lib/cpumask.c
>>>>> +++ b/lib/cpumask.c
>>>>> @@ -6,6 +6,7 @@
>>>>> #include <linux/export.h>
>>>>> #include <linux/memblock.h>
>>>>> #include <linux/numa.h>
>>>>> +#include <linux/sched/isolation.h>
>>>>> /**
>>>>> * cpumask_next - get the next cpu in a cpumask
>>>>> @@ -205,22 +206,27 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
>>>>> */
>>>>> unsigned int cpumask_local_spread(unsigned int i, int node)
>>>>> {
>>>>> - int cpu;
>>>>> + int cpu, hk_flags;
>>>>> + const struct cpumask *mask;
>>>>> + hk_flags = HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ;
>>>>> + mask = housekeeping_cpumask(hk_flags);
>>>> AFAICS, this generally resolves to something based on cpu_possible_mask
>>>> rather than cpu_online_mask as before, so could now potentially return an
>>>> offline CPU. Was that an intentional change?
>>> Robin,
>>>
>>> AFAICS online CPUs should be filtered.
>> Apologies if I'm being thick, but can you explain how? In the case of
>> isolation being disabled or compiled out, housekeeping_cpumask() is
>> literally just "return cpu_possible_mask;". If we then iterate over that
>> with for_each_cpu() and just return the i'th possible CPU (e.g. in the
>> NUMA_NO_NODE case), what guarantees that CPU is actually online?
>>
>> Robin.
> Nothing, but that was the situation before 1abdfe706a579a702799fce465bceb9fb01d407c
> as well.
Marcelo, before that commit cpumask_local_spread() was, in fact, relying on
cpu_online_mask, as Robin mentioned.
The problem here is with housekeeping_cpumask(), which has always relied on
cpu_possible_mask.
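
To illustrate what Robin is pointing at: with CONFIG_CPU_ISOLATION=n the
stub in include/linux/sched/isolation.h is (paraphrasing the header)

	static inline const struct cpumask *
	housekeeping_cpumask(enum hk_flags flags)
	{
		return cpu_possible_mask;
	}

so, in the NUMA_NO_NODE case, the patched loop effectively becomes

	/* mask == cpu_possible_mask when isolation is off */
	for_each_cpu(cpu, mask)
		if (i-- == 0)
			return cpu;

and the i'th *possible* CPU it returns may well be offline.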
>
> cpumask_local_spread() should probably be disabling CPU hotplug.
Yes, and this should also be done at several other places in the drivers
that don't take CPU hotplug into account, e.g., at the time of vector
allocation.
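
Something along these lines, perhaps (untested sketch, not from the posted
patch; the NUMA path and the index wrap-around are omitted, and the helpers
are just the existing ones from linux/cpumask.h, linux/cpu.h and
linux/sched/isolation.h):

	unsigned int cpumask_local_spread(unsigned int i, int node)
	{
		const struct cpumask *mask;
		unsigned int cpu;

		mask = housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_MANAGED_IRQ);

		cpus_read_lock();	/* hold off CPU hotplug while we pick */
		for_each_cpu_and(cpu, mask, cpu_online_mask)
			if (i-- == 0)
				break;
		cpus_read_unlock();

		/* cpu == nr_cpu_ids here if the mask was exhausted */
		return cpu;
	}

Of course the CPU can still go offline right after cpus_read_unlock(), so
callers that cache the result would still need to react to hotplug events,
but at least the value handed back was online at the time of the call.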
--
Thanks
Nitesh