Message-ID: <20201019111137.GL2628@hirez.programming.kicks-ass.net>
Date: Mon, 19 Oct 2020 13:11:37 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Nitesh Narayan Lal <nitesh@...hat.com>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
linux-pci@...r.kernel.org, intel-wired-lan@...ts.osuosl.org,
frederic@...nel.org, mtosatti@...hat.com, sassmann@...hat.com,
jesse.brandeburg@...el.com, lihong.yang@...el.com,
helgaas@...nel.org, jeffrey.t.kirsher@...el.com,
jacob.e.keller@...el.com, jlelli@...hat.com, hch@...radead.org,
bhelgaas@...gle.com, mike.marciniszyn@...el.com,
dennis.dalessandro@...el.com, thomas.lendacky@....com,
jiri@...dia.com, mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, lgoncalv@...hat.com
Subject: Re: [PATCH v4 4/4] PCI: Limit pci_alloc_irq_vectors() to
housekeeping CPUs
On Sun, Oct 18, 2020 at 02:14:46PM -0400, Nitesh Narayan Lal wrote:
> >> + hk_cpus = housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ);
> >> +
> >> + /*
> >> + * If we have isolated CPUs for use by real-time tasks, to keep the
> >> + * latency overhead to a minimum, device-specific IRQ vectors are moved
> >> + * to the housekeeping CPUs from the userspace by changing their
> >> + * affinity mask. Limit the vector usage to keep housekeeping CPUs from
> >> + * running out of IRQ vectors.
> >> + */
> >> + if (hk_cpus < num_online_cpus()) {
> >> + if (hk_cpus < min_vecs)
> >> + max_vecs = min_vecs;
> >> + else if (hk_cpus < max_vecs)
> >> + max_vecs = hk_cpus;
> > is that:
> >
> > max_vecs = clamp(hk_cpus, min_vecs, max_vecs);
>
> Yes, I think this will do.
>
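For reference, a minimal sketch of how the hunk reads with clamp()
(same variable names as in the patch; this assumes min_vecs <=
max_vecs, which callers of this API already guarantee):

	hk_cpus = housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ);

	/*
	 * clamp() collapses the two branches: below min_vecs we still
	 * request min_vecs, between min_vecs and max_vecs we cap at
	 * the number of housekeeping CPUs, and above max_vecs nothing
	 * changes.
	 */
	if (hk_cpus < num_online_cpus())
		max_vecs = clamp(hk_cpus, min_vecs, max_vecs);
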
> >
> > Also, do we really need to have that conditional on hk_cpus <
> > num_online_cpus()? That is, why can't we do this unconditionally?
>
> FWIU most of the drivers using this API already restrict the number of
> vectors based on num_online_cpus(); if we do it unconditionally, we
> would unnecessarily duplicate that restriction for cases where we
> don't have any isolated CPUs.
Unnecessary duplication isn't really a concern here; this is a slow
path. What's important is code clarity.
> Also, different drivers seem to take different factors into
> consideration along with num_online_cpus() when computing the
> max_vecs to request; for example, in the case of mlx5:
> MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
> MLX5_EQ_VEC_COMP_BASE
>
> Having hk_cpus < num_online_cpus() helps us ensure that we are only
> changing the behavior when we have isolated CPUs.
>
> Does that make sense?
That seems to want to allocate N interrupts per CPU (plus some random
static amount, which seems weird, but whatever). This patch breaks that.
So I think it is important to figure out what that driver really wants
in the nohz_full case. If it wants to retain N interrupts per CPU, and
only reduce the number of CPUs, the proposed interface is wrong.
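To make that concrete, a sketch (the mlx5 names are taken from the
quote above; the housekeeping variant below is hypothetical, not what
the driver does today):

	/* What mlx5 effectively requests today: N vectors per online
	 * CPU plus a static base amount. */
	nvec = MLX5_CAP_GEN(dev, num_ports) * num_online_cpus() +
	       MLX5_EQ_VEC_COMP_BASE;

	/*
	 * Clamping max_vecs to hk_cpus collapses that to one vector
	 * per housekeeping CPU, dropping both the per-port scaling and
	 * the static base. "Keep N per CPU, shrink the CPU set" would
	 * instead look something like (hypothetical):
	 */
	nvec = MLX5_CAP_GEN(dev, num_ports) *
	       housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ) +
	       MLX5_EQ_VEC_COMP_BASE;
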
> > And what are the (desired) semantics vs hotplug? Using a cpumask without
> > excluding hotplug is racy.
>
> The housekeeping_mask should still remain constant, shouldn't it?
> In any case, I can double check this.
The goal is very much to have that dynamically configurable.
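As a hedged sketch of what excluding hotplug would look like around
the sizing and allocation (cpus_read_lock() is the existing hotplug
read-side lock; whether this is the right scope for the API is exactly
the open question):

	cpus_read_lock();
	/*
	 * Neither the online mask nor the housekeeping snapshot can
	 * change while the hotplug read lock is held, so the vector
	 * count computed here stays valid for the allocation below.
	 */
	hk_cpus = housekeeping_num_online_cpus(HK_FLAG_MANAGED_IRQ);
	if (hk_cpus < num_online_cpus())
		max_vecs = clamp(hk_cpus, min_vecs, max_vecs);
	ret = pci_alloc_irq_vectors_affinity(dev, min_vecs, max_vecs,
					     flags, affd);
	cpus_read_unlock();

If the housekeeping mask itself becomes writable at runtime, the mask
would additionally need its own read-side protection on top of this.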