Message-ID: <20211208053237.GA18550@windriver.com>
Date: Wed, 8 Dec 2021 00:32:37 -0500
From: Paul Gortmaker <paul.gortmaker@...driver.com>
To: "Paul E. McKenney" <paulmck@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Frederic Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 1/2] sched/isolation: really align nohz_full with
rcu_nocbs
[Re: [PATCH 1/2] sched/isolation: really align nohz_full with rcu_nocbs] On 06/12/2021 (Mon 13:33) Paul E. McKenney wrote:
> On Mon, Dec 06, 2021 at 09:59:49AM -0500, Paul Gortmaker wrote:
> > It is currently possible to sneak a core into nohz_full
> > that lies between nr_possible and NR_CPUS - but you won't "see" it
> > because cpumask_pr_args() implicitly hides anything above nr_cpu_ids.
> >
> > This becomes a problem when the nohz_full CPU set doesn't contain at
> > least one other valid nohz CPU - in which case we end up with the
> > tick_nohz_full_running set and no tick core specified, which trips an
> > endless sequence of WARN() and renders the machine unusable.
> >
> > I inadvertently opened the door for this when fixing an overly
> > restrictive nohz_full conditional in the below Fixes: commit - and then
> > courtesy of my optimistic ACPI reporting nr_possible of 64 (the default
> > Kconfig for NR_CPUS) and the not-so-helpful implicit filtering done by
> > cpumask_pr_args, I unfortunately did not spot it during my testing.
> >
> > So here, I don't rely on what was printed anymore, but code exactly what
> > our restrictions should be in order to be aligned with rcu_nocbs - which
> > was the original goal. Since the checks lie in "__init" code it is largely
> > free for us to do this anyway.
> >
> > Building with NOHZ_FULL and NR_CPUS=128 on an otherwise defconfig, and
> > booting with "rcu_nocbs=8-127 nohz_full=96-127" on the same 16 core T5500
> > Dell machine now results in the following (only relevant lines shown):
> >
> > smpboot: Allowing 64 CPUs, 48 hotplug CPUs
> > setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:64 nr_node_ids:2
> > housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' contains nonexistent CPUs.
> > housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' has no valid CPUs.
> > rcu: RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=64.
> > rcu: Note: kernel parameter 'rcu_nocbs=', 'nohz_full', or 'isolcpus=' contains nonexistent CPUs.
> > rcu: Offload RCU callbacks from CPUs: 8-63.
> >
> > One can see both new housekeeping checks are triggered in the above.
> > The same invalid boot arg combination would have previously resulted in
> > an infinitely scrolling mix of WARN from all cores per tick on this box.
> >
> > Fixes: 915a2bc3c6b7 ("sched/isolation: Reconcile rcu_nocbs= and nohz_full=")
> > Cc: Paul E. McKenney <paulmck@...nel.org>
> > Cc: Frederic Weisbecker <fweisbec@...il.com>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Ingo Molnar <mingo@...nel.org>
> > Signed-off-by: Paul Gortmaker <paul.gortmaker@...driver.com>
> > ---
> > kernel/sched/isolation.c | 12 ++++++++++++
> > 1 file changed, 12 insertions(+)
> >
> > diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
> > index 7f06eaf12818..01abc8400d6c 100644
> > --- a/kernel/sched/isolation.c
> > +++ b/kernel/sched/isolation.c
> > @@ -89,6 +89,18 @@ static int __init housekeeping_setup(char *str, enum hk_flags flags)
> > return 0;
> > }
> >
> > + if (!cpumask_subset(non_housekeeping_mask, cpu_possible_mask)) {
> > + pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' contains nonexistent CPUs.\n");
> > + cpumask_and(non_housekeeping_mask, cpu_possible_mask,
> > + non_housekeeping_mask);
> > + }
> > +
> > + if (cpumask_empty(non_housekeeping_mask)) {
> > + pr_info("housekeeping: kernel parameter 'nohz_full=' or 'isolcpus=' has no valid CPUs.\n");
> > + free_bootmem_cpumask_var(non_housekeeping_mask);
> > + return 0;
>
> If Frederic applies his rcu_nocbs work to nohz_full, it may some day be
> valid to specify an empty nohz_full CPU mask. Of course, it might well
> be that warning in the meantime is a good thing, but I figured that I
> should call attention to the possibility.
It isn't just a good thing; it is required.  The call chain is as follows:
nohz_full= / isolcpus=
  housekeeping_nohz_full_setup / housekeeping_isolcpus_setup
    housekeeping_setup
      tick_nohz_full_setup
        tick_nohz_full_running = true;
So housekeeping_setup() is the "last chance" to validate inputs and
avoid calling tick_nohz_full_setup(), which unconditionally sets
tick_nohz_full_running - and that is the crux of this problem.
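
For reference, tick_nohz_full_setup() looks roughly like the below
(paraphrased from kernel/time/tick-sched.c, so treat it as a sketch
rather than the exact code):

void __init tick_nohz_full_setup(cpumask_var_t cpumask)
{
        alloc_bootmem_cpumask_var(&tick_nohz_full_mask);
        cpumask_copy(tick_nohz_full_mask, cpumask);

        /* set unconditionally - no validation of the incoming mask here */
        tick_nohz_full_running = true;
}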
At least that is as things stand today based on my understanding.
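
(Aside, on the "implicit filtering" mentioned in the commit message: as
far as I can tell, cpumask_pr_args() simply passes nr_cpu_ids as the bit
count for the %*pbl format, roughly:

/* include/linux/cpumask.h */
#define cpumask_pr_args(maskp)  nr_cpu_ids, cpumask_bits(maskp)

/* illustrative use - any print like this is clamped at nr_cpu_ids bits */
pr_info("CPUs: %*pbl\n", cpumask_pr_args(mask));

so bits above nr_cpu_ids never show up in the printed range.)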
Paul.
--
>
> Thanx, Paul
>
> > + }
> > +
> > alloc_bootmem_cpumask_var(&tmp);
> > if (!housekeeping_flags) {
> > alloc_bootmem_cpumask_var(&housekeeping_mask);
> > --
> > 2.17.1
> >