Message-ID: <98443f19-c653-493e-a2a9-e1d07b9d8468@redhat.com>
Date: Wed, 3 Apr 2024 23:01:03 -0400
From: Waiman Long <longman@...hat.com>
To: Pierre Gondois <pierre.gondois@....com>, linux-kernel@...r.kernel.org
Cc: Aaron Lu <aaron.lu@...el.com>, Rui Zhang <rui.zhang@...el.com>,
Anna-Maria Behnsen <anna-maria@...utronix.de>,
Frederic Weisbecker <frederic@...nel.org>, Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>, Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Daniel Bristot de Oliveira
<bristot@...hat.com>, Valentin Schneider <vschneid@...hat.com>,
Tejun Heo <tj@...nel.org>, Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH 0/7] sched/fair|isolation: Correctly clear
nohz.[nr_cpus|idle_cpus_mask] for isolated CPUs
On 4/3/24 11:05, Pierre Gondois wrote:
> Zhang Rui reported that find_new_ilb() was iterating over CPUs in
> isolated cgroup partitions. This triggered spurious wakeups for
> these CPUs. [1]
> The initial approach was to ignore CPUs on NULL sched domains, as
> isolated CPUs have a NULL sched domain. However, a CPU:
> - with its tick disabled, and thus accounted for in
> nohz.[idle_cpus_mask|nr_cpus]
> - which is placed in an isolated cgroup partition
> will never update nohz.[idle_cpus_mask|nr_cpus] again.
>
> To avoid that, the following variables should be cleared
> when a CPU is placed in an isolated cgroup partition:
> - nohz.idle_cpus_mask
> - nohz.nr_cpus
> - rq->nohz_tick_stopped
> This would avoid taking stale nohz.* values into account during
> idle load balancing.
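>
> For reference, a minimal sketch of that clearing step, mirroring what
> nohz_balance_exit_idle() already does in kernel/sched/fair.c (the
> helper name is only illustrative):
>
>   /* Drop this CPU from the nohz idle load balance bookkeeping. */
>   static void clear_nohz_idle_state(struct rq *rq)
>   {
>           if (likely(!rq->nohz_tick_stopped))
>                   return;
>
>           rq->nohz_tick_stopped = 0;
>           cpumask_clear_cpu(rq->cpu, nohz.idle_cpus_mask);
>           atomic_dec(&nohz.nr_cpus);
>   }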
>
> As suggested in [2], and to avoid calling nohz_balance_[enter|exit]_idle()
> from a remote CPU and creating concurrency issues, leverage the existing
> housekeeping HK_TYPE_SCHED mask to reflect isolated CPUs (i.e. on NULL
> sched domains).
> Indeed, the HK_TYPE_SCHED mask is currently never set by the
> isolcpus/nohz_full kernel parameters, so it defaults to cpu_online_mask.
> Moreover, its current usage matches CPUs that are isolated and should
> not take part in load balancing.
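>
> In find_new_ilb(), the check would roughly look as follows (a sketch
> based on the current mainline helper, with the housekeeping mask
> standing in for the runtime SCHED mask introduced by this series):
>
>   static inline int find_new_ilb(void)
>   {
>           const struct cpumask *hk_mask;
>           int ilb_cpu;
>
>           /* CPUs in isolated partitions are cleared from this mask. */
>           hk_mask = housekeeping_cpumask(HK_TYPE_SCHED);
>
>           for_each_cpu_and(ilb_cpu, nohz.idle_cpus_mask, hk_mask) {
>                   if (ilb_cpu == smp_processor_id())
>                           continue;
>                   if (idle_cpu(ilb_cpu))
>                           return ilb_cpu;
>           }
>
>           return -1;
>   }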
>
> Making use of HK_TYPE_SCHED for this purpose implies creating a
> housekeeping mask which can be modified at runtime.
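>
> The shape of such a runtime mask could be as simple as the following
> (names are hypothetical and boot-time allocation is omitted;
> cpumask_{set,clear}_cpu() are atomic bit operations):
>
>   static cpumask_var_t hkr_sched_mask;
>
>   const struct cpumask *housekeeping_runtime_cpumask_sched(void)
>   {
>           return hkr_sched_mask;
>   }
>
>   void housekeeping_runtime_clear_cpu_sched(int cpu)
>   {
>           cpumask_clear_cpu(cpu, hkr_sched_mask);
>   }
>
>   void housekeeping_runtime_set_cpu_sched(int cpu)
>   {
>           cpumask_set_cpu(cpu, hkr_sched_mask);
>   }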
>
> [1] https://lore.kernel.org/all/20230804090858.7605-1-rui.zhang@intel.com/
> [2] https://lore.kernel.org/all/CAKfTPtAMd_KNKhXXGk5MEibzzQUX3BFkWgxtEW2o8FFTX99DKw@mail.gmail.com/
>
> Pierre Gondois (7):
> sched/isolation: Introduce housekeeping_runtime isolation
> sched/isolation: Move HK_TYPE_SCHED to housekeeping runtime
> sched/isolation: Use HKR_TYPE_SCHED in find_new_ilb()
> sched/fair: Move/add on_null_domain()/housekeeping_cpu() checks
> sched/topology: Remove CPUs with NULL sd from HKR_TYPE_SCHED mask
> sched/fair: Remove on_null_domain() and redundant checks
> sched/fair: Clear idle_cpus_mask for CPUs with NULL sd
>
> include/linux/sched/isolation.h | 30 ++++++++++++++++++++-
> include/linux/sched/nohz.h | 2 ++
> kernel/sched/fair.c | 44 +++++++++++++++++-------------
> kernel/sched/isolation.c | 48 ++++++++++++++++++++++++++++++++-
> kernel/sched/topology.c | 7 +++++
> 5 files changed, 110 insertions(+), 21 deletions(-)
>
Earlier this year, I had also posted a patch series excluding isolated
CPUs in isolated partitions from the housekeeping cpumasks; see
https://lore.kernel.org/lkml/20240229021414.508972-1-longman@redhat.com/
It takes a different approach from this series. It looks like I should
include HK_TYPE_MISC as well.
Cheers,
Longman