Message-ID: <ZR2R6wMhOpx6PVGT@gmail.com>
Date: Wed, 4 Oct 2023 18:25:15 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>
Cc: peterz@...radead.org, vincent.guittot@...aro.org,
srikar@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
mingo@...hat.com, dietmar.eggemann@....com, mgorman@...e.de
Subject: Re: [RFC PATCH] sched/fair: Skip idle CPU search on busy system

* Shrikanth Hegde <sshegde@...ux.vnet.ibm.com> wrote:
> When the system is fully busy, there will not be any idle CPUs.
> In that case, load_balance will be called mainly with the CPU_NOT_IDLE
> type. In should_we_balance, it currently checks for an idle CPU if one
> exists. When the system is 100% busy, there will not be an idle CPU,
> so these idle_cpu checks can be skipped. This avoids fetching those rq
> structures.
>
> This is a minor optimization for a specific case of 100% utilization.
>
> .....
> Coming to the current implementation: it is a very basic approach to the
> issue, and may not be the best/perfect way to do this. It works only in
> the CONFIG_NO_HZ_COMMON case. nohz.nr_cpus is a global counter that
> tracks idle CPUs; AFAIU there isn't any other such info. If there is, we
> can use that instead. nohz.nr_cpus is atomic, which might be costly too.
>
> An alternative way would be to add a new attribute to sched_domain and
> update it in the CPU idle entry/exit path, per CPU. The advantage is
> that the check can be per env->sd instead of global. Slightly more
> complicated, but maybe better. There could be other advantages at wakeup
> as well, e.g. to limit the scan.
>
> Your feedback would really help. Does this optimization make sense?
>
> Signed-off-by: Shrikanth Hegde <sshegde@...ux.vnet.ibm.com>
> ---
> kernel/sched/fair.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 373ff5f55884..903d59b5290c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10713,6 +10713,12 @@ static int should_we_balance(struct lb_env *env)
>  		return 1;
>  	}
>
> +#ifdef CONFIG_NO_HZ_COMMON
> +	/* If the system is fully busy, it's better to skip the idle checks */
> +	if (env->idle == CPU_NOT_IDLE && atomic_read(&nohz.nr_cpus) == 0)
> +		return group_balance_cpu(sg) == env->dst_cpu;
> +#endif

Not a big fan of coupling NOHZ to a scheduler optimization in this fashion,
and not a big fan of the nohz.nr_cpus global cacheline either.

I think it should be done unconditionally, via the scheduler topology tree:

 - We should probably slow-propagate "permanently busy" status of a CPU
   down the topology tree, ie.:

    - mark a domain fully-busy with a delay & batching, probably driven
      by the busy-tick only,

    - while marking a domain idle instantly & propagating this up the
      domain tree only if necessary. The propagation can stop if it
      finds a non-busy domain, so usually it won't reach the root domain.

 - This approach ensures there's no real overhead problem in the domain
   tree: think of hundreds of CPUs all accessing the nohz.nr_cpus global
   variable... I bet it's a measurable problem already on large systems.

 - The "atomic_read(&nohz.nr_cpus) == 0" condition in your patch is simply
   the busy-flag checked at the root domain: a readonly global cacheline
   that never gets modified on a permanently busy system.
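
Very roughly, the propagation rules could behave like the toy userspace
sketch below. This is not kernel code - struct toy_domain, fully_busy and
the helper names are all made up purely to illustrate the idea:

/*
 * Toy model of per-domain "fully busy" flags:
 *
 *  - the busy tick lazily marks one more level of the tree fully-busy,
 *  - going idle clears the flags immediately, bottom-up, stopping at
 *    the first domain that is not marked busy.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_domain {
	struct toy_domain	*parent;
	bool			fully_busy;
};

/* A CPU went idle: clear busy flags upwards, stop at the first non-busy domain. */
static void toy_mark_idle(struct toy_domain *sd)
{
	for (; sd && sd->fully_busy; sd = sd->parent)
		sd->fully_busy = false;
}

/* Busy tick: slow-propagate fully-busy status, one level per tick. */
static void toy_busy_tick(struct toy_domain *sd)
{
	for (; sd; sd = sd->parent) {
		if (!sd->fully_busy) {
			sd->fully_busy = true;
			return;
		}
	}
}

int main(void)
{
	struct toy_domain root = { .parent = NULL,  .fully_busy = false };
	struct toy_domain mc   = { .parent = &root, .fully_busy = false };

	/* Two busy ticks mark the MC domain and then the root domain busy. */
	toy_busy_tick(&mc);
	toy_busy_tick(&mc);

	/* should_we_balance() would only need to read root.fully_busy here. */
	printf("root fully busy: %d\n", root.fully_busy);	/* 1 */

	/* Going idle clears the flags instantly, bottom-up. */
	toy_mark_idle(&mc);
	printf("root fully busy: %d\n", root.fully_busy);	/* 0 */

	return 0;
}

On a permanently busy system the flags settle to "busy" everywhere and
the root-domain read stays a clean, never-written cacheline.
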
Thanks,
Ingo