Message-ID: <20250410102945.GD30687@noisy.programming.kicks-ass.net>
Date: Thu, 10 Apr 2025 12:29:45 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: K Prateek Nayak <kprateek.nayak@....com>
Cc: Ingo Molnar <mingo@...hat.com>, Juri Lelli <juri.lelli@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	linux-kernel@...r.kernel.org,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>,
	"Gautham R. Shenoy" <gautham.shenoy@....com>,
	Swapnil Sapkal <swapnil.sapkal@....com>
Subject: Re: [RFC PATCH 5/5] sched/fair: Proactive idle balance using push
 mechanism

On Wed, Apr 09, 2025 at 11:15:39AM +0000, K Prateek Nayak wrote:
> Proactively try to push tasks to one of the CPUs set in the
> "nohz.idle_cpus_mask" from the push callback.
> 
> pick_next_pushable_fair_task() is taken from Vincent's series [1] as-is,
> but the locking rules in push_fair_task() have been relaxed to release
> the local rq lock after dequeuing the task and reacquire it after
> pushing it to the idle target.
> 
> double_lock_balance() used in RT seems necessary to maintain strict
> priority ordering; however, that may not be necessary for fair tasks.

Agreed; don't use double_lock_balance() if you can at all avoid it. It
is quite terrible.
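
Roughly, the relaxed sequence being described amounts to the sketch
below; the helpers are the usual rq/task ones, and
push_task_to_idle_cpu() is an illustrative name, not the actual patch:

/*
 * Sketch only: dequeue under the local rq lock, drop it instead of
 * taking both locks via double_lock_balance(), and only hold the
 * target rq lock while activating the task there.
 */
static void push_task_to_idle_cpu(struct rq *rq, struct task_struct *p,
				  int target_cpu)
{
	struct rq *target = cpu_rq(target_cpu);

	/* still holding the local rq lock: dequeue and retarget the task */
	deactivate_task(rq, p, 0);
	set_task_cpu(p, target_cpu);

	/* release the local rq lock -- no double_lock_balance() */
	raw_spin_rq_unlock(rq);

	raw_spin_rq_lock(target);
	update_rq_clock(target);
	activate_task(target, p, 0);
	wakeup_preempt(target, p, 0);
	raw_spin_rq_unlock(target);

	/* reacquire the local rq lock before returning to the push callback */
	raw_spin_rq_lock(rq);
}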


>  /*
>   * See if the non-running fair tasks on this rq can be sent to other CPUs
>   * that fit better with their profile.
>   */
>  static bool push_fair_task(struct rq *rq)
>  {
> +	struct cpumask *cpus = this_cpu_cpumask_var_ptr(load_balance_mask);
> +	struct task_struct *p = pick_next_pushable_fair_task(rq);
> +	int cpu, this_cpu = cpu_of(rq);
> +
> +	if (!p)
> +		return false;
> +
> +	if (!cpumask_and(cpus, nohz.idle_cpus_mask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE)))
> +		goto requeue;

So I think the main goal here should be to get rid of the whole single
nohz balancing thing.

This global state/mask has been shown to be a problem over and over again.

Ideally we keep a nohz idle mask per LLC (right next to the overload
mask you introduced earlier), along with a bit in the sched_domain tree
upwards of that to indicate that a particular LLC / node / distance-group
has nohz-idle CPUs.
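
Concretely, something like this (sketch only; nohz_idle_mask is a
made-up field, the existing fields are just there for placement, and
the overload mask from your earlier patches would sit alongside it):

struct sched_domain_shared {
	atomic_t	ref;
	atomic_t	nr_busy_cpus;
	int		has_idle_cores;
	int		nr_idle_scan;
	/* overload mask from the earlier patches goes here */
	cpumask_var_t	nohz_idle_mask;	/* nohz-idle CPUs in this LLC */
};
/* plus an assumed summary bit in each sched_domain above the LLC */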

Then, if the topmost domain has the bit set, it means there are nohz
CPUs to be found, and we can (slowly) iterate the domain tree up from
the overloaded LLC to push tasks around.
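
The push side could then do something like the following (again a
sketch, assuming the fields above; find_nohz_idle_cpu() and
span_has_nohz_idle are made-up names):

static int find_nohz_idle_cpu(int this_cpu)
{
	struct sched_domain *sd;
	int cpu;

	rcu_read_lock();
	for_each_domain(this_cpu, sd) {
		/* assumed summary bit: some LLC in this span has nohz-idle CPUs */
		if (!READ_ONCE(sd->span_has_nohz_idle))
			continue;

		for_each_cpu(cpu, sched_domain_span(sd)) {
			/* stand-in for checking the per-LLC nohz_idle_mask */
			if (cpu != this_cpu && idle_cpu(cpu)) {
				rcu_read_unlock();
				return cpu;
			}
		}
	}
	rcu_read_unlock();

	return -1;
}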

Anyway, yes, you gotta start somewhere :-)
