Message-ID: <20240613071342.GA1810503@google.com>
Date: Thu, 13 Jun 2024 07:13:42 +0000
From: Joel Fernandes <joel@...lfernandes.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Daniel Bristot de Oliveira <bristot@...hat.com>,
	Valentin Schneider <vschneid@...hat.com>,
	"Vineeth Pillai (Google)" <vineeth@...byteword.org>,
	Suleiman Souhlal <suleiman@...gle.com>,
	Frederic Weisbecker <frederic@...nel.org>,
	"Paul E . McKenney" <paulmck@...nel.org>
Subject: Re: [PATCH 3/3] sched: Update ->next_balance correctly during
 newidle balance

Getting to this pretty late, sorry, see below.

On Tue, Nov 14, 2023 at 04:43:12PM +0100, Vincent Guittot wrote:
> On Thursday, Nov 09, 2023 at 10:02:54 (+0000), Joel Fernandes wrote:
> > Hi Vincent,
> > 
> > Sorry for late reply, I was in Tokyo all these days and was waiting to get to
> > writing a proper reply. See my replies below:
> > 
> > On Thu, Oct 26, 2023 at 04:23:35PM +0200, Vincent Guittot wrote:
> > > On Sun, 22 Oct 2023 at 02:28, Joel Fernandes <joel@...lfernandes.org> wrote:
> > > >
> > > > On Fri, Oct 20, 2023 at 03:40:14PM +0200, Vincent Guittot wrote:
> > > > > On Fri, 20 Oct 2023 at 03:40, Joel Fernandes (Google)
> > > > > <joel@...lfernandes.org> wrote:
> > > > > >
> > > > > > From: "Vineeth Pillai (Google)" <vineeth@...byteword.org>
> > > > > >
> > > > > > When newidle balancing triggers, we see that it constantly clobbers
> > > > > > rq->next_balance even when there is no newidle balance happening due to
> > > > > > the cost estimates.  Due to this, we see that periodic load balance
> > > > > > (rebalance_domains) may trigger way more often when the CPU is going in
> > > > > > and out of idle at a high rate but is not really idle. Repeatedly
> > > > > > triggering load balance there is a bad idea as it is a heavy operation.
> > > > > > It also causes an increase in softirqs.
> > > > >
> > > > > we have 2 balance intervals:
> > > > > - one when idle, based on sd->balance_interval = sd_weight
> > > > > - one when busy, which increases the period by multiplying it by
> > > > > busy_factor = 16
> > > >
> > > > On my production system I see load balance triggering every 4 jiffies! In a
> > > 
> > > What kind of system do you have? sd->balance_interval is in ms.
> > 
> > Yes, sorry, I meant it triggers every jiffy, which is extreme sometimes. It
> > is an ADL SoC (12th gen Intel, 4 P-cores, 8 E-cores); get_sd_balance_interval()
> > returns 4 jiffies there. On my QEMU system, I see 8 jiffies.
> 
> Do you have details about the sched_domain hierarchy?
> That could be part of your problem (see below).

The hierarchy is pretty simple:

$ cat /sys/kernel/debug/sched/domains/cpu*/domain0/name
MC
MC
MC
MC

I boot qemu by passing "-smp cpus=4,threads=1,sockets=1".
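
To double-check my reading of the two intervals you describe above, here is a
tiny userspace model of how I understand the scaling (busy_factor = 16 and
"idle interval = sd_weight in ms" come from your description; HZ = 250 and the
rounding are assumptions of the model, not values taken from the kernel source):

#include <stdio.h>

/*
 * Toy userspace model of the two balance intervals described above:
 * idle interval = sd_weight in ms, busy interval = idle * busy_factor (16).
 * HZ = 250 here is an assumption of the model, not taken from my config.
 */
#define HZ		250
#define BUSY_FACTOR	16

static unsigned long ms_to_jiffies(unsigned long ms)
{
	return (ms * HZ + 999) / 1000;	/* round up, roughly like msecs_to_jiffies() */
}

int main(void)
{
	unsigned int sd_weight = 4;	/* CPUs spanned by the single MC domain */
	unsigned long idle_ms = sd_weight;
	unsigned long busy_ms = idle_ms * BUSY_FACTOR;

	printf("idle: %lu ms (%lu jiffies), busy: %lu ms (%lu jiffies)\n",
	       idle_ms, ms_to_jiffies(idle_ms),
	       busy_ms, ms_to_jiffies(busy_ms));
	return 0;
}

With the 4-CPU topology above that gives 4 ms idle / 64 ms busy; the jiffies
values obviously depend on HZ.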

> > 
> > [...]
> > > > > > Another issue is ->last_balance is not updated after newidle balance
> > > > > > causing mistakes in the ->next_balance calculations.
> > > > >
> > > > > newly idle load balance is not the same as idle load balance. It's a
> > > > > light load balance that tries to pull one task, and you can't really
> > > > > compare it to the normal load balance.
> > > >
> > > > True. However, the point is that it is coupled with the other load balance
> > > > mechanism and the two are not independent. As you can see below, modifying
> > > > rq->next_balance in newidle also causes the periodic balance to happen more
> > > > aggressively if there is a high rate of transitions from busy to idle and
> > > > vice versa.
> > > 
> > > As mentioned, rq->next_balance is updated whenever the CPU enters idle
> > > (i.e. in newidle_balance()), but it's not related to doing a newly
> > > idle load balance.
> > 
> > Yes, I understand that. But my point was that the update of rq->next_balance
> > from the newidle path is itself buggy and interferes with the load balance
> > happening from the tick (trigger_load_balance -> run_rebalance_domains).
> 
> The newidle path is not buggy. It only uses sd->last_balance + interval to
> estimate the next balance, which is the correct thing to do. Your problem
> comes from sd->last_balance, which is never updated and remains in the
> past, whereas you call run_rebalance_domains(), which should run
> load_balance() for all domains whose sd->last_balance + interval is in the
> past.
> Your problem most probably comes from should_we_balance(), which always or
> "almost always" returns false in your use case for some sched_domain and
> prevents updating sd->last_balance. Could you try the patch below?
> It should fix your problem of trying to rebalance every tick even though
> rebalance_domains() is called.
> At least this should show whether it's your problem, but I'm not sure it's the
> right thing to do all the time ...

I tried your diff below, but it did not make a difference to the problem. Only
this patch series gave a ~10-20x reduction in softirqs.
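
Just to spell out the failure mode you describe (sd->last_balance stuck in the
past while the softirq keeps being raised), here is a standalone toy model; the
single domain, the 4-jiffy interval and the always-false should_we_balance()
are assumptions of the model, not the real kernel logic:

#include <stdio.h>
#include <stdbool.h>

/*
 * Standalone toy model (not kernel code) of the situation above: one domain,
 * interval = 4 "jiffies", and a should_we_balance() that always defers to
 * another CPU, so sd->last_balance is never refreshed.
 */
#define INTERVAL	4UL

static unsigned long jiffies;
static unsigned long sd_last_balance;	/* models sd->last_balance */
static unsigned long rq_next_balance;	/* models rq->next_balance */

static bool should_we_balance(void)
{
	return false;			/* always "someone else will do it" */
}

/* Models rebalance_domains(): only refresh last_balance when we balanced. */
static void rebalance_domains(void)
{
	unsigned long next = jiffies + 60 * INTERVAL;	/* far-future default */

	if (jiffies >= sd_last_balance + INTERVAL && should_we_balance())
		sd_last_balance = jiffies;		/* never reached here */

	if (sd_last_balance + INTERVAL < next)
		next = sd_last_balance + INTERVAL;

	rq_next_balance = next;		/* stays in the past forever */
}

int main(void)
{
	int softirqs = 0;

	for (jiffies = 1; jiffies <= 20; jiffies++) {
		/* Models the check in trigger_load_balance() at each tick. */
		if (jiffies >= rq_next_balance) {
			softirqs++;
			rebalance_domains();
		}
	}
	printf("softirq raised %d times in 20 ticks\n", softirqs);
	return 0;
}

Once sd->last_balance + interval falls into the past and never moves, the tick
check fires on every jiffy, which matches the every-jiffy triggering I
mentioned above.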

> 
> ---
>  kernel/sched/fair.c | 18 ++++++------------
>  1 file changed, 6 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 3745ca289240..9ea1f42e5362 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11568,17 +11568,6 @@ static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
>  		need_decay = update_newidle_cost(sd, 0);
>  		max_cost += sd->max_newidle_lb_cost;
> 
> -		/*
> -		 * Stop the load balance at this level. There is another
> -		 * CPU in our sched group which is doing load balancing more
> -		 * actively.
> -		 */
> -		if (!continue_balancing) {
> -			if (need_decay)
> -				continue;
> -			break;
> -		}
> -
>  		interval = get_sd_balance_interval(sd, busy);
> 
>  		need_serialize = sd->flags & SD_SERIALIZE;
> @@ -11588,7 +11577,12 @@ static void rebalance_domains(struct rq *rq, enum cpu_idle_type idle)
>  		}
> 
>  		if (time_after_eq(jiffies, sd->last_balance + interval)) {
> -			if (load_balance(cpu, rq, sd, idle, &continue_balancing)) {
> +			/*
> +			 * Stop the load balance at this level. There is another
> +			 * CPU in our sched group which is doing load balancing more
> +			 * actively.
> +			 */
> +			if (continue_balancing && load_balance(cpu, rq, sd, idle, &continue_balancing)) {

This diff did not solve the problem. Let me go see which other paths are not
updating sd->last_balance in run_rebalance_domains()...
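
In case it is useful for comparing runs, here is a quick userspace sketch for
watching the SCHED softirq rate (it just diffs /proc/softirqs once a second;
the parsing assumes the usual one-row-per-softirq layout):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sum the SCHED softirq counters across all CPUs from /proc/softirqs. */
static unsigned long long sched_softirqs(void)
{
	char line[4096];
	unsigned long long total = 0;
	FILE *f = fopen("/proc/softirqs", "r");

	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, "SCHED:")) {
			char *p = strchr(line, ':') + 1;
			unsigned long long v;
			int n;

			while (sscanf(p, "%llu%n", &v, &n) == 1) {
				total += v;
				p += n;
			}
			break;
		}
	}
	fclose(f);
	return total;
}

int main(void)
{
	unsigned long long prev = sched_softirqs();

	for (;;) {
		sleep(1);
		unsigned long long cur = sched_softirqs();

		printf("SCHED softirqs/sec: %llu\n", cur - prev);
		prev = cur;
	}
	return 0;
}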

thanks,

 - Joel

