Open Source and information security mailing list archives
Date: Tue, 4 Jun 2024 10:10:29 +0800
From: Chen Yu <yu.c.chen@...el.com>
To: Tim Chen <tim.c.chen@...ux.intel.com>
CC: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>, <linux-kernel@...r.kernel.org>,
	Vinicius Gomes <vinicius.gomes@...el.com>
Subject: Re: [PATCH] sched/balance: Skip unnecessary updates to idle load
 balancer's flags

On 2024-06-03 at 09:13:47 -0700, Tim Chen wrote:
> On Mon, 2024-06-03 at 00:40 +0800, Chen Yu wrote:
> > > 
> > > With instrumentation, we found that 81% of the updates do not result in
> > > any change in the ilb_cpu's flags.  That is, multiple cpus are asking
> > > the ilb_cpu to do the same things over and over again, before the ilb_cpu
> > > has a chance to run NOHZ load balance.
> > > 
> > > Skip updates to ilb_cpu's flags if no new work needs to be done.
> > > Such updates do not change ilb_cpu's NOHZ flags.  This requires an extra
> > > atomic read but it is less expensive than frequent unnecessary atomic
> > > updates that generate cache bounces.
> > 
> > A race condition is that many CPUs choose the same ilb_cpu and ask it to trigger
> > the nohz idle balance. This is because find_new_ilb() always finds the first
> > nohz idle CPU. I wonder if we could change the
> > for_each_cpu_and(ilb_cpu, nohz.idle_cpus_mask, hk_mask)
> > into
> > for_each_cpu_wrap(ilb_cpu,  cpumask_and(nohz.idle_cpus_mask, hk_mask), this_cpu+1) 
> > so that a different ilb_cpu might be found by each CPU.
> > Then the extra atomic read could bring fewer cache bounces.
> > 
> 
> Your proposal improves scaling.  However,
> it could result in many idle CPUs getting kicked.  I assume the
> current approach of delegating to a common idle CPU disturbs fewer CPUs,
> letting them stay in deeper idle states and get the power benefits
> of the NOHZ scheme.
>

I see, from a power point of view, the current solution is better.
 
> > > 
> > > We saw that on the OLTP workload, cpu cycles from trigger_load_balance()
> > > (or sched_balance_trigger()) got reduced from 0.7% to 0.2%.
> > > 
> > > Signed-off-by: Tim Chen <tim.c.chen@...ux.intel.com>
> > > ---
> > >  kernel/sched/fair.c | 7 +++++++
> > >  1 file changed, 7 insertions(+)
> > > 
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 8a5b1ae0aa55..9ab6dff6d8ac 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -11891,6 +11891,13 @@ static void kick_ilb(unsigned int flags)
> > >  	if (ilb_cpu < 0)
> > >  		return;
> > >  
> > > +	/*
> > > +	 * Don't bother if no new NOHZ balance work items for ilb_cpu,
> > > +	 * i.e. all bits in flags are already set in ilb_cpu.
> > > +	 */
> > > +	if ((atomic_read(nohz_flags(ilb_cpu)) & flags) == flags)
> > 
> > Maybe also mention in the comment that when the above statement is true,
> > the current ilb_cpu's flags are not 0 and are within NOHZ_KICK_MASK, so
> > returning directly here is safe (anyway, just my 2 cents).
> 
> Not sure I follow your comment about the return being safe.  Let me explain
> in detail.
> 
> We will return directly if and only if the bits set in flags are also set
> in nohz_flags(ilb_cpu).  
> 
> The comment's intention is to say that if the above statement is true, then
> the later operation of 
> 
> 	atomic_fetch_or(flags, nohz_flags(ilb_cpu))
> 
> will be useless and not result in any change to nohz_flags(ilb_cpu), since all the set bits
> in flags are already set in nohz_flags(ilb_cpu).

Understood. My previous thought was: what if the current nohz_flags(ilb_cpu) is 0 or
NOHZ_NEWILB_KICK? If so, returning directly might miss one IPI to the ilb_cpu (because
the current code checks flags & NOHZ_KICK_MASK before returning directly). But from the
current logic, when we reach kick_ilb() the flags are not 0, and the flags passed by
nohz_balancer_kick() satisfy (flags & NOHZ_KICK_MASK), so returning here is OK.

> 
> So will it be clearer if I say
> 
> 	/*
> 	 * Don't bother if no new NOHZ balance work items for ilb_cpu,
> 	 * i.e. all bits in flags are already set in ilb_cpu.
> 	 * Later OR of flags to nohz_flags(ilb_cpu)
> 	 * will not change nohz_flags(ilb_cpu).
> 	 */
>

Yes, this is ok.


thanks,
Chenyu
 
> Thanks.
> 
> 
> Tim
> 
> > Reviewed-by: Chen Yu <yu.c.chen@...el.com>
> > 
> > thanks,
> > Chenyu
> > 
> > > +		return;
> > > +
> > >  	/*
> > >  	 * Access to rq::nohz_csd is serialized by NOHZ_KICK_MASK; he who sets
> > >  	 * the first flag owns it; cleared by nohz_csd_func().
> > > -- 
> > > 2.32.0
> > > 
> 
