Message-ID: <20191211121650.GB10385@j38d01266.eu95sqa>
Date:   Wed, 11 Dec 2019 20:16:50 +0800
From:   Peng Wang <rocking@...ux.alibaba.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     mingo@...hat.com, juri.lelli@...hat.com,
        vincent.guittot@...aro.org, dietmar.eggemann@....com,
        rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Skip updating "contrib" without load

On Mon, Dec 09, 2019 at 05:16:27PM +0100, Peter Zijlstra wrote:
> On Sat, Dec 07, 2019 at 12:14:22AM +0800, Peng Wang wrote:
> > When @load is zero, we only update load_sum/runnable_load_sum/util_sum
> > with the decayed old sum.
> 
> What you're saying is that because of the:
> 
> 	if (!load)
> 		runnable = running = 0;
> 
> clause in ___update_load_sum(), all the actual users of @contrib in
> accumulate_sum():
> 
> 	if (load)
> 		sa->load_sum += load * contrib;
> 	if (runnable)
> 		sa->runnable_load_sum += runnable * contrib;
> 	if (running)
> 		sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
> 
> don't happen, and therefore we don't care what @contrib actually is and
> calculating it is pointless.

Yes.
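
To spell the flow out, here is a condensed sketch of
___update_load_sum() (trimmed from kernel/sched/pelt.c; the delta
bookkeeping and decay details are elided):

	static __always_inline int
	___update_load_sum(u64 now, struct sched_avg *sa,
			   unsigned long load, unsigned long runnable,
			   int running)
	{
		...
		/* without load, neither runnable nor running can contribute */
		if (!load)
			runnable = running = 0;

		/*
		 * When @load is zero, all three are zero here, so every
		 * user of @contrib in accumulate_sum() is dead.
		 */
		if (!accumulate_sum(delta, sa, load, runnable, running))
			return 0;
		...
	}

With @load zero, accumulate_sum() only decays the old sums, and every
branch that would consume @contrib is skipped.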

> 
> I suppose that is so. Did you happen to have performance numbers? Also,
> I'm thinking this wants a comment.

Actually I don't know how to measure the exact performance impact, but I
counted, at the point where @contrib is computed, how many times @load
is zero versus non-zero:

		if (load) {
			load_is_not_zero_count++;
			contrib = __accumulate_pelt_segments(periods,
					1024 - sa->period_contrib, delta);
		} else {
			load_is_zero_count++;
		}

As we can see, load_is_zero_count is much bigger than
load_is_not_zero_count, and the gap is gradually widening.

load_is_zero_count:            6016044 times
load_is_not_zero_count:         244316 times
19:50:43 up 1 min,  1 user,  load average: 0.09, 0.06, 0.02

load_is_zero_count:            7956168 times
load_is_not_zero_count:         261472 times
19:51:42 up 2 min,  1 user,  load average: 0.03, 0.05, 0.01

load_is_zero_count:           10199896 times
load_is_not_zero_count:         278364 times
19:52:51 up 3 min,  1 user,  load average: 0.06, 0.05, 0.01

load_is_zero_count:           14333700 times
load_is_not_zero_count:         318424 times
19:54:53 up 5 min,  1 user,  load average: 0.01, 0.03, 0.00

Perhaps we can gain some performance advantage by skipping these
unnecessary calculations.
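
For a rough sense of scale, using the last sample above (my arithmetic;
the ratio will of course look different on a loaded system):

	318424 / (318424 + 14333700) ≈ 2.2%

so roughly 98% of the updates that reach this code path would skip the
__accumulate_pelt_segments() call.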

> 
> > Signed-off-by: Peng Wang <rocking@...ux.alibaba.com>
> > ---
> >  kernel/sched/pelt.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> > index a96db50..4392953 100644
> > --- a/kernel/sched/pelt.c
> > +++ b/kernel/sched/pelt.c
> > @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
> >  		 * Step 2
> >  		 */
> >  		delta %= 1024;
> > -		contrib = __accumulate_pelt_segments(periods,
> > -				1024 - sa->period_contrib, delta);
> > +		if (load)
> > +			contrib = __accumulate_pelt_segments(periods,
> > +					1024 - sa->period_contrib, delta);
> >  	}
> >  	sa->period_contrib = delta;
> >  
> > -- 
> > 1.8.3.1
> > 
