Message-ID: <20170703133048.GA27097@destiny>
Date:   Mon, 3 Jul 2017 09:30:49 -0400
From:   Josef Bacik <josef@...icpanda.com>
To:     Ingo Molnar <mingo@...nel.org>
Cc:     josef@...icpanda.com, mingo@...hat.com, peterz@...radead.org,
        linux-kernel@...r.kernel.org, kernel-team@...com,
        Josef Bacik <jbacik@...com>
Subject: Re: [RFC][PATCH] sched: attach extra runtime to the right avg

On Sun, Jul 02, 2017 at 11:37:18AM +0200, Ingo Molnar wrote:
> * josef@...icpanda.com <josef@...icpanda.com> wrote:
> 
> > From: Josef Bacik <jbacik@...com>
> > 
> > We only track the load avg of a se in 1024 ns chunks, so in order to
> > make up for the loss of the < 1024 ns part of a run/sleep delta we only
> > add the time we processed to the se->avg.last_update_time.  The problem
> > is there is no way to know if this extra time was while we were asleep
> > or while we were running.  Instead keep track of the remainder and apply
> > it in the appropriate place.  If the remainder was while we were
> > running, add it to the delta the next time we update the load avg while
> > running, and the same for sleeping.  This (coupled with other fixes)
> > mostly fixes the regression to my workload introduced by Peter's
> > experimental runnable load propagation patches.
> > 
> > Signed-off-by: Josef Bacik <jbacik@...com>
> 
> > @@ -2897,12 +2904,16 @@ ___update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> >  	 * Use 1024ns as the unit of measurement since it's a reasonable
> >  	 * approximation of 1us and fast to compute.
> >  	 */
> > +	remainder = delta & (1023UL);
> > +	sa->last_update_time = now;
> > +	if (running)
> > +		sa->run_remainder = remainder;
> > +	else
> > +		sa->sleep_remainder = remainder;
> >  	delta >>= 10;
> >  	if (!delta)
> >  		return 0;
> >  
> > -	sa->last_update_time += delta << 10;
> > -
> 
> So I'm wondering: this chunk changes how sa->last_update_time is maintained in 
> ___update_load_avg(): the new code takes a precise timestamp, while the old code 
> did not take a timestamp at all, but updated it via deltas - where each delta 
> was rounded down to the nearest 1024 nsecs boundary.
> 
> That, if this is the main code path that updates ->last_update_time, creates a 
> constant drift of rounding error that skews ->last_update_time into larger and 
> larger distances from the real 'now' - ever increasing the value of 'delta'.
> 
> An intermediate approach to improve that skew would be something like below. It 
> doesn't track the remainder like your patch does, but doesn't lose precision 
> either, just rounds down 'now' to the nearest 1024 boundary.
> 
> Does this fix the regression you observed as well? Totally untested.
>

Yup, this fixes my problem as well; I'm good with this if you prefer this
approach.  Thanks,

Josef 
