Date:	Tue, 7 Jul 2015 02:51:36 +0200
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Fredrik Markström 
	<fredrik.markstrom@...il.com>, mingo@...hat.com,
	linux-kernel@...r.kernel.org, Rik van Riel <riel@...hat.com>,
	Jason Low <jason.low2@...com>
Subject: Re: [PATCH 1/1] cputime: Make the reported utime+stime correspond to
 the actual runtime.

On Thu, Jul 02, 2015 at 03:07:01PM +0200, Peter Zijlstra wrote:
> @@ -606,22 +600,46 @@ static void cputime_adjust(struct task_c
>  
>  	if (utime == 0) {
>  		stime = rtime;
> -	} else if (stime == 0) {
> -		utime = rtime;
> -	} else {
> -		cputime_t total = stime + utime;
> +		goto update;
> +	}
>  
> -		stime = scale_stime((__force u64)stime,
> -				    (__force u64)rtime, (__force u64)total);
> -		utime = rtime - stime;
> +	if (stime == 0) {
> +		utime = rtime;
> +		goto update;
>  	}
>  
> -	cputime_advance(&prev->stime, stime);
> -	cputime_advance(&prev->utime, utime);
> +	stime = scale_stime((__force u64)stime, (__force u64)rtime,
> +			    (__force u64)(stime + utime));
> +
> +	/*
> +	 * Make sure stime doesn't go backwards; this preserves monotonicity
> +	 * for utime because rtime is monotonic.
> +	 *
> +	 *  utime_i+1 = rtime_i+1 - stime_i

I'm not sure what is meant by _i+1.

I guess stime_i means prev->stime, and stime_i+1 the new update of prev->stime.
But then what are rtime_i and rtime_i+1, since we have no scaled rtime?

> +	 *            = rtime_i+1 - (rtime_i - stime_i)
> +	 *            = (rtime_i+1 - rtime_i) + stime_i
> +	 *            >= stime_i
> +	 */
> +	if (stime < prev->stime)
> +		stime = prev->stime;
> +	utime = rtime - stime;
> +
> +	/*
> +	 * Make sure utime doesn't go backwards; this still preserves
> +	 * monotonicity for stime, analogous argument to above.
> +	 */
> +	if (utime < prev->utime) {
> +		utime = prev->utime;
> +		stime = rtime - utime;

I see, so we are guaranteed that this final stime won't drop below
prev->stime, because the older prev->stime + prev->utime <= the newest
rtime. I guess that's more or less what the comments above say :-)
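
For the archives, here is a minimal userspace sketch of the clamping
scheme (my own simplified names and types, scaling and locking elided,
and assuming stime <= rtime as the scaling guarantees). It shows why
clamping stime first and utime second keeps both fields monotonic while
their sum stays equal to rtime:

#include <assert.h>

struct prev_cputime_sketch {
	unsigned long long utime, stime;
};

/*
 * Precondition: prev->utime + prev->stime <= rtime. This holds because
 * the previous update set utime + stime == the old rtime, and rtime is
 * monotonic.
 */
static void adjust_sketch(struct prev_cputime_sketch *prev,
			  unsigned long long rtime,
			  unsigned long long stime)
{
	unsigned long long utime;

	if (stime < prev->stime)		/* keep stime monotonic */
		stime = prev->stime;
	utime = rtime - stime;

	if (utime < prev->utime) {		/* keep utime monotonic */
		utime = prev->utime;
		stime = rtime - utime;		/* >= prev->stime by the
						 * precondition above */
	}

	assert(stime >= prev->stime && utime >= prev->utime);
	assert(stime + utime == rtime);
	prev->stime = stime;
	prev->utime = utime;
}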

> +	}
>  
> +update:
> +	prev->stime = stime;
> +	prev->utime = utime;
>  out:
>  	*ut = prev->utime;
>  	*st = prev->stime;
> +	raw_spin_unlock(&prev->lock);
>  }
>  
>  void task_cputime_adjusted(struct task_struct *p, cputime_t *ut, cputime_t *st)

Ok, I scratched my head a lot over this patch and the issues behind it,
and it looks good to me. I worried about introducing a spinlock, but we
had two cmpxchg operations before, so the overhead is comparable.
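
For comparison, IIUC the old lock-free path advanced each field
independently along these lines (a rough sketch from memory, not the
exact cputime_advance() that was removed; __sync_bool_compare_and_swap
stands in for the kernel's cmpxchg):

static void monotonic_advance_sketch(unsigned long long *counter,
				     unsigned long long new_val)
{
	unsigned long long old;

	/* Retry until the stored value is >= new_val. */
	while (new_val > (old = *counter)) {
		if (__sync_bool_compare_and_swap(counter, old, new_val))
			break;
	}
}

Advancing utime and stime independently like this is what could let
prev->utime + prev->stime exceed rtime; the spinlock version updates
both fields together, which is the point of the patch.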
