Date:   Fri, 15 May 2020 19:24:03 +0200
From:   Oleg Nesterov <oleg@...hat.com>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>
Cc:     Andrew Fox <afox@...hat.com>,
        Stephen Johnston <sjohnsto@...hat.com>,
        linux-kernel@...r.kernel.org,
        Stanislaw Gruszka <sgruszka@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v2] sched/cputime: make scale_stime() more precise

ping...

Peter, could you comment?

On 01/27, Oleg Nesterov wrote:
>
> People report that utime and stime from /proc/<pid>/stat become very
> inaccurate when the numbers are big enough, especially if you watch
> these counters incrementally.
> 
> Say, if the monitored process has run for 100 days, split 50/50 between
> user and kernel mode, it can look as if it ran for 20 minutes entirely
> in kernel mode, then for 20 minutes entirely in user mode. See the
> test case which demonstrates this behaviour:
> 
> 	https://lore.kernel.org/lkml/20200124154215.GA14714@redhat.com/
> 
> The new implementation does an additional div64_u64_rem(), but according
> to my naive measurements it is faster on x86_64, and much faster when
> rtime etc. are big enough. See
> 
> 	https://lore.kernel.org/lkml/20200123130541.GA30620@redhat.com/
> 
> Signed-off-by: Oleg Nesterov <oleg@...hat.com>
> ---
>  kernel/sched/cputime.c | 65 +++++++++++++++++++++++++-------------------------
>  1 file changed, 32 insertions(+), 33 deletions(-)
> 
> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> index d43318a..ae1ea09 100644
> --- a/kernel/sched/cputime.c
> +++ b/kernel/sched/cputime.c
> @@ -528,42 +528,41 @@ void account_idle_ticks(unsigned long ticks)
>   */
>  static u64 scale_stime(u64 stime, u64 rtime, u64 total)
>  {
> -	u64 scaled;
> +	u64 res = 0, div, rem;
> +	int shift;
>  
> -	for (;;) {
> -		/* Make sure "rtime" is the bigger of stime/rtime */
> -		if (stime > rtime)
> -			swap(rtime, stime);
> -
> -		/* Make sure 'total' fits in 32 bits */
> -		if (total >> 32)
> -			goto drop_precision;
> -
> -		/* Does rtime (and thus stime) fit in 32 bits? */
> -		if (!(rtime >> 32))
> -			break;
> -
> -		/* Can we just balance rtime/stime rather than dropping bits? */
> -		if (stime >> 31)
> -			goto drop_precision;
> -
> -		/* We can grow stime and shrink rtime and try to make them both fit */
> -		stime <<= 1;
> -		rtime >>= 1;
> -		continue;
> -
> -drop_precision:
> -		/* We drop from rtime, it has more bits than stime */
> -		rtime >>= 1;
> -		total >>= 1;
> +	/* can stime * rtime overflow ? */
> +	if (ilog2(stime) + ilog2(rtime) > 62) {
> +		/*
> +		 * (rtime * stime) / total is equal to
> +		 *
> +		 *	(rtime / total) * stime +
> +		 *	(rtime % total) * stime / total
> +		 *
> +		 * if nothing overflows. Can the 1st multiplication
> +		 * overflow? Yes, but we do not care: this can only
> +		 * happen if the end result can't fit in u64 anyway.
> +		 *
> +		 * So the code below does
> +		 *
> +		 *	res = (rtime / total) * stime;
> +		 *	rtime = rtime % total;
> +		 */
> +		div = div64_u64_rem(rtime, total, &rem);
> +		res = div * stime;
> +		rtime = rem;
> +
> +		shift = ilog2(stime) + ilog2(rtime) - 62;
> +		if (shift > 0) {
> +			/* drop precision */
> +			rtime >>= shift;
> +			total >>= shift;
> +			if (!total)
> +				return res;
> +		}
>  	}
>  
> -	/*
> -	 * Make sure gcc understands that this is a 32x32->64 multiply,
> -	 * followed by a 64/32->64 divide.
> -	 */
> -	scaled = div_u64((u64) (u32) stime * (u64) (u32) rtime, (u32)total);
> -	return scaled;
> +	return res + div64_u64(stime * rtime, total);
>  }
>  
>  /*
> -- 
> 2.5.0
> 

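Not part of the patch, but for anyone double-checking the arithmetic in
the comment above: the decomposition is exact in integer (floor) division,
because rtime = (rtime / total) * total + (rtime % total) and the first
term divides evenly by total. A throwaway userspace check, assuming
GCC/Clang's unsigned __int128 (the names and numbers below are mine, not
the kernel's):

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
	int i;

	for (i = 0; i < 1000000; i++) {
		/* ~63-bit pseudo-random operands; total forced odd to avoid /0 */
		unsigned __int128 rtime = (uint64_t)rand() << 32 | rand();
		unsigned __int128 stime = (uint64_t)rand() << 32 | rand();
		unsigned __int128 total = ((uint64_t)rand() << 32 | rand()) | 1;

		/* (rtime * stime) / total == (rtime / total) * stime +
		 *                            (rtime % total) * stime / total */
		assert(rtime * stime / total ==
		       (rtime / total) * stime + (rtime % total) * stime / total);
	}
	return 0;
}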
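And a throwaway userspace sketch comparing the old bit-dropping loop with
the patched version against a 128-bit reference. This is not the kernel
build: swap()/ilog2()/div_u64()/div64_u64_rem() are replaced by plain C
stand-ins, and the inputs are made up, roughly matching the 100-days
scenario above. The point to notice is that the old loop has to keep
halving total until rtime/stime fit, so with inputs this big total loses
most of its significant bits and the result can drift by minutes, while
the new code drops precision only in the remainder term:

#include <stdio.h>
#include <stdint.h>

#define swap(a, b) do { uint64_t __t = (a); (a) = (b); (b) = __t; } while (0)

/* Stand-in for the kernel's ilog2(); returns -1 for 0 so the demo stays
 * well defined when rtime % total == 0. */
static int ilog2_u64(uint64_t v)
{
	return v ? 63 - __builtin_clzll(v) : -1;
}

/* The old scale_stime(), with div_u64() spelled as plain division. */
static uint64_t scale_stime_old(uint64_t stime, uint64_t rtime, uint64_t total)
{
	for (;;) {
		if (stime > rtime)
			swap(rtime, stime);
		if (total >> 32)
			goto drop_precision;
		if (!(rtime >> 32))
			break;
		if (stime >> 31)
			goto drop_precision;
		stime <<= 1;
		rtime >>= 1;
		continue;
drop_precision:
		rtime >>= 1;
		total >>= 1;
	}
	return (uint64_t)(uint32_t)stime * (uint32_t)rtime / (uint32_t)total;
}

/* The patched scale_stime(), with div64_u64_rem()/div64_u64() spelled as
 * plain / and %. */
static uint64_t scale_stime_new(uint64_t stime, uint64_t rtime, uint64_t total)
{
	uint64_t res = 0;
	int shift;

	if (ilog2_u64(stime) + ilog2_u64(rtime) > 62) {
		res = rtime / total * stime;	/* (rtime / total) * stime */
		rtime = rtime % total;

		shift = ilog2_u64(stime) + ilog2_u64(rtime) - 62;
		if (shift > 0) {
			/* drop precision */
			rtime >>= shift;
			total >>= shift;
			if (!total)
				return res;
		}
	}
	return res + stime * rtime / total;
}

int main(void)
{
	/* Made-up inputs: ~100 days of rtime in nanoseconds, with the
	 * tick-based stime/utime samples not quite adding up to rtime. */
	uint64_t rtime = 8640000ULL * 1000000000;	/* 100 days */
	uint64_t stime = 3000000000000000ULL;
	uint64_t utime = 5000000000000000ULL;
	uint64_t total = stime + utime;
	uint64_t exact = (uint64_t)((unsigned __int128)stime * rtime / total);
	uint64_t old = scale_stime_old(stime, rtime, total);
	uint64_t new = scale_stime_new(stime, rtime, total);

	printf("exact: %llu\n", (unsigned long long)exact);
	printf("old:   %llu (err %+lld ns)\n", (unsigned long long)old,
	       (long long)(old - exact));
	printf("new:   %llu (err %+lld ns)\n", (unsigned long long)new,
	       (long long)(new - exact));
	return 0;
}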