Message-ID: <20190718132108.GA22220@redhat.com>
Date: Thu, 18 Jul 2019 15:21:08 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Andrew Fox <afox@...hat.com>,
Stephen Johnston <sjohnsto@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/cputime: make scale_stime() more precise
To simplify the review, see the code with this patch applied:

/*
 * Perform (stime * rtime) / total, but avoid multiplication overflow
 * by losing precision when the numbers are big.
 *
 * NOTE! currently the only user is cputime_adjust() and thus
 *
 *      stime < total && rtime > total
 *
 * this means that the end result is always precise and the additional
 * div64_u64_rem() inside the main loop is called at most once.
 */
static u64 scale_stime(u64 stime, u64 rtime, u64 total)
{
        u64 res = 0, div, rem;

        /* can stime * rtime overflow ? */
        while (ilog2(stime) + ilog2(rtime) > 62) {
                if (stime > rtime)
                        swap(rtime, stime);

                if (rtime >= total) {
                        /*
                         * (rtime * stime) / total is equal to
                         *
                         *      (rtime / total) * stime +
                         *      (rtime % total) * stime / total
                         *
                         * if nothing overflows. Can the 1st multiplication
                         * overflow? Yes, but we do not care: this can only
                         * happen if the end result can't fit in u64 anyway.
                         *
                         * So the code below does
                         *
                         *      res += (rtime / total) * stime;
                         *      rtime = rtime % total;
                         */
                        div = div64_u64_rem(rtime, total, &rem);
                        res += div * stime;
                        rtime = rem;
                        continue;
                }

                /* drop precision */
                rtime >>= 1;
                total >>= 1;
                if (!total)
                        return res;
        }

        return res + div64_u64(stime * rtime, total);
}
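
Not part of the patch: below is a minimal user-space sketch (the input values
are made up, and it relies on the compiler providing unsigned __int128) that
checks the decomposition used in the comment above against a 128-bit
reference, in case anyone wants to convince themselves it is exact whenever
rtime >= total:

        (rtime * stime) / total ==
                (rtime / total) * stime + (rtime % total) * stime / total

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* arbitrary values with stime < total < rtime, as in cputime_adjust() */
        uint64_t stime = 33554432ULL;          /* 2^25 */
        uint64_t total = 3089904164ULL;        /* ~2^31.5 */
        uint64_t rtime = 12656247455744ULL;    /* total * 4096, ~2^43.5 */

        /* 128-bit reference: rtime * stime would overflow u64, so widen first */
        unsigned __int128 exact = ((unsigned __int128)rtime * stime) / total;

        /*
         * The decomposition: the first product is widened to 128 bits here
         * only to keep the comparison honest; in this example both terms
         * actually fit in 64 bits.
         */
        unsigned __int128 split = (unsigned __int128)(rtime / total) * stime +
                                  (rtime % total) * stime / total;

        printf("exact = %llu, split = %llu, %s\n",
               (unsigned long long)exact, (unsigned long long)split,
               exact == split ? "equal" : "DIFFER");
        return 0;
}

With rtime = q * total + r, rtime * stime = q * total * stime + r * stime,
and the first term is an exact multiple of total, so dividing by total gives
q * stime + (r * stime) / total with no rounding lost between the two terms.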