Message-ID: <20190718145549.GA22902@redhat.com>
Date: Thu, 18 Jul 2019 16:55:49 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Cc: Andrew Fox <afox@...hat.com>,
Stephen Johnston <sjohnsto@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/cputime: make scale_stime() more precise
On 07/18, Oleg Nesterov wrote:
>
> + * NOTE! currently the only user is cputime_adjust() and thus
> + *
> + * stime < total && rtime > total
> + *
> + * this means that the end result is always precise and the additional
> + * div64_u64_rem() inside the main loop is called at most once.
Ah, I just noticed that the comment is not 100% correct... in theory we
can drop the precision and even call div64_u64_rem() more than once, but
this can only happen if stime or total = stime + utime is "really" huge;
I don't think this can happen in practice...
We can probably just do
static u64 scale_stime(u64 stime, u64 rtime, u64 total)
{
	u64 res = 0, div, rem;

	if (ilog2(stime) + ilog2(rtime) > 62) {
		div = div64_u64_rem(rtime, total, &rem);
		res += div * stime;
		rtime = rem;

		int shift = ilog2(stime) + ilog2(rtime) - 62;
		if (shift > 0) {
			rtime >>= shift;
			total >>= shift;
			if (!total)
				return res;
		}
	}

	return res + div64_u64(stime * rtime, total);
}
but this way the code looks less generic.
Oleg.