Message-ID: <1365703670.10217.10.camel@laptop>
Date: Thu, 11 Apr 2013 20:07:50 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Stanislaw Gruszka <sgruszka@...hat.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-tip-commits@...r.kernel.org
Subject: Re: [tip:sched/core] sched: Lower chances of cputime scaling overflow

On Thu, 2013-04-11 at 08:38 -0700, Linus Torvalds wrote:
> On Thu, Apr 11, 2013 at 6:45 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Tue, 2013-03-26 at 15:01 +0100, Stanislaw Gruszka wrote:
> >> Thoughts?
> >
> > Would something like the below work?
>
> Ugh, this is hard to think about, it's also fairly inefficient.
>
> > static cputime_t scale_stime(u64 stime, u64 rtime, u64 total)
> > {
> > - u64 rem, res, scaled;
> > + int stime_fls = fls64(stime);
> > + int total_fls = fls64(total);
> > + int rtime_fls = fls64(rtime);
>
> Doing "fls64()" unconditionally is quite expensive on some
> architectures,
Oh, I (wrongly, it appears) assumed that fls was something cheap :/
> and if I am not mistaken, the *common* case (by far) is
> that all these values fit in 32 bits, no?
It depends on whether we use cputime_jiffies.h or cputime_nsec.h, and
I'm completely lost as to which we default to atm. But we sure can
reduce to 32 bits in most cases without too many problems.

But that would mean fls() and shifting again for nsec-based cputime.
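
Something like the below is the sort of thing I mean by the 32-bit fast
path -- a completely untested sketch, only to illustrate the shape.
scale_stime_slow() is a made-up name for whatever reducing slow path we
end up with (e.g. the fls64() one sketched at the end of this mail);
div_u64() is the usual 64/32 -> 64 divide from linux/math64.h:

#include <linux/math64.h>	/* div_u64(): 64/32 -> 64 divide */

/*
 * Untested sketch: when stime, rtime and total all fit in 32 bits,
 * stime * rtime cannot overflow 64 bits and a single 64/32 divide
 * does the job.  Assumes total != 0.
 */
static u64 scale_stime_fast(u64 stime, u64 rtime, u64 total)
{
	/* One cheap test instead of three fls64() calls. */
	if (!((stime | rtime | total) >> 32))
		return div_u64((u64)(u32)stime * (u32)rtime, (u32)total);

	/* Otherwise drop precision first -- see the slow path sketch below. */
	return scale_stime_slow(stime, rtime, total);
}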
I'll have a better read and think about the rest of your email, but
that'll have to be tomorrow :/
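
In the meantime, for reference, one way to complete the fls64() based
reduction quoted above would be roughly the below -- again a completely
untested sketch, not the patch I posted; scale_stime_slow() is just the
illustrative name used in the fast-path sketch above:

#include <linux/bitops.h>	/* fls64() */
#include <linux/math64.h>	/* div64_u64() */

/*
 * Untested sketch: drop just enough low bits that stime * rtime can no
 * longer overflow 64 bits.  Shifting rtime and total by the same amount
 * keeps the quotient roughly intact.  Assumes total != 0 on entry.
 */
static u64 scale_stime_slow(u64 stime, u64 rtime, u64 total)
{
	int shift = fls64(stime) + fls64(rtime) - 64;

	if (shift > 0) {
		rtime >>= shift;
		total >>= shift;
		if (!total)	/* degenerate case, avoid a divide by 0 */
			total = 1;
	}

	/* Here stime * rtime < 2^64 by construction. */
	return div64_u64(stime * rtime, total);
}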