Message-ID: <20130410173219.GG21951@gmail.com>
Date: Wed, 10 Apr 2013 19:32:19 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: Stanislaw Gruszka <sgruszka@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org, hpa@...or.com, rostedt@...dmis.org,
akpm@...ux-foundation.org, tglx@...utronix.de,
linux-tip-commits@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [tip:sched/core] sched: Lower chances of cputime scaling overflow

* Frederic Weisbecker <fweisbec@...il.com> wrote:

> 2013/4/10 Ingo Molnar <mingo@...nel.org>:
> >
> > * Frederic Weisbecker <fweisbec@...il.com> wrote:
> >
> >> Of course 128-bit ops are very expensive, so to help you evaluate the
> >> situation, this is going to happen on every call to task_cputime_adjusted() and
> >> thread_group_cputime_adjusted(), namely:
> >
> > It's really only expensive for divisions. Addition and multiplication should be
> > straightforward and relatively low overhead, especially on 64-bit platforms.
>
> Ok, well we still have one division in the scaling path. I'm mostly
> worried about the thread group exit that makes use of it through
> thread_group_cputime_adjusted(). Not sure if we can avoid that.

I see, scale_stime()'s use of div64_u64_rem(), right?
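
I.e. roughly this split? (Userspace sketch only, illustrative rather than the
tip commit verbatim, assuming the scaled value is stime * rtime / total:)

	#include <stdint.h>

	/*
	 * Rewrite stime * rtime / total as stime * quot + stime * rem / total,
	 * with rtime == quot * total + rem (rem < total). The identity is
	 * exact, and for rtime >= total the partial products stay smaller
	 * than the full stime * rtime, at the cost of one extra 64-bit
	 * divide (div64_u64_rem() upstream).
	 */
	static uint64_t scale_stime_sketch(uint64_t stime, uint64_t rtime,
					   uint64_t total)
	{
		uint64_t quot = rtime / total;
		uint64_t rem  = rtime % total;

		return stime * quot + stime * rem / total;
	}

That only lowers the odds of overflow though: stime * quot can still wrap for
large enough inputs, which is presumably where a 64x64 -> 128 bit multiply
would close the hole for good.
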
I've swapped out the details already: is there a link or commit ID that explains
where we hit 64-bit multiplication overflow? It's due to accounting in nanosecs,
potentially spread out across thousands of tasks, right?

But even with nsecs, a 64-bit variable ought to be able to hold hundreds of
years' worth of runtime. How do we overflow?
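
Back of the envelope: 2^64 nsecs is ~584 years, so a single counter is fine.
I suspect it's the intermediate stime * rtime product in the scaling that
wraps: that already happens once both counters pass ~4.3 seconds (2^32 nsecs).
With purely hypothetical but entirely plausible values:

	#include <stdint.h>

	#define NSEC_PER_SEC	1000000000ULL	/* mirrors the kernel constant */

	/* Two individually tiny nanosecond counters ... */
	uint64_t stime = 5 * NSEC_PER_SEC;	/* 5 seconds of system time */
	uint64_t rtime = 5 * NSEC_PER_SEC;	/* 5 seconds of total runtime */

	/*
	 * ... whose product no longer fits:
	 * stime * rtime == 2.5e19 > 2^64 - 1 (~1.84e19), so a u64 product wraps.
	 */
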
Thanks,
Ingo