Message-ID: <20130411080432.GA1380@redhat.com>
Date:	Thu, 11 Apr 2013 10:04:33 +0200
From:	Stanislaw Gruszka <sgruszka@...hat.com>
To:	Ingo Molnar <mingo@...nel.org>
Cc:	Frederic Weisbecker <fweisbec@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	linux-kernel@...r.kernel.org, hpa@...or.com, rostedt@...dmis.org,
	akpm@...ux-foundation.org, tglx@...utronix.de,
	linux-tip-commits@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [tip:sched/core] sched: Lower chances of cputime scaling overflow

On Wed, Apr 10, 2013 at 07:32:19PM +0200, Ingo Molnar wrote:
> * Frederic Weisbecker <fweisbec@...il.com> wrote:
> 
> > 2013/4/10 Ingo Molnar <mingo@...nel.org>:
> > >
> > > * Frederic Weisbecker <fweisbec@...il.com> wrote:
> > >
> > >> Of course 128-bit ops are very expensive, so to help you evaluate the
> > >> situation: this is going to happen on every call to task_cputime_adjusted()
> > >> and thread_group_cputime_adjusted(), namely:
> > >
> > > It's really only expensive for divisions. Addition and multiplication should be
> > > straightforward and relatively low overhead, especially on 64-bit platforms.
> > 
> > Ok, well we still have one division in the scaling path. I'm mostly
> > worried about the thread group exit that makes use of it through
> > threadgroup_cputime_adjusted(). Not sure if we can avoid that.
> 
> I see, scale_stime()'s use of div64_u64_rem(), right?
> 
> I swapped out the details already; is there a link or commit ID that explains
> where we hit 64-bit multiplication overflow? It's due to accounting in nanosecs,

No, the values are converted to jiffies and then multiplied. CONFIG_HZ=1000
makes the issue happen earlier than CONFIG_HZ=100 does.

> spread out across thousands of tasks potentially, right?

Thousands of tasks (in one process) running on a machine with thousands of
CPUs will make the problem reproducible within hours or maybe minutes.
 
> But even with nsecs, a 64-bit variable ought to be able to hold hundreds of years 
> worth of runtime. How do we overflow?

We compute rtime * stime. If the process utilizes 100% CPU we have
stime == rtime, and the product overflows 64 bits once
rtime >= sqrt(0xffffffffffffffff) = 2^32 jiffies, i.e. rtime >= ~49 days
assuming CONFIG_HZ=1000. With 50 threads running on a machine with more
than 50 CPUs, that comes down to about 1 day. In a real application stime
is never equal to rtime, but it can be near half of it, so the problem is
still easily achievable.
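
A back-of-the-envelope check of that bound (my own illustration, not
kernel code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* rtime * stime overflows u64 once both factors exceed
	 * sqrt(2^64) = 2^32 jiffies. */
	uint64_t limit_jiffies = 1ULL << 32;
	uint64_t hz = 1000;	/* CONFIG_HZ=1000 */

	printf("%llu days\n",
	       (unsigned long long)(limit_jiffies / hz / 86400));
	/* prints "49 days"; 50 threads accumulate jiffies 50x faster,
	 * hence ~1 day on a big enough machine */
	return 0;
}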

Using the quotient and remainder makes the problem less reproducible, but
it still happens, depending on the rtime / total ratio.
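
To make that failure mode concrete, here is a userspace sketch of the
quotient+remainder idea (my own simplification, not the exact kernel
scale_stime(); the name scale_stime_sketch is mine, and it assumes
rtime >= total):

#include <stdint.h>

/*
 * Quotient+remainder scaling:
 *
 *   stime * rtime / total == stime * (rtime / total)
 *                            + stime * (rtime % total) / total
 *
 * The first product is now small, but stime * (rtime % total) can
 * still overflow 64 bits when the remainder is large relative to
 * total, which is what the generated test values below trigger.
 */
static uint64_t scale_stime_sketch(uint64_t stime, uint64_t rtime,
				   uint64_t total)
{
	uint64_t quot = rtime / total;
	uint64_t rem  = rtime % total;

	return stime * quot + stime * rem / total;
}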

I wrote a user space program that performs the same calculations as the
kernel, and a python script that generates stime/rtime/total values (both
attached). It shows the problem is still reproducible. For example:

FAIL!
rtime: 25386774344 <- 293 days (one thread, HZ=1000)
total: 27958813690
stime: 8387644107
kernel: 8275815093 <- kernel value
python: 7616032303 <- correct value

FAIL!
rtime: 16877346691 <- 195 days (one thread, HZ=1000)
total: 12215365298
stime: 4886146119
kernel: 5240812402 <- kernel value
python: 6750938676 <- correct value
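
For what it's worth, the "correct" values above can be checked in
userspace with a 128-bit intermediate (gcc's unsigned __int128;
illustration only):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* First FAIL case from above */
	uint64_t rtime = 25386774344ULL;
	uint64_t total = 27958813690ULL;
	uint64_t stime =  8387644107ULL;

	/* the 128-bit product cannot overflow here */
	unsigned __int128 prod = (unsigned __int128)stime * rtime;
	printf("%llu\n", (unsigned long long)(prod / total));
	/* prints 7616032303, matching the python value */
	return 0;
}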

Stanislaw


Attachment: "scale_stime.c" (text/plain, 1029 bytes)

Attachment: "scale_stime_test.py" (text/plain, 890 bytes)
