Date:	Tue, 12 Mar 2013 18:52:52 +0100
From:	Frederic Weisbecker <fweisbec@...il.com>
To:	Stanislaw Gruszka <sgruszka@...hat.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Ingo Molnar <mingo@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 2/2] sched: Lower chances of cputime scaling overflow

2013/3/7 Stanislaw Gruszka <sgruszka@...hat.com>:
> On Wed, Mar 06, 2013 at 05:06:55PM +0100, Frederic Weisbecker wrote:
>> If this solution appears not to be enough in the end, we'll
>> need to partly revert to the behaviour prior to commit
>>      0cf55e1ec08bb5a22e068309e2d8ba1180ab4239
>>      ("sched, cputime: Introduce thread_group_times()")
>>
>> Back then, the scaling was done at exit() time, before adding the cputime
>> of an exiting thread to the signal struct. And then we'll need to
>> scale the live threads' cputime one by one in thread_group_cputime(). The
>> drawback may be slightly slower code at exit time.
>
> I do not see this part in the patch? What I can see is just a scaling
> algorithm change.

It's only a suggestion for how to solve the issue if this patch turns out
not to be enough. It's not yet implemented, but if we still get reports of
overflows we'll unfortunately need to go there.
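
Roughly, a purely illustrative user-space model of that fallback (nothing
below is actual kernel code and all the names are made up): each thread
scales its own cputime once at exit time and the group accumulates the
already scaled values, so only the still-running threads have to be scaled
when the group totals are read. The per-thread stime * rtime products are
much smaller than the group-wide ones, which is what should make the
overflow far less likely there.

#include <stdint.h>

struct group_times {
	uint64_t scaled_stime;	/* already-exited threads, scaled at exit */
	uint64_t scaled_utime;
};

/* would be called once per thread, at exit time */
static void group_account_exiting(struct group_times *g,
				  uint64_t stime, uint64_t utime,
				  uint64_t rtime)
{
	uint64_t total = stime + utime;

	if (total) {
		/* scale while the per-thread numbers are still small */
		g->scaled_stime += stime * rtime / total;
		g->scaled_utime += utime * rtime / total;
	} else {
		/* mirrors the current "stime = rtime" fallback */
		g->scaled_stime += rtime;
	}
}

The drawback mentioned above is exactly those extra divisions on every
thread exit.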

>
>> -static cputime_t scale_stime(cputime_t stime, cputime_t rtime, cputime_t total)
>> +static cputime_t scale_stime(u64 stime, u64 rtime, u64 total)
>>  {
>> -     u64 temp = (__force u64) rtime;
>> +     u64 rem, res, scaled;
>>
>> -     temp *= (__force u64) stime;
>> -
>> -     if (sizeof(cputime_t) == 4)
>> -             temp = div_u64(temp, (__force u32) total);
>> -     else
>> -             temp = div64_u64(temp, (__force u64) total);
>> +     if (rtime >= total) {
>> +             res = div64_u64_rem(rtime, total, &rem);
>> +             scaled = stime * res;
>> +             scaled += div64_u64(stime * rem, total);
>> +     } else {
>> +             res = div64_u64_rem(total, rtime, &rem);
>> +             scaled = div64_u64(stime, res);
>> +             scaled -= div64_u64(scaled * rem, total);
>
> These calculations are not obvious. Perhaps they should be commented,
> showing how they evolved from scaled = (rtime*stime)/total?

Yeah sure, I'll add some comments.
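
For what it's worth, here is the reasoning behind the two branches as a
standalone user-space sketch (plain / and % stand in for div64_u64_rem(),
and the names below are mine, not the kernel code):

#include <stdint.h>
#include <stdio.h>

/*
 * Model of scaled = stime * rtime / total, computed without forming
 * the full stime * rtime product.  total == 0 is handled by the
 * caller, as in the hunk quoted above.
 */
static uint64_t scale_stime_model(uint64_t stime, uint64_t rtime,
				  uint64_t total)
{
	uint64_t q, r;

	if (rtime >= total) {
		/*
		 * rtime = total * q + r, hence
		 * stime * rtime / total = stime * q + stime * r / total
		 */
		q = rtime / total;
		r = rtime % total;
		return stime * q + stime * r / total;
	}
	/*
	 * total = rtime * q + r, hence
	 * stime * rtime / total = stime / q - (stime / q) * r / total
	 */
	q = total / rtime;
	r = total % rtime;
	return stime / q - (stime / q) * r / total;
}

int main(void)
{
	/* 1000 * 3000 / 9000 = 333 (truncated) */
	printf("%llu\n",
	       (unsigned long long)scale_stime_model(1000, 3000, 9000));
	return 0;
}

Note that stime * q and stime * r can of course still overflow for extreme
inputs; as the subject says, the patch only lowers the chances.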

>
>> +     } else if (!total) {
>>               stime = rtime;
>
> I would prefer stime = rtime/2 (hence utime will be rtime/2 too), but this
> is not so important.

I can do that.
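
Concretely, I read that as something like (just sketching, not a tested
patch):

	} else if (!total) {
		/*
		 * No stime/utime accounted at all: split rtime evenly
		 * instead of attributing it all to stime, so that
		 * utime = rtime - stime ends up as rtime / 2 as well.
		 */
		stime = rtime / 2;
	}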

Thanks.
