Date:	Thu, 2 Apr 2009 18:57:53 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Stanislaw Gruszka <sgruszka@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Oleg Nesterov <oleg@...hat.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/2] itimers: periodic timers fixes


* Stanislaw Gruszka <sgruszka@...hat.com> wrote:

> Hi.
> 
> We found that the periodic timers ITIMER_PROF and ITIMER_VIRT are
> unreliable: they have a systematic timing error. For example, a
> period of 10000 us is not represented by the kernel as 10 ticks,
> but as 11 (for HZ=1000). The reason is that the frequency of the
> hardware timer can only be chosen in discrete steps, and the
> actual frequency is about 1000.152 Hz. So 10 ticks would take only
> about 9.9985 ms; since the kernel decides it must never return
> earlier than requested, it rounds the period up to 11 ticks. This
> results in a systematic multiplicative timing error of -10 %. The
> situation is even worse when an application requests a period of
> 1 tick: it gets the signal only once per two kernel ticks, not on
> every tick, giving a systematic multiplicative timing error of
> -50 %. We have a program [1] that shows the itimers' systematic
> error; results are below [2].
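
A minimal standalone sketch of the arithmetic described above (this is
not the program referenced as [1]); the HZ=1000 and 1000.152 Hz figures
are taken from the paragraph above:

	#include <stdio.h>

	int main(void)
	{
		const double hz_actual    = 1000.152;          /* assumed hardware tick rate */
		const double tick_us      = 1e6 / hz_actual;   /* one tick ~ 999.848 us */
		const double requested_us = 10000.0;           /* 10 ms itimer period */

		/* 10 ticks would expire ~1.5 us early, so the kernel rounds up. */
		int ticks = (int)(requested_us / tick_us);
		if (ticks * tick_us < requested_us)
			ticks++;        /* never expire earlier than requested */

		printf("requested %.0f us -> %d ticks = %.2f us (%+.1f%% error)\n",
		       requested_us, ticks, ticks * tick_us,
		       100.0 * (ticks * tick_us - requested_us) / requested_us);
		return 0;
	}

This prints roughly "requested 10000 us -> 11 ticks = 10998.33 us
(+10.0% error)", matching the ~10 % error described above.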
> 
> To fix the situation we wrote two patches. The first one just
> simplifies code related to itimers. The second is the fix: it
> changes the resolution of interval measurement and corrects the
> times when the signal is generated. However, this adds some
> drawbacks that I'm not sure are acceptable:
> 
> - the time between two consecutive ticks can be smaller than the
>   requested interval
> 
> - interval values returned to the user by getitimer() are not
>   rounded up
> 
> The second drawback means that applications which first call
> setitimer() and then call getitimer() to see whether the interval
> was rounded up, and correct their timings accordingly, may stop
> working. However, this can only be a problem for requested
> intervals smaller than 1/HZ, since for intervals > 1/HZ we can
> generate signals with the proper resolution.
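
A minimal userspace sketch of the setitimer()/getitimer() pattern
described above; the 10 ms interval and the choice of ITIMER_PROF are
illustrative only:

	#include <stdio.h>
	#include <sys/time.h>

	int main(void)
	{
		struct itimerval req = {
			.it_interval = { .tv_sec = 0, .tv_usec = 10000 },  /* 10 ms */
			.it_value    = { .tv_sec = 0, .tv_usec = 10000 },
		};
		struct itimerval eff;

		if (setitimer(ITIMER_PROF, &req, NULL) < 0) {
			perror("setitimer");
			return 1;
		}
		if (getitimer(ITIMER_PROF, &eff) < 0) {
			perror("getitimer");
			return 1;
		}

		/* With tick-based rounding the reported interval may be larger
		 * than requested; such applications adjust their timing to the
		 * value read back here. */
		printf("requested %ld us, kernel reports %ld us\n",
		       (long)req.it_interval.tv_usec,
		       (long)eff.it_interval.tv_usec);
		return 0;
	}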

Converting those to GTOD sampling instead of jiffies sampling is a 
worthwhile change IMO and a good concept.

The unification of ITIMER_PROF and ITIMER_VIRT is a nice observation 
and a good patch.

The second one, changing all the sampling from cputime to ktime_t, 
is nicely done too:

We could do more though; there are still a few cputime legacies 
around:

+	cputime_t cval, nval;

Couldn't all of that go over into the ktime_t space as well, phasing 
out cputime logic from the itimer code?

The user ABI is struct timeval based, so there's no need to have 
cputime anywhere. The scheduler does nanosecond-accurate stats, so 
it can be connected up there too.
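
A minimal userspace sketch of that point: if the interval is kept in
nanoseconds (as ktime_t is internally), reporting it through the
timeval-based ABI is a plain division; the helper below is illustrative
and not the kernel's own conversion code:

	#include <stdio.h>
	#include <sys/time.h>

	/* Illustrative nanoseconds-to-timeval conversion (not kernel code). */
	static struct timeval ns_to_timeval(long long ns)
	{
		struct timeval tv;

		tv.tv_sec  = ns / 1000000000LL;
		tv.tv_usec = (ns % 1000000000LL) / 1000;
		return tv;
	}

	int main(void)
	{
		long long interval_ns = 9998480;   /* e.g. 10 ticks of ~999.848 us */
		struct timeval tv = ns_to_timeval(interval_ns);

		printf("%lld ns -> %ld s %ld us\n", interval_ns,
		       (long)tv.tv_sec, (long)tv.tv_usec);
		return 0;
	}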

	Ingo
