Message-ID: <20170221002708.GT21222@n2100.armlinux.org.uk>
Date:   Tue, 21 Feb 2017 00:27:08 +0000
From:   Russell King - ARM Linux <linux@...linux.org.uk>
To:     David Riley <davidriley@...omium.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        John Stultz <john.stultz@...aro.org>,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] kernel: time: Modify test_udelay to allow for 1%
 tolerance.

On Mon, Feb 20, 2017 at 03:59:08PM -0800, David Riley wrote:
> test_udelay had a tolerance of udelay() being up to 0.5% fast but
> that tolerance is insufficient for some platforms.  For ARM, the error
> is around 0.7% so increase the test to allow for up to 1% which was
> previously described as being acceptable for udelay().
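(For reference, the kind of check being relaxed looks roughly like the
sketch below.  This is illustrative only - not the actual
kernel/time/test_udelay.c source - and the function name and tolerance
parameter are made up for the example.)

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/ktime.h>

/* Illustrative sketch: time one udelay() against ktime and fail if it
 * returned more than tolerance_pct early. */
static int check_udelay_once(unsigned int usecs, unsigned int tolerance_pct)
{
	ktime_t start, end;
	s64 elapsed_ns, allowed_error_ns;

	allowed_error_ns = (s64)usecs * 1000 * tolerance_pct / 100;

	start = ktime_get();
	udelay(usecs);
	end = ktime_get();

	elapsed_ns = ktime_to_ns(ktime_sub(end, start));

	/* udelay() may legitimately run long; only being too fast
	 * (short) beyond the tolerance counts as a failure. */
	if (elapsed_ns + allowed_error_ns < (s64)usecs * 1000)
		return -EINVAL;

	return 0;
}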

I haven't measured it recently, but bear in mind that if the number of
CPU cycles it takes to service a timer interrupt increases, the %age
error also increases.

That's purely down to there being a fixed number of CPU cycles between
timer interrupts - those CPU cycles can either be spent in the udelay
loop or servicing the timer interrupt.  The bigger the number of CPU
cycles servicing the timer interrupt, the smaller the number of CPU
cycles that can be spent in udelay(), and the less accurate udelay()
becomes.

However, as CPU clock speeds increase, the number of cycles spent
servicing the timer interrupt becomes less significant.  So the %age
error is really very difficult to quantify.
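To put rough numbers on that (purely made-up figures for illustration,
not measurements from any platform): the worst-case shortfall is
roughly handler_cycles / cycles_per_tick, so the same interrupt cost
matters far less at a higher clock rate.  A trivial userspace
calculation shows the shape of it:

#include <stdio.h>

int main(void)
{
	const long hz = 100;               /* timer tick rate (HZ) */
	const long handler_cycles = 5000;  /* assumed cost of one timer IRQ */
	const long cpu_hz[] = { 100000000L, 1000000000L };

	for (int i = 0; i < 2; i++) {
		long cycles_per_tick = cpu_hz[i] / hz;
		/* cycles the delay loop never gets to spend looping */
		double worst_case_pct = 100.0 * handler_cycles / cycles_per_tick;

		printf("%5ld MHz: %ld cycles/tick, worst-case udelay error ~%.2f%%\n",
		       cpu_hz[i] / 1000000, cycles_per_tick, worst_case_pct);
	}
	return 0;
}

With those assumed numbers it comes out around 0.5% at 100MHz and
0.05% at 1GHz, which is exactly why a single figure is hard to pin
down across platforms.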

Also note that cpufreq comes into play in a major way with CPU-loops
based delays (to the point that I'd say using cpufreq without a
hardware timer based udelay is a major bug).  If cpufreq has the
ability to scale the CPU clock rate by a factor of ten, then the
CPU-loops based udelay() can change its delay by a factor of ten as
well - even though cpufreq modifies loops_per_jiffy to compensate
for the change.  (Software-based udelay computes the number of loop
iterations from loops_per_jiffy up front, and then counts the loops.
If the CPU clock rate changes mid-count, the resulting delay will be
correspondingly longer or shorter.)
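
Glossing over the fixed-point arithmetic the real arch code uses, a
CPU-loops based udelay() boils down to something like the sketch below
(illustrative only, not the real __udelay()): the iteration count is
fixed before the loop starts, so a mid-loop clock-rate change scales
every remaining iteration, and a later loops_per_jiffy update cannot
retroactively fix a delay that is already in flight.

#include <linux/compiler.h>	/* barrier() */
#include <linux/delay.h>	/* loops_per_jiffy */
#include <linux/param.h>	/* HZ */

/* Simplified sketch, not the real arch __udelay()/__const_udelay(). */
static void loops_based_udelay(unsigned long usecs)
{
	/* Iteration count chosen once, from the current calibration. */
	unsigned long loops = usecs * (loops_per_jiffy / (1000000 / HZ));

	while (loops--)
		barrier();	/* one calibrated busy-loop iteration */
}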

So, the whole thing is a can of worms... and at the end of the day
it all adds up to this: CPU-loops based udelay()s are very, very
approximate, and trying to put a figure on an acceptable tolerance
isn't going to work very well.

Notice that Linus' opinion in his email is that if udelay() is within
5%, he's not going to worry:

 If it's about 1% off, it's all fine. If somebody picked a delay value
 that is so sensitive to small errors in the delay that they notice
 that - or even notice something like 5% - then they have picked too
 short of a delay.

So, I think we should allow at least 5% error, on the grounds that
if a 5% error causes people a problem, "then they have picked too
short of a delay", as our lead penguin says.

-- 
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.
