Message-ID: <20170216215242.GA5138@atrey.karlin.mff.cuni.cz>
Date: Thu, 16 Feb 2017 22:52:43 +0100
From: Pavel Machek <pavel@....cz>
To: Russell King <rmk+kernel@...linux.org.uk>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
David Riley <davidriley@...omium.org>,
John Stultz <john.stultz@...aro.org>
Subject: Re: [PATCH] Add explanation of udelay() inaccuracy
Hi!
> +++ b/include/linux/delay.h
> @@ -5,6 +5,17 @@
> * Copyright (C) 1993 Linus Torvalds
> *
> * Delay routines, using a pre-computed "loops_per_jiffy" value.
> + *
> + * Please note that ndelay(), udelay() and mdelay() may return early for
> + * several reasons:
> + * 1. computed loops_per_jiffy too low (due to the time taken to
> + * execute the timer interrupt.)
> + * 2. cache behaviour affecting the time it takes to execute the
> + * loop function.
> + * 3. CPU clock rate changes.
> + *
Hmm. Formulated like this, it would mean that udelay(100) can return
after only 10 usec (because of clock rate changes). No way can drivers
work reliably in that case.
Can we formulate something more useful? We don't want driver writers
to delay 10 times longer "just for cpufreq", right?
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html