Message-Id: <1236876609.5090.934.camel@laptop>
Date: Thu, 12 Mar 2009 17:50:09 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Timur Tabi <timur@...escale.com>
Cc: Grant Likely <grant.likely@...retlab.ca>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Alan Cox <alan@...rguk.ukuu.org.uk>,
linux-kernel@...r.kernel.org, rdreier@...co.com,
jirislaby@...il.com, will.newton@...il.com, hancockrwd@...il.com,
jeremy@...p.org
Subject: Re: [PATCH v4] introduce macro spin_event_timeout()
On Thu, 2009-03-12 at 11:19 -0500, Timur Tabi wrote:
> Grant Likely wrote:
>
> >   #define spin_until_timeout(condition, timeout, rc)		\
> >   	for (unsigned long __timeout = jiffies + (timeout);	\
> >   	     !(rc = (condition)) && time_before(jiffies, __timeout); )
>
> Ooo, that's good.
>
> I'm still not crazy about using jiffies, since it doesn't get
> incremented when interrupts are disabled, and I'd like this function to
> work in those cases. How about get_cycles()? I know it's not supported
> on all platforms, but I'm willing to live with that.
>
> The other problem with get_cycles() is that there doesn't appear to be a
> num_cycles_per_usec() function, so there's no way for me to scale the
> count to a fixed time period.
sched_clock() does that, but:
- it falls back to jiffies on poor platforms
- it must be called with IRQs disabled
- it can basically jump any random way on funky hardware
then there is cpu_clock(int cpu):
- still falls back to jiffies on poor platforms
- is monotonic when used on the same cpu
- can drift up to a few jiffies when used between cpus
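For completeness, a cpu_clock() based variant could look something like
this (completely untested sketch; spin_until_timeout_ns is a made-up
name, cpu_clock() lives in <linux/sched.h> and returns nanoseconds):

#define spin_until_timeout_ns(condition, timeout_ns, rc)		\
	for (u64 __start = cpu_clock(raw_smp_processor_id());		\
	     !(rc = (condition)) &&					\
	     cpu_clock(raw_smp_processor_id()) - __start < (timeout_ns); )

Keep the caveats above in mind though: if the loop migrates between
cpus, the two readings can drift by a few jiffies, so this is no good
for tight tolerances.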
But something that seems to always work is to simply count loops and
rely on whatever delay is in the loop body.
#define spin_until_timeout(condition, timeout, rc)			\
	for (unsigned long __timeout = 0;				\
	     !(rc = (condition)) && __timeout < (timeout);		\
	     __timeout++)
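Usage would then put the delay in the loop body, e.g. (made-up device
with a DONE bit in a status register, untested):

	int rc;

	/* each iteration delays ~1us, so a timeout of 1000 is roughly 1ms */
	spin_until_timeout(in_be32(&priv->regs->status) & STATUS_DONE,
			   1000, rc)
		udelay(1);

	if (!rc)
		return -ETIMEDOUT;

rc ends up non-zero when the condition was met, zero when we ran out of
iterations.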