Message-ID: <617E1C2C70743745A92448908E030B2A023EB8FB@scsmsx411.amr.corp.intel.com>
Date: Fri, 24 Aug 2007 09:19:47 -0700
From: "Luck, Tony" <tony.luck@...el.com>
To: "Denys Vlasenko" <vda.linux@...glemail.com>,
"Satyam Sharma" <satyam@...radead.org>
Cc: "Heiko Carstens" <heiko.carstens@...ibm.com>,
"Herbert Xu" <herbert@...dor.apana.org.au>,
"Chris Snook" <csnook@...hat.com>, <clameter@....com>,
"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>,
<linux-arch@...r.kernel.org>,
"Linus Torvalds" <torvalds@...ux-foundation.org>,
<netdev@...r.kernel.org>,
"Andrew Morton" <akpm@...ux-foundation.org>, <ak@...e.de>,
<davem@...emloft.net>, <schwidefsky@...ibm.com>,
<wensong@...ux-vs.org>, <horms@...ge.net.au>,
<wjiang@...ilience.com>, <cfriesen@...tel.com>, <zlynx@....org>,
<rpjday@...dspring.com>, <jesper.juhl@...il.com>,
<segher@...nel.crashing.org>
Subject: RE: [PATCH] i386: Fix a couple busy loops in mach_wakecpu.h:wait_for_init_deassert()
>> static inline void wait_for_init_deassert(atomic_t *deassert)
>> {
>> - while (!atomic_read(deassert));
>> + while (!atomic_read(deassert))
>> + cpu_relax();
>> return;
>> }
>
> For less-than-briliant people like me, it's totally non-obvious that
> cpu_relax() is needed for correctness here, not just to make P4 happy.
Not just P4 ... there are other multi-threaded cpus where it is useful
to let the core know that this is a busy loop, so that other hardware
threads can be given priority.
Even on a non-threaded cpu the cpu_relax() could be useful in the
future as a hint that the cpu could drop into a lower-power state
while spinning.
But I agree with your main point that the loop without the cpu_relax()
looks like it ought to work because atomic_read() ought to actually
go out and read memory each time around the loop.
-Tony