Message-Id: <200711140645.26363.nickpiggin@yahoo.com.au>
Date: Wed, 14 Nov 2007 06:45:26 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: David Brownell <david-b@...bell.net>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...e.hu>,
Linux Kernel list <linux-kernel@...r.kernel.org>,
Florian Fainelli <florian.fainelli@...ecomint.eu>,
Haavard Skinnemoen <hskinnemoen@...el.com>
Subject: Re: [patch 2.6.24-rc2 1/3] generic gpio -- gpio_chip support
On Wednesday 14 November 2007 17:52, David Brownell wrote:
> On Tuesday 13 November 2007, Andrew Morton wrote:
> > > I'll highlight the
> > > point that such bitops shouldn't be preemption points.
> >
> > Disagree. *everything* should be a preemption point.
>
> So it's wrong that <asm-generic/bitops/atomic.h> uses the
> same calls to prevent *those* bitops from being prempted?
Upstream, all spinlocks prevent preemption. But these ones
are raw locks rather than normal locks, probably because they
are trivially an innermost and correct lock. It's not going to
be very helpful to bloat them and slow them down when lock
debugging is turned on (atomic operations are _very_ frequent).
For -rt, who knows; they probably don't even run on parisc or
sparc, so they don't really care at this point.
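
To make the point concrete: on SMP, the generic fallback in
<asm-generic/bitops/atomic.h> boils down to something like the
sketch below, i.e. a small hash of raw spinlocks taken with
interrupts off around the word being modified. This is a
simplified approximation using current lock naming; atomic_hash
and example_set_bit are made-up names, not the exact upstream
code.

#include <linux/spinlock.h>
#include <linux/cache.h>
#include <linux/bitops.h>

#define ATOMIC_HASH_SIZE 4
static raw_spinlock_t atomic_hash[ATOMIC_HASH_SIZE] = {
	[0 ... ATOMIC_HASH_SIZE - 1] = __RAW_SPIN_LOCK_UNLOCKED(atomic_hash)
};

/* Hash the word's address onto one of a few innermost raw locks. */
#define ATOMIC_HASH(a) \
	(&atomic_hash[(((unsigned long)(a)) / L1_CACHE_BYTES) & \
		      (ATOMIC_HASH_SIZE - 1)])

static inline void example_set_bit(int nr, volatile unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % BITS_PER_LONG);
	unsigned long *p = ((unsigned long *)addr) + nr / BITS_PER_LONG;
	unsigned long flags;
	raw_spinlock_t *lock = ATOMIC_HASH(p);

	/*
	 * Raw lock, interrupts off: no lockdep or preemption
	 * bookkeeping, and it nests inside anything, which is what
	 * an "atomic" primitive needs.
	 */
	raw_spin_lock_irqsave(lock, flags);
	*p |= mask;
	raw_spin_unlock_irqrestore(lock, flags);
}

The lock is innermost and held only across a single word update,
so there is nothing useful for lockdep or preemption accounting
to add there.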
> Thus, that code should switch over to normal spinlocks...
For upstream, there is little reason to switch them over, and
some reasons not to.
> I believe that if I submitted patches to do that, there'd
> be a not-so-small riot. And the arguments would all boil
> down to much the same ones applying to *these* bitops...
I don't think you have valid reasons. Also, core/arch code has
somewhat different considerations from driver code.
> > For internal-implementation
> > details we do need to disable preemption sometimes
> > (to prevent deadlocks and to protect per-cpu resources).
>
> You're certainly talking about "internal-implementation
> details" in this case. It's not like the lock is used
> outside of those routines. Or as if other implementations
> would even *need* such a lock. (Just like the IRQ table,
> if the entries can't be removed and are all set up very
> early, locking would be pointless.)
Internal implementation details, as in: spinlock code has to
disable preemption, otherwise it can deadlock; get_cpu_var()
has to disable preemption to give coherent access to the
per-CPU variable; etc.
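
For example, the two patterns reduce to something like this
rough sketch (not the exact macro bodies; example_lock,
example_counter and the two functions are made-up names):

#include <linux/spinlock.h>
#include <linux/percpu.h>

static DEFINE_SPINLOCK(example_lock);
static DEFINE_PER_CPU(int, example_counter);

static void spinlock_pattern(void)
{
	/*
	 * spin_lock() effectively does preempt_disable() first: if the
	 * holder could be preempted by another task that then spins on
	 * the same lock on this CPU, we would deadlock.
	 */
	spin_lock(&example_lock);
	/* ... critical section ... */
	spin_unlock(&example_lock);	/* preemption re-enabled here */
}

static void percpu_pattern(void)
{
	int *p;

	/*
	 * get_cpu_var() disables preemption so that "this CPU" cannot
	 * change (the task cannot migrate) while the variable is used.
	 */
	p = &get_cpu_var(example_counter);
	(*p)++;
	put_cpu_var(example_counter);	/* re-enables preemption */
}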