Message-Id: <20071113124656.3c333bdd.akpm@linux-foundation.org>
Date: Tue, 13 Nov 2007 12:46:56 -0800
From: Andrew Morton <akpm@...ux-foundation.org>
To: David Brownell <david-b@...bell.net>
Cc: Ingo Molnar <mingo@...e.hu>,
Linux Kernel list <linux-kernel@...r.kernel.org>,
Florian Fainelli <florian.fainelli@...ecomint.eu>,
Haavard Skinnemoen <hskinnemoen@...el.com>,
Nick Piggin <nickpiggin@...oo.com.au>
Subject: Re: [patch 2.6.24-rc2 1/3] generic gpio -- gpio_chip support
On Tue, 13 Nov 2007 11:22:45 -0800 David Brownell <david-b@...bell.net> wrote:
> On Tuesday 13 November 2007, Ingo Molnar wrote:
> >
> > * David Brownell <david-b@...bell.net> wrote:
> >
> > > > > I speculate that either the design has changed (without fanfare),
> > > > > or else that stuff is in RT kernels and has not yet gone upstream.
> > > >
> > > > Well whatever. We shouldn't have to resort to caller-side party
> > > > tricks like this to get acceptable performance.
> > >
> > > I'd be happy if, as originally presented, it were possible to just
> > > pass a raw_spinlock_t to spin_lock_irqsave() and friends.
> >
> > that's a spinlock type abstraction of PREEMPT_RT, not of mainline.
>
> Any reason that stuff shouldn't move into mainline?
>
>
> > Why do you want to use raw_spinlock_t?
>
> Already answered elsewhere in this thread ...
Can't say I really understood the answer. I don't think we actually know
where all of this extra time is being spent.
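
For reference, a minimal sketch of the distinction being discussed, using
the raw_spin_* names mainline eventually grew (this thread predates those
names, so treat them and the gpio_poke() helper as illustrative, not as
the API David was proposing).  A raw_spinlock_t remains a true spinning
lock even on PREEMPT_RT, where plain spinlock_t becomes a sleeping lock:

#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical per-driver lock; never becomes a preemption point. */
static DEFINE_RAW_SPINLOCK(gpio_lock);

static void gpio_poke(void __iomem *reg, u32 bits)
{
	unsigned long flags;

	/* IRQs (and thus preemption) are off only across the write. */
	raw_spin_lock_irqsave(&gpio_lock, flags);
	writel(bits, reg);
	raw_spin_unlock_irqrestore(&gpio_lock, flags);
}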
> I'll highlight the
> point that such bitops shouldn't be preemption points.
Disagree. *everything* should be a preemption point. For
internal implementation details we do need to disable preemption sometimes
(to prevent deadlocks and to protect per-cpu resources). But those
preemption-off periods should be minimised.
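
To make that concrete: a hypothetical gpio bank set-value path can keep
the preemption/IRQs-off window down to just the read-modify-write of the
data register, computing the mask beforehand.  All names and the register
layout here are assumptions for illustration, not code from the patch
under discussion:

#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(bank_lock);

static void bank_set_value(void __iomem *reg, unsigned int offset, int value)
{
	u32 mask = BIT(offset);	/* computed before taking the lock */
	unsigned long flags;
	u32 bits;

	/* Preemption and IRQs are disabled only for the RMW itself. */
	spin_lock_irqsave(&bank_lock, flags);
	bits = readl(reg);
	if (value)
		bits |= mask;
	else
		bits &= ~mask;
	writel(bits, reg);
	spin_unlock_irqrestore(&bank_lock, flags);
}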