Message-ID: <20100616211645.GC30005@fluff.org.uk>
Date: Wed, 16 Jun 2010 22:16:45 +0100
From: Ben Dooks <ben-linux@...ff.org>
To: Uwe Kleine-König <u.kleine-koenig@...gutronix.de>
Cc: Lothar Waßmann <LW@...O-electronics.de>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Jeremy Kerr <jeremy.kerr@...onical.com>,
Ben Dooks <ben-linux@...ff.org>, linux-kernel@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [RFC,PATCH 1/2] Add a common struct clk
On Mon, Jun 14, 2010 at 11:43:10AM +0200, Uwe Kleine-König wrote:
> On Mon, Jun 14, 2010 at 11:30:08AM +0200, Lothar Waßmann wrote:
> > Benjamin Herrenschmidt writes:
> > > On Mon, 2010-06-14 at 08:39 +0200, Lothar Wa?mann wrote:
> > > > All implementations so far use spin_lock_irqsave()!
> > >
> > > Nothing prevents your implementation from being a tad smarter.
> > >
> > I vote for consistency, so that device drivers can be kept arch
> > independent instead of having to care about implementation details of
> > each arch.
> Back when I implemented clock support for ns9xxx (unfortunately not in
> mainline) I tried with a spinlock first and later switched to a mutex.
> IIRC the reason was that on ns9215 enabling the rtc clock took a long
> time (I don't remember a number) and successful enabling was signaled
> by an irq. So I would have had to implement irq polling in the clock code.
Ok, you could have implemented a lock to update the state, then had
some form of wait-queue to wake up the task once the enable completed.
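
For illustration, a minimal sketch of that scheme (foo_clk,
foo_clk_hw_start and foo_clk_irq are made-up names here; only the
lock-plus-wait-queue idea is from the discussion):

#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/interrupt.h>
#include <linux/types.h>

struct foo_clk {
	spinlock_t		lock;	/* protects the hardware state */
	wait_queue_head_t	wq;	/* tasks waiting for the "ready" irq */
	bool			running;
};

/* hypothetical hardware poke that starts the clock; the "done"
 * interrupt fires some time later */
static void foo_clk_hw_start(struct foo_clk *clk);

static int foo_clk_enable(struct foo_clk *clk)
{
	unsigned long flags;

	spin_lock_irqsave(&clk->lock, flags);
	if (!clk->running)
		foo_clk_hw_start(clk);
	spin_unlock_irqrestore(&clk->lock, flags);

	/* sleep in process context instead of polling with irqs off */
	return wait_event_interruptible(clk->wq, clk->running);
}

static irqreturn_t foo_clk_irq(int irq, void *data)
{
	struct foo_clk *clk = data;

	clk->running = true;	/* hardware signalled the clock is up */
	wake_up(&clk->wq);
	return IRQ_HANDLED;
}

(spin_lock_init() and init_waitqueue_head() would be called when the
clock is registered.)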
> I think you can find different examples that make both possibilities bad.
> All in all I think that a sleeping clock implementation is preferable as
> it improves (general) latencies.
It may be that we need to do a bit of work on some of the drivers to
ensure that they don't fully re-set their clocks until they are able
to sleep.
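
For a driver that currently re-clocks from atomic context, the change
could look something like this (foo_dev, foo_reclock_work and
foo_request_reclock are hypothetical; the point is just pushing the clk
calls into process context via a work item):

#include <linux/clk.h>
#include <linux/workqueue.h>
#include <linux/kernel.h>

struct foo_dev {
	struct clk		*clk;
	unsigned long		new_rate;
	struct work_struct	reclock_work;	/* INIT_WORK()ed at probe */
};

static void foo_reclock_work(struct work_struct *work)
{
	struct foo_dev *foo = container_of(work, struct foo_dev,
					   reclock_work);

	/* process context: a mutex-based clk implementation may sleep here */
	clk_set_rate(foo->clk, foo->new_rate);
}

/* callable from atomic context, e.g. an irq handler */
static void foo_request_reclock(struct foo_dev *foo, unsigned long rate)
{
	foo->new_rate = rate;
	schedule_work(&foo->reclock_work);
}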
--
Ben (ben@...ff.org, http://www.fluff.org/)
'a smiley only costs 4 bytes'