Message-ID: <20101025190221.GC7206@atomide.com>
Date: Mon, 25 Oct 2010 12:02:21 -0700
From: Tony Lindgren <tony@...mide.com>
To: Ohad Ben-Cohen <ohad@...ery.com>
Cc: Kevin Hilman <khilman@...prootsystems.com>,
Balaji T K <balajitk@...com>,
"Kamat, Nishant" <nskamat@...com>, linux-omap@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
akpm@...ux-foundation.org, Greg KH <greg@...ah.com>,
Benoit Cousson <b-cousson@...com>,
Grant Likely <grant.likely@...retlab.ca>,
Hari Kanigeri <h-kanigeri2@...com>, Suman Anna <s-anna@...com>
Subject: Re: [PATCH 3/3] omap: add hwspinlock device
* Ohad Ben-Cohen <ohad@...ery.com> [101024 10:45]:
> Hi Tony,
>
> On Fri, Oct 22, 2010 at 6:56 PM, Tony Lindgren <tony@...mide.com> wrote:
> > Guys, let's try to come up with a generic interface for this instead of
> > polluting the device drivers with more omap specific interfaces.
> >
> > We really want to keep the drivers generic and platform independent.
>
> I share this concern as well.
>
> We basically have three options here:
>
> 1. Have the hwspinlock driver inside the omap folders and use pdata func ptrs
> 2. Have a generic hwspinlock framework, with a small omap-specific
> implementation
> 3. Have a misc driver which is currently omap specific, but can very
> easily be converted to a common framework
>
> I don't like (1) so much; it's a driver that has very little that is
> hardware specific (mainly just the two lines that actually access the
> hardware - the lock and the unlock), and it's growing. E.g., we will
> need to add a user-space interface to it (to allow userland IPC
> implementations), we will need to add a "virtual" locks layer to it
> that would provide more locks than the hardware currently has, etc.
>
> In addition, there seems to be general discontent about drivers
> piling up in the ARM folders, instead of having them visible in
> drivers/ so they can become useful for other platforms as well.
>
> Here's something Linus wrote about this a while back:
>
> http://www.spinics.net/lists/linux-arm-msm/msg00324.html
>
> So initially I opted for (2). I have an implementation ready - it's a
> common drivers/hwspinlock/core.c framework with a small omap-specific
> part at drivers/hwspinlock/omap_hwspinlock.c. The core has all the
> logic, while the latter, omap-specific implementation is really small
> - it basically just registers lock instances, each carrying a
> trylock/unlock/relax ops struct, with the common framework.
>
> But the lack of additional users (besides the OMAP hardware), and the
> lack of generic drivers that would use it (we have syslink, which
> currently tends to need this only from user space, i2c-omap, and
> omap's upcoming resource manager), made it look like overkill for now.
>
> So simplicity won, and (3) was eventually submitted. It exposes
> exactly the same interface as (2) would (s/omap_hwspin/hwspin/), and
> it's relatively easy to turn it back into a common framework in case
> additional users show up.
>
> But if you feel that (2) is justifiable/desirable, I would be more
> than happy to submit that version.
Yes, (2) please. I would assume there will be more users of this, and then
we (or probably me again!) won't have to deal with cleaning up the drivers
again in the future.
> > Unless somebody has better ideas, I suggest we pass a lock function
> > in the platform_data to the drivers that need it, and do the omap
> > specific nasty stuff in the platform code.
>
> Do you mean (1), passing the whole set of APIs
> (request/free/lock variants/unlock/get_id) via pdata?
>
> Or do you mean a variation of (2) with only the specific locking bits
> coming from pdata func pointers ? I guess that in this case we just
> might as well go with the full (2).
Yes, a variation of (2) where you only pass the locking function via
platform data would be best.
Regards,
Tony