Message-ID: <20101024211735.GB3137@amd>
Date: Mon, 25 Oct 2010 08:17:35 +1100
From: Nick Piggin <npiggin@...nel.dk>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Christoph Hellwig <hch@...radead.org>,
Nick Piggin <npiggin@...nel.dk>,
Andi Kleen <andi@...stfloor.org>,
Dave Chinner <david@...morbit.com>,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/18] fs: Introduce per-bucket inode hash locks
On Sun, Oct 24, 2010 at 05:44:24PM +0200, Thomas Gleixner wrote:
> On Tue, 19 Oct 2010, Christoph Hellwig wrote:
>
> > On Tue, Oct 19, 2010 at 06:00:57PM +1100, Nick Piggin wrote:
> > > But it is still "magic", because you don't even know whether it
> > > is a spin or sleeping lock, let alone whether it is irq or bh safe.
> > > You get far more information from seeing a bit_spin_lock(0, &hlist)
> > > call than from hlist_lock().
>
> Errm, if hlist_lock() has proper documentation then it should not be
> rocket science to figure out what it does.
Right, so the reader needs a look at the documentation and another
layer of indirection.
And it's not exactly "properly" documented. It doesn't say whether it
may turn into a sleeping lock, or whether it is allowed to be used from
irq or bh context.
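To be concrete, the wrapper only stops being magic once its definition
spells those things out, something like the following (a user-space
sketch with C11 atomics standing in for bit_spin_lock(), and with
hypothetical names, not necessarily what hlist_bl.h ends up doing):

#include <stdatomic.h>
#include <stdint.h>

struct hlist_bucket {
        _Atomic uintptr_t first;  /* bit 0 is the lock, the rest is the list pointer */
};

/*
 * hlist_lock - lock a hash bucket head
 *
 * A spinning lock on bit 0 of the head pointer: it never sleeps and it
 * is not irq- or bh-safe on its own, so callers from those contexts
 * must do their own disabling.  An -rt build may replace it with a
 * sleeping lock, in which case it must not be taken in atomic context.
 */
static inline void hlist_lock(struct hlist_bucket *b)
{
        while (atomic_fetch_or_explicit(&b->first, (uintptr_t)1,
                                        memory_order_acquire) & 1)
                ;  /* bit 0 was already set: spin until the holder clears it */
}

static inline void hlist_unlock(struct hlist_bucket *b)
{
        atomic_fetch_and_explicit(&b->first, ~(uintptr_t)1,
                                  memory_order_release);
}

A side effect of keeping "first" as an integer rather than a pointer is
that you cannot dereference it directly at all; every user has to go
through the helpers or an explicit cast.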
> And if you use bit 0 of hlist then you better have helper functions to
> access it anyway. We do that with other data types which (ab)use the
> lower two bits of pointers.
>
> > To get back a bit to the point:
> >
> > - we have a new bl_hlist structure which combines a hash list and a
> > lock embedded into the head
> > - the reason why we do it is to be able to use a bitlock
>
> And if you design that structure cleverly, then simple dereferencing of
> it (w/o casting magic) should make the compiler barf. So you are
> forced to use the helper functions.
>
> > Furthermore it allows the RT people to simply throw a mutex into the
> > head and everything keeps working without touching a single line of
> > code outside of hlist_bl.h.
>
> Yes, please use proper helper functions. Having to change code is a
> horror for RT, when we can get away with a single change in a header
> file.
>
> Aside of RT there is another advantage of being able to change the
> lock implementation at a single place: you can change it to a real
> spinlock and have lockdep coverage of that code. I fundamentally hate
> bit_spin_locks for sneaking around lockdep.
You do not want to add a bloated mutex to each inode hash bucket and
think you can just dust off your hands and walk away. You would
probably make a smaller, sanely sized auxiliary hash of locks and
protect the inode hash buckets with that.
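Roughly this shape, say (a user-space sketch, with pthread mutexes
standing in for whatever lock -rt would want; sizes and names are made
up for illustration):

#include <pthread.h>

#define INODE_HASH_BUCKETS      (1UL << 20)  /* many buckets, one pointer each */
#define HASH_LOCKS              (1UL << 10)  /* far fewer full-sized locks */

static void *inode_hash[INODE_HASH_BUCKETS];
static pthread_mutex_t hash_locks[HASH_LOCKS];

static void hash_locks_init(void)
{
        for (unsigned long i = 0; i < HASH_LOCKS; i++)
                pthread_mutex_init(&hash_locks[i], NULL);
}

/* Fold the bucket index down into the small lock table: the cost of a
 * full-sized sleeping lock is paid HASH_LOCKS times, not once per bucket. */
static pthread_mutex_t *lock_for_bucket(unsigned long bucket)
{
        return &hash_locks[bucket % HASH_LOCKS];
}

A lookup then takes lock_for_bucket(h) around the walk of
inode_hash[h], so the big table stays one pointer per bucket no matter
what the lock type is.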
So it would be wrong to just bloat hlist_bl by a factor of several times
(how big is a mutex in -rt?) without doing anything else.
Although a sane locking macro and structure like the one I had would
perfectly well allow you to switch locks in a single place just the same.
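I.e. the choice of lock lives in exactly one place in the header and
nothing outside it has to change, along these lines (again a user-space
sketch; RT_LOCKS is a made-up switch and pthreads stand in for the -rt
lock, so this is not the real hlist_bl.h):

#include <stdint.h>

#ifndef RT_LOCKS                /* default: lock packed into bit 0 of the head */

#include <stdatomic.h>

struct hlist_bl_head {
        _Atomic uintptr_t first;  /* bit 0 = lock, remaining bits = first entry */
};

static inline void hlist_bl_lock(struct hlist_bl_head *h)
{
        while (atomic_fetch_or_explicit(&h->first, (uintptr_t)1,
                                        memory_order_acquire) & 1)
                ;               /* spin */
}

static inline void hlist_bl_unlock(struct hlist_bl_head *h)
{
        atomic_fetch_and_explicit(&h->first, ~(uintptr_t)1,
                                  memory_order_release);
}

#else                           /* -rt or lockdep build: a real lock per head */

#include <pthread.h>

struct hlist_bl_head {
        void *first;
        pthread_mutex_t lock;   /* several times bigger - hence the objection above */
};

static inline void hlist_bl_lock(struct hlist_bl_head *h)
{
        pthread_mutex_lock(&h->lock);  /* heads need pthread_mutex_init() in this variant */
}

static inline void hlist_bl_unlock(struct hlist_bl_head *h)
{
        pthread_mutex_unlock(&h->lock);
}

#endif

Whether that second branch hangs the lock off the head itself or
indexes into an auxiliary table like the one above is then also a
decision confined to this single spot.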