Message-ID: <20071011205641.GA3864@ucw.cz>
Date: Thu, 11 Oct 2007 20:56:42 +0000
From: Pavel Machek <pavel@....cz>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org
Subject: Re: [patch 1/5] wait: use lock bitops for __wait_on_bit_lock
Hi!
> > Sorry, I'm just not going to apply a patch like that.
> >
> > I mean, how the heck is anyone else supposed to understand what you're up
> > to?
>
> Hmm, I might just withdraw this patch 1/5. This is generally a slowpath,
> and it's hard to guarantee that any exported user doesn't rely on the
> full barrier here (not that they would know whether they do or not, let
> alone document the fact).
>
> I think all the others are pretty clear, though? (read on if no)
...
> > There's a bit of documentation in Documentation/atomic_ops.txt
> > (probably enough, I guess) but it is totally unobvious to 98.3% of kernel
> > developers when they should use test_and_set_bit() versus
> > test_and_set_bit_lock() and it is far too much work to work out why it was
> > used in __wait_on_bit_lock(), whether it is correct, what advantages it
> > brings and whether and where others should emulate it.
>
> If you set a bit for the purpose of opening a critical section, then
> you can use this. And clear_bit_unlock to close it.
>
> The advantages are that it is faster, and the hapless driver writer
> doesn't have to butcher or forget about doing the correct
> smp_mb__before_clear_bit(), or have reviewers wondering exactly WTF
> that barrier is for, etc.
Actually, I'd expect *_lock() to be slower than *()...
> Basically, it is faster, harder to get wrong, and more self-documenting.
So I'd not call it self-documenting.
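
For reference, the pattern Nick describes would look roughly like this
(a minimal sketch only; mydev and MYDEV_BUSY are made-up names for
illustration, not anything from the patch):

#include <linux/bitops.h>
#include <linux/errno.h>

#define MYDEV_BUSY	0

struct mydev {
	unsigned long flags;
};

static int mydev_try_enter(struct mydev *dev)
{
	/* Open the critical section: set the bit with acquire semantics. */
	if (test_and_set_bit_lock(MYDEV_BUSY, &dev->flags))
		return -EBUSY;		/* already owned by someone else */
	return 0;
}

static void mydev_exit(struct mydev *dev)
{
	/*
	 * Close the critical section: clear_bit_unlock() supplies the
	 * release barrier itself, replacing the open-coded pair
	 *
	 *	smp_mb__before_clear_bit();
	 *	clear_bit(MYDEV_BUSY, &dev->flags);
	 */
	clear_bit_unlock(MYDEV_BUSY, &dev->flags);
}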
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/