Message-ID: <20090409184022.GA2665@elte.hu>
Date: Thu, 9 Apr 2009 20:40:22 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Andi Kleen <andi@...stfloor.org>,
Frederic Weisbecker <fweisbec@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Jeff Mahoney <jeffm@...e.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
ReiserFS Development List <reiserfs-devel@...r.kernel.org>,
Bron Gondwana <brong@...tmail.fm>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH] reiserfs: kill-the-BKL
* Linus Torvalds <torvalds@...ux-foundation.org> wrote:
> > Better would be to use spinlocks if possible. I guess you just would
> > need to find all sleep points and wrap them with lock dropping?
>
> I do agree that a filesystem should try to avoid sleeping locks if at
> all possible, especially on the paths that the VM uses for writeback.
> But on the other hand, I think the issue with reiserfs is just the bad
> latencies that the BKL can cause, and then it doesn't matter.
The main motivator is the tip:core/kill-the-BKL tree: we are working on
removing the BKL from the whole kernel, once and for all. We are
actually quite close to that end goal: reiser3 was the last big
stumbling block, and it's great that Frederic is tackling it.
Using a mutex seems like the sane choice here. I'd advocate spinlocks
for a new filesystem any day (though even there a mutex is a fine
choice, if top-of-the-line scalability is not an issue).
But for a legacy filesystem like reiser3, which depended on the BKL
auto-dropping on schedule(), it would be rather fragile to use
spinlocks, and it would take forever to validate the result. Miss just
one codepath with some rare scheduling possibility in it and we'd have
a kernel crash down the road.
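
To make the concern concrete, here is a minimal sketch (not from
Frederic's patch - fs_mutex, fs_spinlock and sleeping_helper() are
made-up names, purely for illustration) of the three locking models:

  #include <linux/smp_lock.h>     /* lock_kernel(), the BKL */
  #include <linux/mutex.h>
  #include <linux/spinlock.h>

  extern void sleeping_helper(void);  /* stand-in for any blocking call */

  static DEFINE_MUTEX(fs_mutex);
  static DEFINE_SPINLOCK(fs_spinlock);

  static void fs_op_bkl(void)
  {
          lock_kernel();
          sleeping_helper();      /* may schedule(): the BKL is dropped
                                     automatically and retaken on wakeup */
          unlock_kernel();
  }

  static void fs_op_mutex(void)
  {
          mutex_lock(&fs_mutex);
          sleeping_helper();      /* sleeping with a mutex held is legal,
                                     nothing special to do */
          mutex_unlock(&fs_mutex);
  }

  static void fs_op_spinlock(void)
  {
          spin_lock(&fs_spinlock);
          /* ... only non-sleeping work allowed here ... */
          spin_unlock(&fs_spinlock);

          sleeping_helper();      /* every sleep point has to be found and
                                     moved outside the lock by hand - miss
                                     one and we sleep in atomic context */

          spin_lock(&fs_spinlock);
          /* ... */
          spin_unlock(&fs_spinlock);
  }

The BKL and mutex variants tolerate a blocking call in the middle; the
spinlock variant is only correct if every such call has been spotted
beforehand, which is exactly the audit that would take forever on
reiser3.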
Ingo