Date:	Fri, 10 Apr 2009 10:39:47 +1000
From:	Bron Gondwana <brong@...tmail.fm>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Andi Kleen <andi@...stfloor.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Frederic Weisbecker <fweisbec@...il.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Jeff Mahoney <jeffm@...e.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	ReiserFS Development List <reiserfs-devel@...r.kernel.org>,
	Bron Gondwana <brong@...tmail.fm>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Alexander Viro <viro@...iv.linux.org.uk>
Subject: Re: [PATCH] reiserfs: kill-the-BKL

On Thu, Apr 09, 2009 at 11:17:33PM +0200, Ingo Molnar wrote:
> 
> * Andi Kleen <andi@...stfloor.org> wrote:
> 
> > > Using a mutex seems like the sane choice here. I'd advocate spinlocks 
> > > for a new filesystem any day (but even there it's a fine choice to have 
> > > a mutex, if top of the line scalability is not an issue).
> > > 
> > > But for a legacy filesystem like reiser3, which depended on the BKL 
> > 
> > reiser3 is much more widely used in the user base than a lot of 
> > "non legacy" file systems. It's very likely it has significantly 
> > more users than ext4 for example. Remember that it was the default 
> > file system for a major distribution until very recently. [...]
> 
> ( Drop the condescending tone please - i very much know that SuSE 
>   installed reiser3 by default for years. It is still a legacy 
>   filesystem and no new development has gone into it for years. )

legacy (adj) — A pejorative term used in the computer industry meaning
"it works"

http://homepages.tesco.net/~J.deBoynePollard/FGA/legacy-is-not-a-pejorative.html

It's by far the best of all the currently available and stable
filesystems for our workload (big honking Cyrus IMAP boxes).

There have been bugfixes and occasional bits and pieces over
the years.  We carried a couple of local patches until they
were accepted upstream a couple of years ago.

It doesn't get much new development because, gee, "it works".
Some people like their filesystem to keep just working in a
predictable way.

I would be very concerned if people thought it was OK to break
it just because these shiny new ext4 and btrfs filesystems, which
are _NOT_YET_READY_, had replaced it.

Ta.
 
> > [...] I also got a few reiser3 fs still around, it tended to 
> > perform very well on kernel hacker workloads.
> 
> Then i am sure you must like this patch: it introduces a per 
> superblock lock, splitting up the big BKL serialization. You
> totally failed to even acknowledge that advantage, maybe you
> missed that aspect?
> 
> For example, if you have /home and / on separate reiser3 
> filesystems, you could see as much as a 200% jump in performance 
> straight away on certain workloads, on a dual-core box.
> 
> That big BKL overhead is a real reiser3 scalability problem - 
> especially on reiser3 using servers which are likely to have several 
> filesystems on the same box.

Yes - I'm certainly interested in that.
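
In rough terms, as I understand it, the idea is a mutex in each
superblock's private info instead of the one global BKL, so two
filesystems never contend with each other.  A userspace sketch
(illustrative names only, not the actual patch; pthreads standing
in for kernel mutexes):

```c
#include <pthread.h>

/* Sketch: one lock per mounted filesystem instead of one global
 * lock (the BKL).  Struct and function names are illustrative. */
struct super_block {
    pthread_mutex_t s_lock;   /* stands in for the per-sb mutex */
};

void super_block_init(struct super_block *sb)
{
    pthread_mutex_init(&sb->s_lock, NULL);
}

/* Serializes writers on THIS filesystem only; a second reiserfs
 * mount on the same box proceeds independently. */
void reiserfs_write_lock(struct super_block *sb)
{
    pthread_mutex_lock(&sb->s_lock);
}

void reiserfs_write_unlock(struct super_block *sb)
{
    pthread_mutex_unlock(&sb->s_lock);
}
```

With the BKL, holding the lock for / would block /home too; with
the per-superblock scheme both can be held at once, which is where
the multi-filesystem speedup comes from.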

That said, we have a box with 83 reiserfs partitions on it, and
it is constrained by IO (mail servers really don't need much
CPU).  Performance is pretty good even now.

So - I'm interested in this patch series, but not at the expense
of making reiserfs any less stable.  Our customers, funnily enough,
like it when our service is stable!

Thanks,

Bron.