Message-ID: <20090828202913.GA18233@infradead.org>
Date:	Fri, 28 Aug 2009 16:29:13 -0400
From:	Christoph Hellwig <hch@...radead.org>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Christoph Hellwig <hch@...radead.org>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	chris.mason@...cle.com, david@...morbit.com,
	akpm@...ux-foundation.org, jack@...e.cz,
	yanmin_zhang@...ux.intel.com, richard@....demon.co.uk,
	damien.wyart@...e.fr, fweisbec@...il.com, Alan.Brunelle@...com
Subject: Re: [PATCH 1/9] writeback: move dirty inodes from super_block to
	backing_dev_info

On Wed, Aug 12, 2009 at 06:12:50PM +0200, Jens Axboe wrote:
> OK, so you'd prefer leaving the super block lists in place and rather
> have the super blocks hanging off the bdi?

That would solve the above problem.  It would also implicitly provide
increased locality by always writing batches of dirty inodes per fs.
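A rough sketch of the data-structure shape being discussed, purely for illustration (the struct layouts and helper names below are simplified stand-ins, not the actual kernel API): keep the dirty inode lists per super_block, but link each super_block onto its backing_dev_info so a flusher pass can walk the filesystems on a device and write out one fs's batch at a time.

	#include <linux/list.h>

	/* simplified stand-ins for the real structures */
	struct backing_dev_info {
		struct list_head sb_list;	/* super_blocks backed by this device */
	};

	struct super_block {
		struct list_head s_bdi_node;	/* entry on bdi->sb_list */
		struct list_head s_dirty;	/* this fs's dirty inodes, kept per-sb */
	};

	/* hypothetical helper: writes back everything on sb->s_dirty */
	void write_dirty_inodes(struct super_block *sb);

	/* one flusher pass: walk the filesystems on this bdi, one batch per fs */
	static void bdi_writeback_pass(struct backing_dev_info *bdi)
	{
		struct super_block *sb;

		list_for_each_entry(sb, &bdi->sb_list, s_bdi_node)
			write_dirty_inodes(sb);
	}

Because the walk hands the flusher one super_block's dirty list at a time, the I/O naturally comes out in per-fs batches, which is where the locality win comes from.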

> What about file systems that
> support more than one block device per mount, like btrfs?

Or XFS :)

> Can we assume
> that they will forever provide a single bdi backing? btrfs currently has
> this, just wondering about future implications.

I don't see any point in assuming things are forever.  For making progress
on this and getting it merged in .32, making that assumption is a good
one IMHO.

Now the question of what to do with a filesystem on multiple actual
backing devices is an interesting one.  What about the case of having
btrfs use just one half of two disks each?  Or the same with a "normal" fs
on top of LVM/MD?  Maybe in the end one thread(-pool) per filesystem
and not just per backing dev is the way forward, with the threads
scheduled so that they don't interfere if they operate on the same
backing dev?
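A minimal sketch of that last idea, assuming a hypothetical per-bdi lock (wb_mutex) so that flushers for filesystems sharing a backing device don't interleave their I/O; all names here (fs_flusher, wb_mutex, flush_fs_dirty_inodes) are made up for illustration and the structures are the simplified ones from the sketch above:

	#include <linux/kthread.h>
	#include <linux/mutex.h>
	#include <linux/jiffies.h>

	struct fs_flusher {
		struct super_block	*sb;	/* the filesystem this thread flushes */
		struct backing_dev_info	*bdi;	/* device backing this fs */
	};

	/* hypothetical per-filesystem flusher thread */
	static int fs_flusher_thread(void *data)
	{
		struct fs_flusher *f = data;

		while (!kthread_should_stop()) {
			/* serialize with other filesystems on the same backing device */
			mutex_lock(&f->bdi->wb_mutex);
			flush_fs_dirty_inodes(f->sb);	/* hypothetical helper */
			mutex_unlock(&f->bdi->wb_mutex);

			/* sleep between passes; the interval is arbitrary here */
			schedule_timeout_interruptible(msecs_to_jiffies(5000));
		}
		return 0;
	}

The per-bdi lock is only one way to keep the threads from interfering on a shared device; an I/O scheduler level solution or coordinating the threads through the bdi would work too.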


> 
> -- 
> Jens Axboe
> 
---end quoted text---