Message-ID: <20090812161838.GL12579@kernel.dk>
Date:	Wed, 12 Aug 2009 18:18:39 +0200
From:	Jens Axboe <jens.axboe@...cle.com>
To:	Christoph Hellwig <hch@...radead.org>
Cc:	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	chris.mason@...cle.com, david@...morbit.com,
	akpm@...ux-foundation.org, jack@...e.cz,
	yanmin_zhang@...ux.intel.com, richard@....demon.co.uk,
	damien.wyart@...e.fr, fweisbec@...il.com, Alan.Brunelle@...com
Subject: Re: [PATCH 1/9] writeback: move dirty inodes from super_block to
	backing_dev_info

On Wed, Aug 12 2009, Jens Axboe wrote:
> On Thu, Aug 06 2009, Christoph Hellwig wrote:
> > On Thu, Jul 30, 2009 at 11:23:56PM +0200, Jens Axboe wrote:
> > > This is a first step toward introducing per-bdi flusher threads. There
> > > should be no change in behaviour, although sb_has_dirty_inodes() is now
> > > ridiculously expensive, as there's no easy way to answer that question.
> > > Not a huge problem, since it'll be deleted in subsequent patches.
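
For reference, the shape of that change looks roughly like the sketch
below. It's illustrative rather than lifted from the patch -- in
particular the helper takes the bdi explicitly, since there's no sb
backpointer to it here -- but the list names match the era's code
(sb->s_dirty becoming bdi->b_dirty), and locking is the then-global
inode_lock.

/*
 * Sketch: with the dirty list moved from the super_block to the
 * backing_dev_info, inodes from every filesystem on that bdi share
 * one list.  "Does this sb have dirty inodes?" then means scanning
 * the shared list and checking ownership -- hence the expense.
 */
static int sb_has_dirty_inodes(struct backing_dev_info *bdi,
                               struct super_block *sb)
{
        struct inode *inode;
        int ret = 0;

        spin_lock(&inode_lock);
        list_for_each_entry(inode, &bdi->b_dirty, i_list) {
                if (inode->i_sb == sb) {
                        ret = 1;
                        break;
                }
        }
        spin_unlock(&inode_lock);
        return ret;
}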
> > 
> > Looking at this again and again, I don't really like this at all. What
> > is the problem with having per-bdi flusher threads that just iterate a
> > per-bdi list of superblocks and then the inodes from there? That would
> > keep a lot of the calling conventions much more logical, as we have to
> > write back data per-sb for all data-integrity and some other writes.
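
To make the suggestion concrete, it would look something like the sketch
below -- sb_list, s_bdi_link and the writeback helper are all
hypothetical names, not code from the series:

/*
 * Hypothetical alternative: each bdi keeps a list of the super_blocks
 * it backs, and the flusher walks the sbs first, then each sb's own
 * dirty list, so the per-sb calling conventions stay intact.
 */
static void bdi_flush(struct backing_dev_info *bdi)
{
        struct super_block *sb;

        list_for_each_entry(sb, &bdi->sb_list, s_bdi_link)
                writeback_sb_dirty_inodes(sb);  /* per-sb, as today */
}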
> 
> OK, so you'd prefer leaving the super block lists in place and instead
> have the super blocks hang off the bdi? What about file systems that
> support more than one block device per mount, like btrfs? Can we assume
> that they will forever provide a single bdi backing? btrfs currently
> does; I'm just wondering about the future implications.
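
What "a single bdi backing" means in practice is something like the line
below -- fs_info and its bdi member are placeholders for however a
multi-device filesystem might arrange this, not btrfs's actual layout:

/* Funnel all writeback for a multi-device fs through one private bdi
 * by pointing every inode's mapping at it. */
inode->i_mapping->backing_dev_info = &fs_info->bdi;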

Another issue with that approach is that you then need some logic to
decide which sb's list to service first, how much to write from each,
and so on. A single list is nicely time-ordered and retains our current
approach, at least at a per-sb, per-device level.
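
And to illustrate the difference (again just a sketch): with a single
per-bdi list kept in dirtied-time order, "what do we write back next?"
is simply the head of the list --

        struct inode *next = list_first_entry(&bdi->b_dirty,
                                              struct inode, i_list);

-- whereas with one dirty list per sb, the flusher needs a policy for
which sb's list to service next and for how long before moving on,
which is exactly the fairness logic the single time-ordered list
sidesteps.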

-- 
Jens Axboe
