Message-ID: <20080118011542.GQ155259@sgi.com>
Date: Fri, 18 Jan 2008 12:15:42 +1100
From: David Chinner <dgc@....com>
To: Valerie Henson <val@...consulting.com>
Cc: linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
linux-kernel@...r.kernel.org, "Theodore Ts'o" <tytso@....edu>,
Andreas Dilger <adilger@...sterfs.com>,
Ric Wheeler <ric@....com>
Subject: Re: [RFC] Parallelize IO for e2fsck
On Wed, Jan 16, 2008 at 01:30:43PM -0800, Valerie Henson wrote:
> Hi y'all,
>
> This is a request for comments on the rewrite of the e2fsck IO
> parallelization patches I sent out a few months ago. The mechanism is
> totally different. Previously IO was parallelized by issuing IOs from
> multiple threads; now a single thread issues fadvise(WILLNEED) and
> then uses read() to complete the IO.
Interesting.
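(For readers following along: the trick boils down to something like
the sketch below. This assumes posix_fadvise(2) is the interface in
question; fd, offsets and process_block() are made-up names for
illustration, not taken from the actual patches.)

#define _XOPEN_SOURCE 600	/* for posix_fadvise() */

#include <sys/types.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

extern void process_block(const char *buf, off_t off); /* hypothetical */

static void read_blocks(int fd, const off_t *offsets, int nblocks,
                        size_t blocksize)
{
        char *buf = malloc(blocksize);
        int i;

        if (!buf)
                return;

        /* First pass: hand the kernel the whole I/O pattern up front. */
        for (i = 0; i < nblocks; i++)
                posix_fadvise(fd, offsets[i], blocksize,
                              POSIX_FADV_WILLNEED);

        /* Second pass: the reads now (mostly) complete from page cache. */
        for (i = 0; i < nblocks; i++) {
                if (pread(fd, buf, blocksize, offsets[i]) ==
                    (ssize_t)blocksize)
                        process_block(buf, offsets[i]);
        }

        free(buf);
}

The win is that the WILLNEED pass queues all the readahead
asynchronously, so the elevator can sort and merge the requests
before the blocking read()s come along.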
We ultimately rejected a similar patch to xfs_repair (pre-populating
the kernel block device cache) mainly because of performance problems
under low memory, and because it doesn't really let you do anything
particularly smart to optimise I/O patterns for large,
high-performance RAID arrays.
The low memory problems were particularly bad; the readahead
thrashing caused a 2-3x slowdown compared to the baseline, and often
it was because the repair process needed all of memory to cache
stuff it would use later. IIRC, multi-terabyte ext3 filesystems have
similar memory usage problems to XFS, so there's a good chance that
this patch will see the same sorts of issues.
> Single disk performance doesn't change, but elapsed time drops by
> about 50% on a big RAID-5 box. Passes 1 and 2 are parallelized. Pass
> 5 is left as an exercise for the reader.
Promising results, though....
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group