Message-ID: <20061024135928.GB11034@melbourne.sgi.com>
Date: Tue, 24 Oct 2006 23:59:28 +1000
From: David Chinner <dgc@....com>
To: Jeff Garzik <jeff@...zik.org>
Cc: Alex Tomas <alex@...sterfs.com>, Theodore Tso <tytso@....edu>,
Jan Kara <jack@...e.cz>, linux-fsdevel@...r.kernel.org,
linux-ext4@...r.kernel.org
Subject: Re: [RFC] Ext3 online defrag
On Tue, Oct 24, 2006 at 12:14:33AM -0400, Jeff Garzik wrote:
> On Mon, Oct 23, 2006 at 06:31:40PM +0400, Alex Tomas wrote:
> > isn't it the kernel's responsibility to find/allocate target blocks?
> > wouldn't it be better to specify a desirable target group and a
> > minimal acceptable chunk of free blocks?
>
> The kernel doesn't have enough knowledge to know whether or not the
> defragger prefers one blkdev location over another.
>
> When you are trying to consolidate blocks, you must specify the
> destination as well as source blocks.
>
> Certainly, to prevent corruption and other nastiness, you must fail if
> the destination isn't available...
That's the wrong way to look at it. If you want the userspace
process to specify a location, then you should preallocate it first
before doing anything else. There is no need to clutter a simple
data mover interface with all sorts of unnecessary error handling.
Once you've separated the destination allocation from the data
mover, the mover is basically a splice copy from source to
destination, an fsync, and then an atomic swap of the blocks/extents.
Most of this code is generic, and a per-fs swap-extents vector could
easily be provided for the one bit that is not....
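To make that concrete, the userspace side of the mover might look
roughly like the sketch below. Purely illustrative -- the temp file
handling, the DEFRAG_IOC_SWAP_EXTENTS ioctl and the use of
posix_fallocate() as the preallocation step are placeholders, not a
proposed interface (error cleanup omitted):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/stat.h>

#define DEFRAG_IOC_SWAP_EXTENTS  0      /* placeholder ioctl number */

static int defrag_one_file(const char *path)
{
        char buf[65536];
        struct stat st;
        off_t left;
        ssize_t n;
        int src, dst;

        src = open(path, O_RDONLY);
        if (src < 0 || fstat(src, &st) < 0)
                return -1;

        /* 1. preallocate the destination before anything else */
        dst = open(".defrag.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
        if (dst < 0 || posix_fallocate(dst, 0, st.st_size) != 0)
                return -1;

        /* 2. dumb data mover: copy source into the preallocated space */
        for (left = st.st_size; left > 0; left -= n) {
                n = read(src, buf, sizeof(buf));
                if (n <= 0 || write(dst, buf, n) != n)
                        return -1;
        }

        /* 3. make sure the copy is on stable storage */
        if (fsync(dst) < 0)
                return -1;

        /* 4. ask the fs to atomically swap the blocks/extents */
        return ioctl(src, DEFRAG_IOC_SWAP_EXTENTS, dst);
}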
The allocation interface, OTOH, is anything but simple and is really
a filesystem-specific interface. It seems logical to me to separate
the two.
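To show the shape of that separation (names invented, nothing like
this exists in any tree -- it's only a sketch of where the
fs-specific pieces would live):

/* Sketch only: keeps the fs-specific pieces out of the generic mover. */
struct inode;                   /* kernel inode, defined elsewhere */

struct defrag_operations {
        /*
         * Filesystem-specific allocation: place the destination
         * blocks (target group, minimum contiguous chunk, etc.).
         * This is the complicated, per-fs interface.
         */
        int (*prealloc_blocks)(struct inode *dst, long long start,
                               long long len, unsigned int goal_group);

        /*
         * The one non-generic bit of the mover: atomically swap the
         * block maps / extent trees of the two inodes.
         */
        int (*swap_extents)(struct inode *src, struct inode *dst);
};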
Cheers,
Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group