Message-ID: <20101119153426.GA25655@infradead.org>
Date: Fri, 19 Nov 2010 10:34:26 -0500
From: Christoph Hellwig <hch@...radead.org>
To: Mark Lord <kernel@...savvy.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Greg Freemyer <greg.freemyer@...il.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
James Bottomley <James.Bottomley@...e.de>,
Jeff Moyer <jmoyer@...hat.com>,
Matthew Wilcox <matthew@....cx>,
Josef Bacik <josef@...hat.com>,
Lukas Czerner <lczerner@...hat.com>, tytso@....edu,
linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, sandeen@...hat.com
Subject: Re: [PATCH 1/2] fs: Do not dispatch FITRIM through separate
super_operation
On Fri, Nov 19, 2010 at 10:24:52AM -0500, Mark Lord wrote:
> I wonder if this can be treated more like how SG_IO does things?
> The SG_IO mechanism seems to have no issues passing through stuff
> like this, so perhaps we could implement something in a similar fashion?
We actually used to send discards down as BLOCK_PC commands, which is
basically SG_IO-style I/O from kernelspace, but that caused a lot of
problems with SCSI error handling, so we moved away from it. And even
when you do send a BLOCK_PC UNMAP command that later gets translated
to a TRIM by libata, you run into a few issues:
- you need to get the partition offset into the I/O submitter so that
   it can be included in every range (see the first sketch below).
   That's doable, but it's quite a nasty layering violation, and it
   also prevents us from ever adding DM/MD support to that scheme.
- you'll have to allocate new memory for the TRIM payload in libata
   and switch the libata command over to it, instead of the current
   hack of reusing the zeroed page sent down with the WRITE SAME
   command (see the second sketch below). I tried to get that payload
   switching in libata to work a few times, but never managed to get
   it right.
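To illustrate the first point: an ATA DATA SET MANAGEMENT (TRIM) range
entry is 8 bytes, a 48-bit starting LBA plus a 16-bit sector count, and
the LBA is relative to the whole device. So a submitter that only knows
partition-relative sectors would have to fold the partition start into
every entry it builds. Very rough sketch, not actual kernel code and
the helper name is made up; a real version would also convert the
entry to little-endian with cpu_to_le64():

#include <stdint.h>

/*
 * Pack one TRIM range entry: bits 0-47 hold the starting LBA,
 * bits 48-63 hold the sector count.  Adding part_start_lba here is
 * exactly the layering violation described above -- the submitter
 * has to know where its partition begins on the device.
 */
static uint64_t trim_range_entry(uint64_t part_start_lba,
				 uint64_t rel_lba, uint16_t nsectors)
{
	uint64_t abs_lba = part_start_lba + rel_lba;

	return (abs_lba & 0x0000ffffffffffffULL) |
	       ((uint64_t)nsectors << 48);
}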
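And for the second point: the UNMAP parameter list carries 16-byte
big-endian block descriptors (8-byte LBA, 4-byte block count, 4 bytes
reserved), while the TRIM payload is a densely packed series of 8-byte
little-endian entries, so the translation really needs a data buffer of
its own.  A rough sketch of just the repacking, leaving out all the
actual libata command/buffer switching; the function names are made up:

#include <stdint.h>
#include <string.h>

static uint64_t get_be(const uint8_t *p, int len)
{
	uint64_t v = 0;

	while (len--)
		v = (v << 8) | *p++;
	return v;
}

/*
 * Repack SCSI UNMAP block descriptors into one 512-byte TRIM payload
 * (up to 64 entries, each 48-bit LBA + 16-bit count, little-endian).
 * Returns the number of entries packed.
 */
static int unmap_to_trim(const uint8_t *desc, int ndesc, uint8_t *trim_buf)
{
	int i, n = 0;

	memset(trim_buf, 0, 512);
	for (i = 0; i < ndesc && n < 64; i++, desc += 16) {
		uint64_t lba = get_be(desc, 8);
		uint64_t blocks = get_be(desc + 8, 4);

		/* a single TRIM entry covers at most 65535 sectors */
		while (blocks && n < 64) {
			uint16_t chunk = blocks > 0xffff ? 0xffff : blocks;
			uint64_t entry = (lba & 0x0000ffffffffffffULL) |
					 ((uint64_t)chunk << 48);
			int j;

			for (j = 0; j < 8; j++)
				trim_buf[n * 8 + j] = entry >> (8 * j);
			lba += chunk;
			blocks -= chunk;
			n++;
		}
	}
	return n;
}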