Message-ID: <x491ty71dox.fsf@segfault.boston.devel.redhat.com>
Date: Wed, 12 Mar 2014 14:20:30 -0400
From: Jeff Moyer <jmoyer@...hat.com>
To: Frank Mayhar <fmayhar@...gle.com>
Cc: Jens Axboe <axboe@...nel.dk>,
linux-kernel <linux-kernel@...r.kernel.org>,
"Martin K. Petersen" <martin.petersen@...cle.com>
Subject: Re: [PATCH] block: Force sector and nr_sects to device alignment and granularity.

Frank Mayhar <fmayhar@...gle.com> writes:
> On Tue, 2014-03-11 at 11:15 -0400, Jeff Moyer wrote:
>> Frank Mayhar <fmayhar@...gle.com> writes:
>>
>> > block: Force sector and nr_sects to device alignment and granularity.
>> >
>> > In blkdev_issue_discard(), rather than sending an improperly-
>> > aligned discard to the device (where it may get an error),
>> > adjust the start and length to the block device alignment and
>> > granularity. Don't fail if this leaves nothing to discard.
>> >
>> > Without this change, certain flash drivers can report invalid
>> > trim parameters (and will fail the command). Per tytso, "given
>> > that discards are advisory, any part of the storage stack is
>> > free to drop discard requests silently."
>>
>> And how do you get here with misaligned discards?
>
> I don't understand the question.
Sorry if it wasn't clear...
> The case that we were seeing was with an SSD that required TRIM on 8k
> boundaries and with an 8k granularity. Since the file system was trying
> to do discards based on 4k alignment the driver complained mightily.
but you managed to read my mind well enough. The question is how high
up the stack do you put the logic for this? Is it worth it to duplicate
the checks in the OS that are already done on the device? I don't
know. Martin, do you have an opinion on this?
-Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/