Message-ID: <CAF1ivSbsH7JsduHE+a2d+BEbZL6Fo7uJWS2NCTTMNTSAUMt0UA@mail.gmail.com>
Date: Tue, 13 Oct 2015 10:44:11 -0700
From: Ming Lin <mlin@...nel.org>
To: Christoph Hellwig <hch@...radead.org>
Cc: lkml <linux-kernel@...r.kernel.org>, Jens Axboe <axboe@...nel.dk>,
Kent Overstreet <kent.overstreet@...il.com>,
Dongsu Park <dpark@...teo.net>,
Mike Snitzer <snitzer@...hat.com>,
"Martin K. Petersen" <martin.petersen@...cle.com>,
Ming Lin <ming.l@....samsung.com>,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH v6 05/11] block: remove split code in blkdev_issue_{discard,write_same}

On Tue, Oct 13, 2015 at 4:50 AM, Christoph Hellwig <hch@...radead.org> wrote:
> On Wed, Aug 12, 2015 at 12:07:15AM -0700, Ming Lin wrote:
>> From: Ming Lin <ming.l@....samsung.com>
>>
>> The split code in blkdev_issue_{discard,write_same} can go away
>> now that any driver that cares does the split. We have to make
>> sure bio size doesn't overflow.
>>
>> For discard, we set max discard sectors to (1<<31)>>9 to ensure
>> it doesn't overflow bi_size and hopefully it is of the proper
>> granularity as long as the granularity is a power of two.
>
> This ends up breaking discard on NVMe devices for me. An mkfs.xfs
> which does a discard of the whole device now hangs the system.
> Something in here makes it send a discard command that the device
> doesn't like, and the aborts don't seem to help either, although that
> might be an issue with the abort handling in the driver.
>
> Just a heads up for now, once I get a bit more time I'll try to collect
> a blktrace to figure out how the commands sent to the driver look
> different before and after the patch.
I just did a quick test with a Samsung 900G NVMe device.
mkfs.xfs is OK on 4.3-rc5.
What's your device model? I may find a similar one to try.
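For reference, the cap described in the commit message works out like
this (a minimal sketch of the arithmetic only, not the actual patch;
discard_cap_sectors and granularity_sectors are made-up names):

	/*
	 * bio->bi_size is a 32-bit byte count, so a single bio must
	 * stay under 1<<32 bytes; the commit message caps it at 1<<31
	 * bytes, i.e. (1<<31)>>9 sectors.
	 */
	static unsigned int discard_cap_sectors(unsigned int granularity_sectors)
	{
		unsigned int max_sectors = (1U << 31) >> 9; /* 2^22 sectors = 2 GiB */

		/*
		 * 1<<31 divides evenly by any power-of-two granularity,
		 * so capped bios stay aligned in that case; rounding
		 * down covers other granularities too.
		 */
		max_sectors -= max_sectors % granularity_sectors;
		return max_sectors;
	}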