Message-ID: <CACVXFVM-OUFv+gUnNcGt86FhS1BABV4sQxhCqPf20KKZTwNbyQ@mail.gmail.com>
Date: Mon, 9 Jun 2014 15:53:29 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Matias Bjørling <m@...rling.me>
Cc: Matthew Wilcox <willy@...ux.intel.com>,
Keith Busch <keith.busch@...el.com>,
"Sam Bradshaw (sbradshaw)" <sbradshaw@...ron.com>,
Jens Axboe <axboe@...com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-nvme <linux-nvme@...ts.infradead.org>
Subject: Re: [PATCH v6] NVMe: conversion to blk-mq
On Mon, Jun 9, 2014 at 3:50 PM, Ming Lei <tom.leiming@...il.com> wrote:
> On Mon, Jun 9, 2014 at 2:00 PM, Matias Bjørling <m@...rling.me> wrote:
>> On Mon, Jun 9, 2014 at 6:35 AM, Ming Lei <tom.leiming@...il.com> wrote:
>>> On Fri, Jun 6, 2014 at 8:20 PM, Matias Bjørling <m@...rling.me> wrote:
>>>> This converts the current NVMe driver to utilize the blk-mq layer.
>>>
>>> Looks like it can't be applied cleanly against 3.15-rc8 + Jens's for-linus
>>> branch; when I fix the conflicts manually, the failure below is triggered:
>>>
>>> [ 487.696057] nvme 0000:00:07.0: Cancelling I/O 202 QID 1
>>> [ 487.699005] nvme 0000:00:07.0: Aborting I/O 202 QID 1
>>> [ 487.704074] nvme 0000:00:07.0: Cancelling I/O 202 QID 1
>>> [ 487.717881] nvme 0000:00:07.0: Aborting I/O 202 QID 1
>>> [ 487.736093] end_request: I/O error, dev nvme0n1, sector 91532352
>>> [ 487.747378] nvme 0000:00:07.0: completed id 0 twice on queue 0
>>>
>>>
>>> when running fio randread (libaio, iodepth=64) with more than 3 jobs.
>>>
>>> And there looks to be no such failure when jobs is 1 or 2.
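>>>
>>> (FWIW, the fio job is roughly like below; the device path is just
>>> an example:)
>>>
>>>     fio --name=randread --filename=/dev/nvme0n1 --rw=randread \
>>>         --ioengine=libaio --iodepth=64 --direct=1 --numjobs=4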
>>
>> Can you try with the nvmemq_review branch at
>>
>> https://github.com/MatiasBjorling/linux-collab.git
>
> Looks like git-pull from the branch does work, so you
> might have out-of-tree patches.
After pulling from your tree, the problem still persists.
I'm testing nvme over qemu, and both the linus and next trees
work well with qemu nvme.
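
(For reference, the qemu nvme setup is roughly like below; the image
file and memory size are just examples:)

    qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \
        -drive file=nvme.img,if=none,id=nvm0 \
        -device nvme,drive=nvm0,serial=nvme-test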
Thanks,
--
Ming Lei
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/