Message-ID: <fbc19fb1-5e2e-ff39-5295-f38195fb8d7c@gmail.com>
Date: Thu, 18 Apr 2019 22:33:35 +0900
From: Minwoo Im <minwoo.im.dev@...il.com>
To: Aaron Ma <aaron.ma@...onical.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
keith.busch@...el.com, axboe@...com
Subject: Re: [PATCH] nvme: determine the number of IO queues
On 4/18/19 9:52 PM, Aaron Ma wrote:
>
>
> On 4/18/19 8:13 PM, Minwoo Im wrote:
>>> Yes, the number of I/O queues is 0-based, but the driver would return an
>>> error and remove the nvme device as dead.
>>
>> IMHO, if a controller indicates an error for this set_features command, then
>> we need to figure out why the controller returned the error to the host.
>>
>> If you really want at least a single live I/O queue, the controller should
>> not return an error because, as you mentioned above, NCQA and NSQA are
>> returned as 0-based values. If an error is returned, that could mean the
>> controller is not able to provide even a single queue for I/O.
>
> I was thinking about trying to set 1 I/O queue in the driver to probe the
> NVMe device.
> If it works, at least the system can boot up for debugging instead of the
> NVMe device being removed and the kernel boot hanging at loading the rootfs.
If the controller returns an error for that command, how can we be sure that
the controller supports a single I/O queue?
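
For the record, here is only a rough standalone C sketch of the fallback being
described, not the actual driver code; set_feature_num_queues() is a
hypothetical stand-in for the Set Features (Number of Queues) command:

#include <stdio.h>

/*
 * Hypothetical stand-in for the Set Features (Number of Queues) command:
 * returns 0 on success and fills *nsqa/*ncqa with the 0-based counts the
 * controller allocated, or a negative value if the controller rejects it.
 */
static int set_feature_num_queues(int requested, int *nsqa, int *ncqa)
{
	(void)requested;
	(void)nsqa;
	(void)ncqa;
	return -1;		/* model a controller that always errors out */
}

/*
 * Sketch of the proposed fallback: if the controller rejects the requested
 * count, retry with a single I/O queue; if that also fails, fall back to
 * 0 I/O queues (admin queue only) instead of marking the device dead, so
 * the system can still boot for debugging.
 */
static int determine_io_queues(int requested)
{
	int nsqa, ncqa;

	if (set_feature_num_queues(requested, &nsqa, &ncqa) == 0)
		return (nsqa < ncqa ? nsqa : ncqa) + 1;	/* 0-based -> count */

	if (set_feature_num_queues(1, &nsqa, &ncqa) == 0)
		return 1;

	return 0;	/* admin queue only, no I/O queues */
}

int main(void)
{
	printf("I/O queues: %d\n", determine_io_queues(32));
	return 0;
}

With the always-failing stub above this prints "I/O queues: 0", i.e. the
admin-only fallback mentioned below, rather than removing the device.
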
>
> If you are still concerned about using 1 I/O queue, I can instead set it as
> *count = 0;
>
> At least then we have tried every count, and the NVMe device still failed to
> respond.
>
> Regards,
> Aaron
>
>>
>> Thanks,
>> Minwoo Im