Message-ID: <a2f0a97e14b884dc1183db9cbdd6f4b520000ce3.camel@redhat.com>
Date:   Wed, 17 Apr 2019 20:32:04 +0300
From:   Maxim Levitsky <mlevitsk@...hat.com>
To:     Aaron Ma <aaron.ma@...onical.com>, linux-kernel@...r.kernel.org,
        linux-nvme@...ts.infradead.org, keith.busch@...el.com, axboe@...com
Subject: Re: [PATCH] nvme: determine the number of IO queues

On Wed, 2019-04-17 at 22:12 +0800, Aaron Ma wrote:
> Some controllers support only a limited number of IO queues; when the
> requested number exceeds that limit, they return an Invalid Field
> error, and the NVMe device is then removed by the driver.
> 
> Find the maximum number of IO queues that the controller supports.
> If the result is still invalid, set at least 1 IO queue to bring the
> NVMe device online.
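
For illustration, the fallback described above might look roughly like
the sketch below. nvme_set_queue_count() exists in the mainline driver,
but the helper name and the retry loop here are assumptions, not the
actual patch:

/* Minimal sketch (not the actual patch): retry Set Features
 * (Number of Queues) with progressively smaller counts, falling
 * back to a single IO queue instead of removing the controller. */
static int nvme_set_io_queues_with_fallback(struct nvme_ctrl *ctrl, int nr)
{
	while (nr > 1) {
		if (nvme_set_queue_count(ctrl, &nr) == 0 && nr > 0)
			return nr;	/* controller accepted this count */
		nr /= 2;		/* halve the request and retry */
	}
	return 1;			/* last resort: one IO queue */
}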

To be honest, a spec-compliant device should not need this.
The spec states:

"Number of I/O Completion Queues Requested (NCQR): Indicates the number of I/O
Completion
Queues requested by software. This number does not include the Admin Completion
Queue. A
minimum of one queue shall be requested, reflecting that the minimum support is
for one I/O
Completion Queue. This is a 0’s based value. The maximum value that may be
specified is 65,534
(i.e., 65,535 I/O Completion Queues). If the value specified is 65,535, the
controller should return
an error of Invalid Field in Command."


This implies that software may ask for any value up to the maximum, and
the controller must not respond with an error, but should instead
indicate in the completion how many queues it actually supports.
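
Concretely, the controller reports the allocated counts in completion
dword 0 of Set Features (Number of Queues, FID 0x07): NSQA in bits 15:0
and NCQA in bits 31:16, both 0's based. A sketch of the compliant flow,
modelled on the mainline nvme_set_queue_count() ('count' is assumed to
hold the number of IO queues we want):

	u32 q_count = (count - 1) | ((count - 1) << 16);
					/* 0's based SQ/CQ request */
	u32 result;
	int status;

	status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count,
				   NULL, 0, &result);
	if (status == 0) {
		int nsqa = result & 0xffff;	    /* SQs allocated, 0's based */
		int ncqa = (result >> 16) & 0xffff; /* CQs allocated, 0's based */

		count = min(nsqa, ncqa) + 1;	    /* back to 1's based */
	}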

Maybe it's better to add a quirk for the broken device that needs this?
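
For reference, such a quirk would follow the existing pattern in
drivers/nvme/host; the flag name and the PCI IDs below are placeholders,
not a real device:

/* Placeholder quirk bit, alongside the existing NVME_QUIRK_* flags */
#define NVME_QUIRK_LIMITED_IO_QUEUES	(1 << 7)	/* hypothetical */

/* Hypothetical entry in the driver's PCI ID table */
static const struct pci_device_id nvme_id_table[] = {
	{ PCI_DEVICE(0x1234, 0x5678),	/* placeholder vendor/device IDs */
	  .driver_data = NVME_QUIRK_LIMITED_IO_QUEUES, },
	{ 0, }
};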

Best regards,
	Maxim Levitsky
