Message-ID: <aEcJ96o3Jue_g1XM@kbusch-mbp>
Date: Mon, 9 Jun 2025 10:21:11 -0600
From: Keith Busch <kbusch@...nel.org>
To: Bitao Hu <yaoma@...ux.alibaba.com>
Cc: axboe@...nel.dk, hch@....de, sagi@...mberg.me,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
kanie@...ux.alibaba.com
Subject: Re: [PATCH] nvme: Support per-device timeout settings

On Fri, May 30, 2025 at 03:31:21PM +0800, Bitao Hu wrote:
> The current 'admin_timeout' and 'io_timeout' parameters in
> the NVMe driver are global, meaning they apply to all NVMe
> devices in the system. However, in certain scenarios, it is
> necessary to set separate timeout values for different
> types of NVMe devices.
>
> To address this requirement, we propose adding two new fields,
> 'admin_timeout' and 'io_timeout', to the sysfs interface for
> each NVMe device. By default, these values will be consistent
> with the global parameters. If a user sets these values
> individually for a specific device, the user-defined values
> will take precedence.
>
> Usage example:
> To set admin_timeout=100 and io_timeout=50 for the NVMe device nvme1,
> use the following commands:
>
> echo 100 > /sys/class/nvme/nvme1/admin_timeout
> echo 50 > /sys/class/nvme/nvme1/io_timeout

We can already modify the io timeout using the block device's attribute.
If you want 50 seconds on all nvme namespaces attached to nvme1, you
could do this today:

  echo 50000 | tee /sys/class/nvme/nvme1/nvme*n*/queue/io_timeout

We don't have a good way to do that on the admin queue, but I'm not sure
if there's a strong need for it: all the driver-initiated commands
should be very fast for any device (they're just Identify and log page
commands), so a module-wide parameter should be good enough for that.
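
For reference, the existing module-wide knobs take their values in
seconds, so something like this (a sketch; check your kernel's
parameter descriptions) covers the global case:

  modprobe nvme_core admin_timeout=120 io_timeout=60

or nvme_core.admin_timeout=120 on the kernel command line. Writing
/sys/module/nvme_core/parameters/admin_timeout at runtime should also
work, though it may only take effect for controllers set up afterwards.
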
Any long-running admin command is almost certainly coming from user
space using the passthrough interface, and you can already specify the
desired timeout for that specific command.
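
For example, with nvme-cli the passthrough commands take a per-command
timeout in milliseconds (option names from memory, so double-check your
nvme-cli version; opcode 0x80 is Format NVM, purely as an illustration
of a long-running admin command):

  nvme admin-passthru /dev/nvme1 --opcode=0x80 --namespace-id=1 \
      --cdw10=0 --timeout=600000

Under the hood that's just the timeout_ms field of struct
nvme_passthru_cmd passed to the NVME_IOCTL_ADMIN_CMD ioctl, which
overrides the default for that one command only.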