[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <8a1e78e9-c064-4fce-9ab4-f2beea053d97@grimberg.me>
Date: Wed, 23 Oct 2024 12:46:21 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Christoph Hellwig <hch@....de>, Keith Busch <kbusch@...nel.org>
Cc: Abhishek Bapat <abhishekbapat@...gle.com>, Jens Axboe <axboe@...nel.dk>,
Prashant Malani <pmalani@...gle.com>, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-sysfs: display max_hw_sectors_kb without requiring
namespaces
On 23/10/2024 8:24, Christoph Hellwig wrote:
> On Tue, Oct 22, 2024 at 08:53:47AM -0600, Keith Busch wrote:
>> You'd may want to know max_sectors_kb, dma_alignment, nr_requests,
>> virt_boundary_mask. Maybe some others.
>>
>> The request_queue is owned by the block layer, so that seems like an
>> okay place to export it, but attached to some other device's sysfs
>> directory instead of a gendisk.
>>
>> I'm just suggesting this because it doesn't sound like this is an nvme
>> specific problem.
> Well, it's a problem specific to passthrough without a gendisk, which is
> the NVMe admin queue and the /dev/sg device. So it's common-ish :)
>
>
> Note that for the programs using passthrough sysfs isn't actually a very
> good interface, as finding the right directory is pain, as is opening,
> reading and parsing one ASCIII file per limit.
>
> One thing I've been wanting to do also for mkfs tools and similar is a
> generic extensible ioctl to dump all the queue limits. That's a lot
> easier and faster for the tools and would work very well here.
>
> Note that we could still be adding new limits at any point of time
> (although I have a hard time thinking what limit we don't have yet),
> so we still can't guarantee that non-trivial I/O will always work.
Makes sense to me. Although people would still like to be able to
see this value outside of an application context. We can probably
extend nvme-cli to display this info...
Powered by blists - more mailing lists