Message-ID: <20241023052403.GC1341@lst.de>
Date: Wed, 23 Oct 2024 07:24:03 +0200
From: Christoph Hellwig <hch@....de>
To: Keith Busch <kbusch@...nel.org>
Cc: Abhishek Bapat <abhishekbapat@...gle.com>, Jens Axboe <axboe@...nel.dk>,
	Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
	Prashant Malani <pmalani@...gle.com>,
	linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-sysfs: display max_hw_sectors_kb without
 requiring namespaces

On Tue, Oct 22, 2024 at 08:53:47AM -0600, Keith Busch wrote:
> You'd may want to know max_sectors_kb, dma_alignment, nr_requests,
> virt_boundary_mask. Maybe some others.
> 
> The request_queue is owned by the block layer, so that seems like an
> okay place to export it, but attached to some other device's sysfs
> directory instead of a gendisk.
> 
> I'm just suggesting this because it doesn't sound like this is an nvme
> specific problem.

Well, it's a problem specific to passthrough without a gendisk, which is
the NVMe admin queue and the /dev/sg device.  So it's common-ish :)


Note that for programs using passthrough, sysfs isn't actually a very
good interface: finding the right directory is a pain, as is opening,
reading and parsing one ASCII file per limit.
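For illustration, the per-limit sysfs dance described above could look
like the following hypothetical helper (the function name and buffer
sizes are mine, not anything that exists in a real tool):

```c
/* Hypothetical sketch: read one queue limit from its sysfs file.
 * Each limit lives in its own ASCII file, e.g.
 * /sys/block/nvme0n1/queue/max_sectors_kb, so a passthrough tool
 * must open, read and parse one file per limit it cares about.
 */
#include <stdio.h>
#include <stdlib.h>

static long read_queue_limit(const char *sysfs_dir, const char *limit)
{
	char path[512], buf[64];
	FILE *f;
	long val;

	snprintf(path, sizeof(path), "%s/%s", sysfs_dir, limit);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	val = strtol(buf, NULL, 10);
	return val;
}
```

A caller that needs half a dozen limits repeats this open/read/parse
cycle for every one of them, which is the overhead being complained
about.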

One thing I've been wanting to do, also for mkfs tools and similar, is
a generic extensible ioctl to dump all the queue limits.  That's a lot
easier and faster for the tools and would work very well here.
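As a rough sketch of what such an ioctl's userspace-visible structure
might look like (every name here is hypothetical; no such ioctl exists
in the kernel today, and the extensibility pattern shown, where
userspace passes its structure size so new fields can be appended
later, is only one common convention):

```c
/* Hypothetical "dump all queue limits in one call" ioctl argument.
 * Userspace sets .size = sizeof(*arg); the kernel fills in as many
 * fields as both sides know about, so newer limits can be appended
 * without breaking old binaries.
 */
#include <stdint.h>
#include <stddef.h>

struct queue_limits_arg {
	uint32_t size;			/* sizeof(*arg), set by userspace */
	uint32_t dma_alignment;
	uint32_t max_sectors_kb;
	uint32_t max_hw_sectors_kb;
	uint32_t nr_requests;
	uint32_t pad;
	uint64_t virt_boundary_mask;
	/* new limits would be appended here in later kernels */
};

/* e.g. #define BLKGETLIMITS _IOWR(0x12, 140, struct queue_limits_arg) */
```

One call replaces the per-file sysfs reads, and the size field keeps
the interface extensible, which matters given the point below that new
limits can still be added at any time.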

Note that we could still be adding new limits at any point in time
(although I have a hard time thinking of a limit we don't have yet),
so we still can't guarantee that non-trivial I/O will always work.
