Message-ID: <20241018051410.GE19831@lst.de>
Date: Fri, 18 Oct 2024 07:14:10 +0200
From: Christoph Hellwig <hch@....de>
To: Keith Busch <kbusch@...nel.org>
Cc: Abhishek Bapat <abhishekbapat@...gle.com>, Jens Axboe <axboe@...nel.dk>,
	Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
	Prashant Malani <pmalani@...gle.com>,
	linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-sysfs: display max_hw_sectors_kb without
 requiring namespaces

On Thu, Oct 17, 2024 at 10:40:36AM -0600, Keith Busch wrote:
> On Wed, Oct 16, 2024 at 09:31:08PM +0000, Abhishek Bapat wrote:
> > max_hw_sectors based on DMA optimized limitation") introduced a
> > limitation on the value of max_hw_sectors_kb, restricting it to 128KiB
> > (MDTS = 5). This restriction was implemented to mitigate lockups
> > encountered in high-core count AMD servers.
> 
> There are other limits that can constrain transfer sizes below the
> device's MDTS. For example, the driver can only preallocate so much
> space for DMA and SGL descriptors, so 8MB is the current max transfer
> size the driver can support, and a device's MDTS can be much bigger
> than that.

Yes.  Plus the virt boundary for PRPs, and for non-PCIe transfers
there are also plenty of other hardware limits, e.g. from the FC HBA
and the RDMA HCA.  There's also been some talk of a new PCIe SGL
variant with hard limits.
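
To make that concrete, here is a rough sketch (purely illustrative, not
the in-tree nvme code; the helper name and the example numbers are made
up) of how the effective per-command limit ends up as the minimum of
the MDTS-derived value, the driver's descriptor preallocation cap (the
8MB case above), the DMA-optimized clamp (the 128KiB case), and any
transport limit such as an FC HBA or RDMA HCA cap:

#include <stdio.h>
#include <stdint.h>

/*
 * Illustrative only: the effective transfer limit is the smallest of
 * several independent caps.
 */
static uint32_t effective_max_sectors(uint32_t mdts_sectors,       /* from the controller's MDTS */
                                      uint32_t driver_cap_sectors, /* e.g. 8MB of preallocated descriptors */
                                      uint32_t dma_opt_sectors,    /* e.g. the 128KiB DMA-optimized clamp */
                                      uint32_t transport_sectors)  /* FC HBA / RDMA HCA limit, 0 if none */
{
	uint32_t limit = mdts_sectors;

	if (driver_cap_sectors && driver_cap_sectors < limit)
		limit = driver_cap_sectors;
	if (dma_opt_sectors && dma_opt_sectors < limit)
		limit = dma_opt_sectors;
	if (transport_sectors && transport_sectors < limit)
		limit = transport_sectors;
	return limit;
}

int main(void)
{
	/*
	 * A device advertising a 1MB MDTS (2048 512-byte sectors) still ends
	 * up at the 128KiB (256-sector) DMA-optimized clamp; the 8MB driver
	 * cap is 16384 sectors.  No extra transport limit in this example.
	 */
	uint32_t sectors = effective_max_sectors(2048, 16384, 256, 0);

	printf("effective max_hw_sectors_kb: %u\n", sectors / 2);
	return 0;
}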

So I agree that exposing limits on I/O would be very useful, but it's
also kinda non-trivial.

> Anyway, yeah, I guess having a controller-generic way to export this
> sounds like a good idea, but I wonder if the nvme driver is the right
> place to do it. The request_queue has all the limits you need to know
> about, but these are only exported if a gendisk is attached to it.
> Maybe we can create a queue subdirectory for the char dev too.

If we want it controller-wide, e.g. to include the admin queue, the
gendisk won't really help, unfortunately.
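
For illustration of the sysfs gap being discussed, a small userspace
sketch (assumptions: nvme0n1 exists as a gendisk; the path under
/sys/class/nvme/nvme0/queue is hypothetical and only stands in for the
proposed queue subdirectory on the char dev):

#include <stdio.h>

/* Read a single decimal value from a sysfs file; -1 if unavailable. */
static long read_limit(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* Exists today, but only while a namespace (and thus a gendisk) is attached. */
	printf("gendisk:  %ld\n",
	       read_limit("/sys/block/nvme0n1/queue/max_hw_sectors_kb"));

	/* Hypothetical path, standing in for a queue dir on the char dev. */
	printf("char dev: %ld\n",
	       read_limit("/sys/class/nvme/nvme0/queue/max_hw_sectors_kb"));
	return 0;
}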
