Message-ID: <6b00d25e-fe6a-4552-9945-d6181af83137@grimberg.me>
Date: Tue, 22 Oct 2024 18:35:11 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Keith Busch <kbusch@...nel.org>, Abhishek Bapat <abhishekbapat@...gle.com>
Cc: Jens Axboe <axboe@...nel.dk>, Christoph Hellwig <hch@....de>,
 Prashant Malani <pmalani@...gle.com>, linux-nvme@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-sysfs: display max_hw_sectors_kb without requiring
 namespaces




On 22/10/2024 17:53, Keith Busch wrote:
> On Thu, Oct 17, 2024 at 02:32:18PM -0700, Abhishek Bapat wrote:
>> On Thu, Oct 17, 2024 at 9:40 AM Keith Busch <kbusch@...nel.org> wrote:
>>> On Wed, Oct 16, 2024 at 09:31:08PM +0000, Abhishek Bapat wrote:
>>>> max_hw_sectors based on DMA optimized limitation") introduced a
>>>> limitation on the value of max_hw_sectors_kb, restricting it to 128KiB
>>>> (MDTS = 5). This restriction was implemented to mitigate lockups
>>>> encountered in high-core-count AMD servers.
>>> There are other limits that can constrain transfer sizes below the
>>> device's MDTS. For example, the driver can only preallocate so much
>>> space for DMA and SGL descriptors, so 8MB is the current max transfer
>>> size the driver can support, and a device's MDTS can be much bigger
>>> than that.
>>>
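(A side note on the 128 KiB figure quoted above: per the NVMe spec, MDTS encodes the maximum transfer size as a power-of-two multiple of the minimum memory page size, CAP.MPSMIN, commonly 4 KiB. A quick sketch of the arithmetic, using the values quoted above rather than anything read from a device:)

```shell
# MDTS value from the quoted clamp; page size assumes CAP.MPSMIN = 0 (4 KiB)
mdts=5
page_size=4096

# Max transfer size = 2^MDTS * minimum memory page size
max_transfer=$(( (1 << mdts) * page_size ))
echo "$(( max_transfer / 1024 )) KiB"
```

With MDTS=5 and 4 KiB pages this prints 128 KiB, matching the clamp described above.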
>>> Anyway, yeah, I guess having a controller generic way to export this
>>> sounds like a good idea, but I wonder if the nvme driver is the right
>>> place to do it. The request_queue has all the limits you need to know
>>> about, but these are only exported if a gendisk is attached to it.
>>> Maybe we can create a queue subdirectory to the char dev too.
>> Are you suggesting that all the files from the queue subdirectory should
>> be included in the char dev (/sys/class/nvme/nvmeX/queue/)? Or that
>> just the max_hw_sectors_kb value should be shared within the queue
>> subdirectory? And if not the nvme driver, where else can this be done
>> from?
> You may want to know max_sectors_kb, dma_alignment, nr_requests,
> virt_boundary_mask. Maybe some others.
>
> The request_queue is owned by the block layer, so that seems like an
> okay place to export it, but attached to some other device's sysfs
> directory instead of a gendisk.
>
> I'm just suggesting this because it doesn't sound like this is an nvme
> specific problem.

Won't it be confusing to find a queue/ directory in the controller's nvmeX 
sysfs entry?
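(For context, the limits Keith lists are only visible today under a gendisk's queue/ directory, which exists only while a namespace is attached. A sketch of reading them; the device name and paths are illustrative, and QDIR can be overridden for testing:)

```shell
# Read the request_queue limits under discussion from a queue/ directory.
# On a real system this directory only exists while a gendisk (namespace)
# is attached, which is exactly the gap being discussed.
QDIR="${QDIR:-/sys/block/nvme0n1/queue}"
for attr in max_hw_sectors_kb max_sectors_kb dma_alignment nr_requests virt_boundary_mask; do
    f="$QDIR/$attr"
    if [ -r "$f" ]; then
        printf '%s=%s\n' "$attr" "$(cat "$f")"
    else
        printf '%s=<not exported>\n' "$attr"
    fi
done
```

A queue/ subdirectory on the char dev (/sys/class/nvme/nvmeX/queue/) would make the same attributes readable without any namespace attached.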


