Message-ID: <aRz8-IB6dq5tlpae@kbusch-mbp>
Date: Tue, 18 Nov 2025 16:10:48 -0700
From: Keith Busch <kbusch@...nel.org>
To: Christoph Hellwig <hch@....de>
Cc: Leon Romanovsky <leon@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Sagi Grimberg <sagi@...mberg.me>, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-nvme@...ts.infradead.org,
Chaitanya Kulkarni <kch@...dia.com>
Subject: Re: [PATCH v2 1/2] nvme-pci: Use size_t for length fields to handle
larger sizes

On Tue, Nov 18, 2025 at 06:18:23AM +0100, Christoph Hellwig wrote:
> On Mon, Nov 17, 2025 at 12:35:40PM -0700, Keith Busch wrote:
> > > + size_t total_len;
> >
> > Changing the generic phys_vec sounds fine, but the nvme driver has an 8MB
> > limit on how large an IO can be, so I don't think the driver's
> > length needs to match the phys_vec type.
>
> With the new dma mapping interface we could lift that limit for
> SGL-based controllers, as we basically only have an nr_segments limit now.
> Not that I'm trying to argue for multi-GB I/O..
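
For reference, the change under discussion roughly amounts to widening the
generic segment length; a minimal sketch, with field names illustrative only
and not necessarily matching the upstream definition:

	#include <linux/types.h>	/* phys_addr_t, size_t */

	/*
	 * Illustrative sketch only: the generic segment descriptor with its
	 * length widened to size_t, while the nvme driver could keep a
	 * narrower per-command length since it caps transfer size anyway.
	 */
	struct phys_vec {
		phys_addr_t	paddr;	/* start of the physical segment */
		size_t		len;	/* segment length in bytes */
	};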
It's not a bad idea. The tricky part is the timeout handling: if we
allow very large IOs, I think we need a dynamic timeout value that
accounts for the link's throughput. We can already trigger blk-mq
timeouts by saturating enough queues with max-sized IOs, even though
everything else is working as designed.
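
To make that concrete, a rough sketch of the kind of scaling I mean; the
helper name, the link_bytes_per_sec parameter, and the 30 second base
(matching the usual nvme_io_timeout default) are placeholders, not a
proposal for the actual implementation:

	#include <linux/types.h>
	#include <linux/kernel.h>	/* DIV_ROUND_UP */

	#define BASE_TIMEOUT_SECS	30	/* usual nvme_io_timeout default */

	/*
	 * Hypothetical helper, names made up: pad the fixed timeout with the
	 * time the transfer itself needs at the link's sustained throughput,
	 * so a multi-GB IO on a slow link isn't flagged by blk-mq's timer
	 * while it is still legitimately moving data.
	 */
	static unsigned int nvme_scaled_timeout_secs(size_t total_len,
						     size_t link_bytes_per_sec)
	{
		size_t xfer_secs = DIV_ROUND_UP(total_len, link_bytes_per_sec);

		return BASE_TIMEOUT_SECS + xfer_secs;
	}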