Message-ID: <20251115173341.4a59c97f@pumpkin>
Date: Sat, 15 Nov 2025 17:33:41 +0000
From: David Laight <david.laight.linux@...il.com>
To: Leon Romanovsky <leon@...nel.org>
Cc: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>, Christoph
Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 1/2] nvme-pci: Use size_t for length fields to handle
larger sizes
On Sat, 15 Nov 2025 18:22:45 +0200
Leon Romanovsky <leon@...nel.org> wrote:
> From: Leon Romanovsky <leonro@...dia.com>
>
> Change the length variables from unsigned int to size_t. Using size_t
> ensures that larger sizes can be handled, as size_t is always at least
> as wide as the previously used u32 type.
Where are requests larger than 4GB going to come from?
> Originally, u32 was used because the blk-mq-dma code evolved from the
> scatter-gather implementation, which uses unsigned int to describe a
> length. This change also allows the existing struct phys_vec to be
> reused in places that don't need scatter-gather.
>
> Signed-off-by: Leon Romanovsky <leonro@...dia.com>
> ---
> block/blk-mq-dma.c | 14 +++++++++-----
> drivers/nvme/host/pci.c | 4 ++--
> 2 files changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
> index e9108ccaf4b0..cc3e2548cc30 100644
> --- a/block/blk-mq-dma.c
> +++ b/block/blk-mq-dma.c
> @@ -8,7 +8,7 @@
>
> struct phys_vec {
> phys_addr_t paddr;
> - u32 len;
> + size_t len;
> };
>
> static bool __blk_map_iter_next(struct blk_map_iter *iter)
> @@ -112,8 +112,8 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
> struct phys_vec *vec)
> {
> enum dma_data_direction dir = rq_dma_dir(req);
> - unsigned int mapped = 0;
> unsigned int attrs = 0;
> + size_t mapped = 0;
> int error;
>
> iter->addr = state->addr;
> @@ -296,8 +296,10 @@ int __blk_rq_map_sg(struct request *rq, struct scatterlist *sglist,
> blk_rq_map_iter_init(rq, &iter);
> while (blk_map_iter_next(rq, &iter, &vec)) {
> *last_sg = blk_next_sg(last_sg, sglist);
> - sg_set_page(*last_sg, phys_to_page(vec.paddr), vec.len,
> - offset_in_page(vec.paddr));
> +
> + WARN_ON_ONCE(overflows_type(vec.len, unsigned int));
I'm not at all sure you need that test.
blk_map_iter_next() has to guarantee that vec.len is valid.
(probably even less than a page size?)
Perhaps this code should be using a different type for the addr:len pair?
> + sg_set_page(*last_sg, phys_to_page(vec.paddr),
> + (unsigned int)vec.len, offset_in_page(vec.paddr));
You definitely don't need the explicit cast.
David
> nsegs++;
> }
>
> @@ -416,7 +418,9 @@ int blk_rq_map_integrity_sg(struct request *rq, struct scatterlist *sglist)
>
> while (blk_map_iter_next(rq, &iter, &vec)) {
> sg = blk_next_sg(&sg, sglist);
> - sg_set_page(sg, phys_to_page(vec.paddr), vec.len,
> +
> + WARN_ON_ONCE(overflows_type(vec.len, unsigned int));
> + sg_set_page(sg, phys_to_page(vec.paddr), (unsigned int)vec.len,
> offset_in_page(vec.paddr));
> segments++;
> }
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 9085bed107fd..de512efa742d 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -290,14 +290,14 @@ struct nvme_iod {
> u8 flags;
> u8 nr_descriptors;
>
> - unsigned int total_len;
> + size_t total_len;
> struct dma_iova_state dma_state;
> void *descriptors[NVME_MAX_NR_DESCRIPTORS];
> struct nvme_dma_vec *dma_vecs;
> unsigned int nr_dma_vecs;
>
> dma_addr_t meta_dma;
> - unsigned int meta_total_len;
> + size_t meta_total_len;
> struct dma_iova_state meta_dma_state;
> struct nvme_sgl_desc *meta_descriptor;
> };
>