Message-ID: <4abe98f4-8179-4422-aee5-ae47552e28b7@nvidia.com>
Date: Tue, 5 Mar 2024 16:46:33 +0000
From: Chaitanya Kulkarni <chaitanyak@...dia.com>
To: Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>, Leon
Romanovsky <leon@...nel.org>
CC: Christoph Hellwig <hch@....de>, Robin Murphy <robin.murphy@....com>, Marek
Szyprowski <m.szyprowski@...sung.com>, Joerg Roedel <joro@...tes.org>, Will
Deacon <will@...nel.org>, Jason Gunthorpe <jgg@...pe.ca>, Jonathan Corbet
<corbet@....net>, Sagi Grimberg <sagi@...mberg.me>, Yishai Hadas
<yishaih@...dia.com>, Shameer Kolothum
<shameerali.kolothum.thodi@...wei.com>, Kevin Tian <kevin.tian@...el.com>,
Alex Williamson <alex.williamson@...hat.com>,
Jérôme Glisse <jglisse@...hat.com>, Andrew Morton
<akpm@...ux-foundation.org>, "linux-doc@...r.kernel.org"
<linux-doc@...r.kernel.org>, "linux-kernel@...r.kernel.org"
<linux-kernel@...r.kernel.org>, "linux-block@...r.kernel.org"
<linux-block@...r.kernel.org>, "linux-rdma@...r.kernel.org"
<linux-rdma@...r.kernel.org>, "iommu@...ts.linux.dev"
<iommu@...ts.linux.dev>, "linux-nvme@...ts.infradead.org"
<linux-nvme@...ts.infradead.org>, "kvm@...r.kernel.org"
<kvm@...r.kernel.org>, "linux-mm@...ck.org" <linux-mm@...ck.org>, Bart Van
Assche <bvanassche@....org>, Damien Le Moal
<damien.lemoal@...nsource.wdc.com>, Amir Goldstein <amir73il@...il.com>,
"josef@...icpanda.com" <josef@...icpanda.com>, "Martin K. Petersen"
<martin.petersen@...cle.com>, "daniel@...earbox.net" <daniel@...earbox.net>,
Dan Williams <dan.j.williams@...el.com>, "jack@...e.com" <jack@...e.com>,
Leon Romanovsky <leonro@...dia.com>, Zhu Yanjun <zyjzyj2000@...il.com>
Subject: Re: [RFC RESEND 16/16] nvme-pci: use blk_rq_dma_map() for NVMe SGL
On 3/5/24 08:39, Chaitanya Kulkarni wrote:
> On 3/5/24 08:08, Jens Axboe wrote:
>> On 3/5/24 8:51 AM, Keith Busch wrote:
>>> On Tue, Mar 05, 2024 at 01:18:47PM +0200, Leon Romanovsky wrote:
>>>> @@ -236,7 +236,9 @@ struct nvme_iod {
>>>> unsigned int dma_len; /* length of single DMA segment mapping */
>>>> dma_addr_t first_dma;
>>>> dma_addr_t meta_dma;
>>>> - struct sg_table sgt;
>>>> + struct dma_iova_attrs iova;
>>>> + dma_addr_t dma_link_address[128];
>>>> + u16 nr_dma_link_address;
>>>> union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS];
>>>> };
>>> That's quite a lot of space to add to the iod. We preallocate one for
>>> every request, and there could be millions of them.
>> Yeah, that's just a complete non-starter. As far as I can tell, this
>> ends up adding 1052 bytes per request. Doing the quick math on my test
>> box (24 drives), that's just a smidge over 3GB of extra memory. That's
>> not going to work, not even close.
>>
> I don't have any intent to use more space for the nvme_iod than what
> it is now. I'll trim down the iod structure and send out a patch soon with
> this fixed to continue the discussion here on this thread ...
>
> -ck
>
>
For the final version, once the DMA API discussion is concluded, I plan to use
the iod_mempool for the allocation of nvme_iod->dma_link_address. However, I
won't wait for that and will send out an updated version with the trimmed
nvme_iod size.
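Something along these lines is what I have in mind (rough, untested sketch;
the exact allocation site, error path, and mempool sizing are assumptions at
this point and may change in the actual patch):

struct nvme_iod {
	...
	unsigned int dma_len;	/* length of single DMA segment mapping */
	dma_addr_t first_dma;
	dma_addr_t meta_dma;
	struct dma_iova_attrs iova;
	/* taken from iod_mempool at mapping time, not embedded in the iod */
	dma_addr_t *dma_link_address;
	u16 nr_dma_link_address;
	union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS];
};

static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
		struct nvme_command *cmnd)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);

	/* keep the preallocated per-request footprint small */
	iod->dma_link_address = mempool_alloc(dev->iod_mempool, GFP_ATOMIC);
	if (!iod->dma_link_address)
		return BLK_STS_RESOURCE;
	iod->nr_dma_link_address = 0;
	...
}

with the matching mempool_free() in the unmap/completion path.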
If you have any other comments, please let me know, or we can continue the
discussion on this thread once I post the new version of this patch ...
Thanks a lot, Keith and Jens, for looking into it ...
-ck