Message-ID: <20250424141249.GA18970@lst.de>
Date: Thu, 24 Apr 2025 16:12:49 +0200
From: Christoph Hellwig <hch@....de>
To: Caleb Sander Mateos <csander@...estorage.com>
Cc: Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>,
Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
Andrew Morton <akpm@...ux-foundation.org>,
Kanchan Joshi <joshi.k@...sung.com>, linux-nvme@...ts.infradead.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 3/3] nvme/pci: make PRP list DMA pools per-NUMA-node
On Tue, Apr 22, 2025 at 04:09:52PM -0600, Caleb Sander Mateos wrote:
> NVMe commands with more than 4 KB of data allocate PRP list pages from
> the per-nvme_device dma_pool prp_page_pool or prp_small_pool.
That's not actually true. We can transfer up to the full MDTS without
a single pool allocation when using SGLs.
> Each call
> to dma_pool_alloc() and dma_pool_free() takes the per-dma_pool spinlock.
> These device-global spinlocks are a significant source of contention
> when many CPUs are submitting to the same NVMe devices. On a workload
> issuing 32 KB reads from 16 CPUs (8 hypertwin pairs) across 2 NUMA nodes
> to 23 NVMe devices, we observed 2.4% of CPU time spent in
> _raw_spin_lock_irqsave called from dma_pool_alloc and dma_pool_free.
>
> Ideally, the dma_pools would be per-hctx to minimize
> contention. But that could impose considerable resource costs in a
> system with many NVMe devices and CPUs.
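
[For reference, a rough sketch of the per-NUMA-node layout the patch
describes; the field and helper names here are illustrative, not
necessarily what the series actually uses:

struct nvme_prp_pools {
	struct dma_pool	*page_pool;	/* full-page PRP lists */
	struct dma_pool	*small_pool;	/* small PRP lists */
};

/* one entry per NUMA node, picked at submission time */
static struct dma_pool *nvme_prp_page_pool(struct nvme_dev *dev)
{
	return dev->prp_pools[numa_node_id()].page_pool;
}
]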
Should we try to simply do a slab allocation first and only allocate
from the dmapool when that fails? That should give you all the
scalability of the slab allocator with very little downside.
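
[A minimal sketch of that slab-first fallback, to illustrate the idea;
the helper name and mapping details are assumptions, not proposed
code, and the free path would also need to record which allocator the
list came from:

static void *nvme_alloc_prp_list(struct nvme_dev *dev, gfp_t gfp,
				 dma_addr_t *dma_addr)
{
	void *list;

	/* Fast path: per-CPU slab caches avoid the shared pool lock. */
	list = kmalloc(NVME_CTRL_PAGE_SIZE, gfp);
	if (list) {
		*dma_addr = dma_map_single(dev->dev, list,
					   NVME_CTRL_PAGE_SIZE,
					   DMA_TO_DEVICE);
		if (!dma_mapping_error(dev->dev, *dma_addr))
			return list;
		kfree(list);
	}

	/* Fall back to the coherent dma_pool if slab or mapping fails. */
	return dma_pool_alloc(dev->prp_page_pool, gfp, dma_addr);
}
]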