Message-ID: <20230725110622.129361-1-ptyadav@amazon.de>
Date: Tue, 25 Jul 2023 13:06:22 +0200
From: Pratyush Yadav <ptyadav@...zon.de>
To: Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...nel.dk>,
"Christoph Hellwig" <hch@....de>, Sagi Grimberg <sagi@...mberg.me>
CC: Pratyush Yadav <ptyadav@...zon.de>,
<linux-nvme@...ts.infradead.org>, <linux-kernel@...r.kernel.org>
Subject: [PATCH] nvme-pci: do not set the NUMA node of device if it has none

If a device has no NUMA node information associated with it, the driver
puts the device on node first_memory_node (say, node 0). As a side
effect, this tells userspace IRQ balancing programs that the device is
on node 0, so they prefer CPUs in node 0 for handling the IRQs
associated with its queues. For example, irqbalance will only let CPUs
in node 0 handle the interrupts. This reduces random access performance
on CPUs in node 1, since the interrupt for command completion will fire
on node 0.

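The node assignment is what userspace sees in the device's numa_node
sysfs attribute, which is what balancers like irqbalance consult. As a
rough illustration (not part of this patch), a minimal reader, assuming
a controller named nvme0:

/* numa_node.c - print the NUMA node the kernel reports for an NVMe
 * controller's PCI device. The nvme0 path is only an example.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/class/nvme/nvme0/device/numa_node", "r");
	int node;

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%d", &node) != 1) {
		fclose(f);
		return 1;
	}
	fclose(f);
	/* -1 means NUMA_NO_NODE; >= 0 is the node balancers will favor */
	printf("numa_node: %d\n", node);
	return 0;
}

Before this patch such a device reports the first memory node (typically
0); with it, the attribute is expected to read -1, so balancers no
longer concentrate all queue IRQs on node 0.
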
For example, AWS EC2's i3.16xlarge instance does not expose NUMA
information for its NVMe devices, so they all get NUMA_NO_NODE by
default. Without this patch, random 4k read performance measured with
fio on CPUs from node 1 (around 165k IOPS) is almost 50% lower than on
CPUs from node 0 (around 315k IOPS). With this patch, CPUs on both
nodes get similar performance (around 315k IOPS).

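The per-node numbers above come from fio jobs pinned to the CPUs of one
node at a time (e.g. with numactl --cpunodebind or fio's cpus_allowed
option). For anyone reproducing this from a custom load generator, a
hypothetical libnuma sketch of the same pinning; the node number is
only an example:

/* pin_to_node.c - run the calling thread on one NUMA node's CPUs
 * before issuing I/O. Build with: gcc pin_to_node.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	int node = argc > 1 ? atoi(argv[1]) : 1;	/* default: node 1 */

	if (numa_available() < 0) {
		fprintf(stderr, "NUMA not supported here\n");
		return 1;
	}
	if (numa_run_on_node(node)) {
		perror("numa_run_on_node");
		return 1;
	}
	/* I/O submitted from here runs on CPUs of 'node'; completions
	 * still land wherever the queue's IRQ affinity points.
	 */
	printf("pinned to CPUs of node %d\n", node);
	return 0;
}
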
Signed-off-by: Pratyush Yadav <ptyadav@...zon.de>
---
 drivers/nvme/host/pci.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index baf69af7ea78e..f5ba2d7102eae 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2916,9 +2916,6 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
 	struct nvme_dev *dev;
 	int ret = -ENOMEM;
 
-	if (node == NUMA_NO_NODE)
-		set_dev_node(&pdev->dev, first_memory_node);
-
 	dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node);
 	if (!dev)
 		return ERR_PTR(-ENOMEM);
--
2.40.1