Message-Id: <20240509202929.831680-1-jmeneghi@redhat.com>
Date: Thu, 9 May 2024 16:29:26 -0400
From: John Meneghini <jmeneghi@...hat.com>
To: kbusch@...nel.org,
hch@....de,
sagi@...mberg.me,
emilne@...hat.com
Cc: linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org,
jmeneghi@...hat.com,
jrani@...estorage.com,
randyj@...estorage.com,
hare@...nel.org,
constg@...ibm.com,
aviv.coro@....com
Subject: [PATCH v2 0/3] nvme: queue-depth multipath iopolicy
I'm re-issuing Ewan's queue-depth patches in preparation for LSFMM.
These patches were first shown at ALPSS 2023, where I shared the following
graphs measuring the IO distribution across 4 active-optimized
controllers under the round-robin versus queue-depth iopolicy:
https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
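For anyone who wants the gist without reading the patches: the queue-depth
policy dispatches each I/O on the usable path whose controller currently has
the fewest requests in flight, tracked in a per-controller nr_active counter
(see patch 2/3). Below is a minimal userspace sketch of that idea only; it
is not the patch code, the type and function names are made up for
illustration, and it omits the ANA/path-state checks the real path selector
has to perform:

/* Illustrative sketch of queue-depth path selection (not the patch code). */
#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

struct fake_ctrl {
        const char *name;
        atomic_int nr_active;   /* requests currently in flight on this path */
};

/* Pick the path whose controller has the lowest outstanding queue depth. */
static struct fake_ctrl *qd_select(struct fake_ctrl *ctrls, int nr)
{
        struct fake_ctrl *best = NULL;
        int best_depth = INT_MAX;

        for (int i = 0; i < nr; i++) {
                int depth = atomic_load(&ctrls[i].nr_active);

                if (depth < best_depth) {
                        best_depth = depth;
                        best = &ctrls[i];
                }
        }
        return best;
}

int main(void)
{
        struct fake_ctrl ctrls[4] = {
                { "nvme0", 3 }, { "nvme1", 0 }, { "nvme2", 7 }, { "nvme3", 1 },
        };
        struct fake_ctrl *c = qd_select(ctrls, 4);

        atomic_fetch_add(&c->nr_active, 1);     /* account the submission */
        printf("dispatching on %s\n", c->name);
        atomic_fetch_sub(&c->nr_active, 1);     /* account the completion */
        return 0;
}

Compared with round-robin, biasing each dispatch toward the least-loaded
controller is what keeps the distribution even when the paths have unequal
latency or throughput.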
Since that time we have continued testing these patches with a number of
different NVMe-oF storage arrays and test bed configurations, and I've
codified the tests and methods we use to measure IO distribution.
All of my test results, together with the scripts I used to generate these
graphs, are available at:
https://github.com/johnmeneghini/iopolicy
Please use the scripts in this repository to do your own testing.
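For reference, the iopolicy is selected per NVMe subsystem through the
existing iopolicy sysfs attribute, so with these patches applied, writing
"queue-depth" to /sys/class/nvme-subsystem/nvme-subsysN/iopolicy (with N
matching your subsystem) should be all that is needed to switch a running
system over.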
These patches are based on nvme-v6.9.
Ewan D. Milne (3):
nvme: multipath: Implemented new iopolicy "queue-depth"
nvme: multipath: only update ctrl->nr_active when using queue-depth
iopolicy
nvme: multipath: Invalidate current_path when changing iopolicy
drivers/nvme/host/core.c | 2 +-
drivers/nvme/host/multipath.c | 77 +++++++++++++++++++++++++++++++++--
drivers/nvme/host/nvme.h | 8 ++++
3 files changed, 82 insertions(+), 5 deletions(-)
--
2.39.3