Message-ID: <Zk4sEpypKqeU67dg@kbusch-mbp.dhcp.thefacebook.com>
Date: Wed, 22 May 2024 11:32:02 -0600
From: Keith Busch <kbusch@...nel.org>
To: John Meneghini <jmeneghi@...hat.com>
Cc: hch@....de, sagi@...mberg.me, emilne@...hat.com,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
jrani@...estorage.com, randyj@...estorage.com, hare@...nel.org
Subject: Re: [PATCH v5] nvme: multipath: Implemented new iopolicy
"queue-depth"
On Wed, May 22, 2024 at 12:54:06PM -0400, John Meneghini wrote:
> From: "Ewan D. Milne" <emilne@...hat.com>
>
> The round-robin path selector is inefficient in cases where there is a
> difference in latency between paths. In the presence of one or more
> high latency paths, the round-robin selector continues to use the high
> latency paths as often as the others. This results in a bias towards the
> highest latency path and can cause a significant decrease in overall
> performance as IOs pile up on the highest latency path. This problem is
> acute with NVMe-oF controllers.
>
> The queue-depth policy instead sends I/O requests down the path with the
> fewest outstanding requests in its request queue. Paths with lower latency
> will clear requests more quickly and have fewer requests in their queues
> compared to higher latency paths. The goal of this path selector is to
> make more use of lower latency paths, which will bring down overall IO
> latency and increase throughput and performance.
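
For illustration only, here is a minimal userspace sketch of the selection
logic described above. It is not the kernel patch itself; the path names and
the nr_outstanding counter are hypothetical stand-ins for whatever per-path
accounting the real implementation in drivers/nvme/host/multipath.c uses.

	/*
	 * Hypothetical sketch: pick the path with the fewest requests
	 * currently in flight, i.e. the smallest queue depth.
	 */
	#include <stdio.h>
	#include <limits.h>
	#include <stdatomic.h>

	struct path {
		const char *name;
		atomic_int nr_outstanding;	/* requests in flight on this path */
	};

	/* Return the path with the lowest outstanding request count. */
	static struct path *select_queue_depth_path(struct path *paths, int npaths)
	{
		struct path *best = NULL;
		int best_depth = INT_MAX;
		int i;

		for (i = 0; i < npaths; i++) {
			int depth = atomic_load(&paths[i].nr_outstanding);

			if (depth < best_depth) {
				best_depth = depth;
				best = &paths[i];
			}
		}
		return best;
	}

	int main(void)
	{
		struct path paths[] = {
			{ .name = "nvme0c0n1" },	/* low latency path */
			{ .name = "nvme0c1n1" },	/* high latency path */
		};
		struct path *p;

		/* Simulate a slow path that still has requests queued. */
		atomic_store(&paths[1].nr_outstanding, 8);

		p = select_queue_depth_path(paths, 2);
		printf("next I/O goes to %s\n", p->name);

		/* Account the submission; completion would decrement it. */
		atomic_fetch_add(&p->nr_outstanding, 1);
		return 0;
	}

In this model the low latency path drains its counter faster, so it is
chosen more often and naturally absorbs a larger share of the I/O.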
I'm okay with this as-is, though I don't think you need either of the
atomic_set() calls.
Christoph, Sagi, the 6.10 merge window is still open and this has been
iterating since long before that. Any objection?