Message-ID: <Zk4VtiCjeqkBKCBA@kbusch-mbp.dhcp.thefacebook.com>
Date: Wed, 22 May 2024 09:56:38 -0600
From: Keith Busch <kbusch@...nel.org>
To: John Meneghini <jmeneghi@...hat.com>
Cc: hch@....de, sagi@...mberg.me, emilne@...hat.com,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
jrani@...estorage.com, randyj@...estorage.com, hare@...nel.org
Subject: Re: [PATCH v4 1/1] nvme: multipath: Implemented new iopolicy
"queue-depth"
On Wed, May 22, 2024 at 11:42:12AM -0400, John Meneghini wrote:
> +static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys, int iopolicy)
> +{
> + struct nvme_ctrl *ctrl;
> + int old_iopolicy = READ_ONCE(subsys->iopolicy);
> +
> + WRITE_ONCE(subsys->iopolicy, iopolicy);
> +
> + /* iopolicy changes reset the counters and clear the mpath by design */
> + mutex_lock(&nvme_subsystems_lock);
> + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
> + atomic_set(&ctrl->nr_active, 0);
Can you help me understand why this is a desirable feature? Unless you
quiesce everything at some point, you'll always have more unaccounted
requests on whichever path has higher latency. That sounds like it
defeats the goals of this io policy.
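To spell out the hazard, here is a minimal userspace sketch of the accounting
skew described above. It assumes, hypothetically, that nr_active is bumped when
a request is dispatched on a path and decremented unconditionally when it
completes; the actual patch may guard the decrement differently:

	/* Sketch only: illustrates resetting an in-flight counter, not the patch itself. */
	#include <stdatomic.h>
	#include <stdio.h>

	int main(void)
	{
		atomic_int nr_active = 0;

		/* Three requests dispatched on this (slow) path. */
		for (int i = 0; i < 3; i++)
			atomic_fetch_add(&nr_active, 1);

		/* iopolicy change resets the counter while I/O is still in flight. */
		atomic_store(&nr_active, 0);

		/* The in-flight requests complete and decrement anyway. */
		for (int i = 0; i < 3; i++)
			atomic_fetch_sub(&nr_active, 1);

		/* Prints -3: the higher-latency path now looks less loaded than an idle one. */
		printf("nr_active = %d\n", atomic_load(&nr_active));
		return 0;
	}

The longer a path's latency, the more requests it has in flight at the moment
of the reset, so it ends up with the largest unaccounted (or negative) share.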
> @@ -1061,6 +1066,9 @@ static inline bool nvme_disk_is_ns_head(struct gendisk *disk)
> {
> return false;
> }
> +static inline void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys, int iopolicy)
> +{
> +}
> #endif /* CONFIG_NVME_MULTIPATH */
You can remove this stub function since the only caller resides in a
CONFIG_NVME_MULTIPATH file.
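For illustration, a simplified sketch of the header layout this implies; the
declarations around the function are placeholders, not the actual nvme.h
contents:

	struct nvme_subsystem;

	#ifdef CONFIG_NVME_MULTIPATH
	void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys, int iopolicy);
	#else
	/*
	 * A stub here is only needed if a caller can be built with
	 * CONFIG_NVME_MULTIPATH=n.  Since the only caller lives in a file
	 * compiled solely when the option is enabled, the empty inline
	 * can be dropped.
	 */
	static inline void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
						       int iopolicy)
	{
	}
	#endif /* CONFIG_NVME_MULTIPATH */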