Message-ID: <98c860ce-57ca-4451-a4fa-1af93834e8f5@grimberg.me>
Date: Thu, 23 May 2024 13:02:18 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Keith Busch <kbusch@...nel.org>, John Meneghini <jmeneghi@...hat.com>
Cc: hch@....de, emilne@...hat.com, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org, jrani@...estorage.com, randyj@...estorage.com,
hare@...nel.org
Subject: Re: [PATCH v4 1/1] nvme: multipath: Implemented new iopolicy
"queue-depth"
On 22/05/2024 19:29, Keith Busch wrote:
> On Wed, May 22, 2024 at 12:23:51PM -0400, John Meneghini wrote:
>> On 5/22/24 11:56, Keith Busch wrote:
>>> On Wed, May 22, 2024 at 11:42:12AM -0400, John Meneghini wrote:
>>>> +static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys, int iopolicy)
>>>> +{
>>>> + struct nvme_ctrl *ctrl;
>>>> + int old_iopolicy = READ_ONCE(subsys->iopolicy);
>>>> +
>>>> + WRITE_ONCE(subsys->iopolicy, iopolicy);
>>>> +
>>>> + /* iopolicy changes reset the counters and clear the mpath by design */
>>>> + mutex_lock(&nvme_subsystems_lock);
>>>> + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
>>>> + atomic_set(&ctrl->nr_active, 0);
>>> Can you help me understand why this is a desirable feature? Unless you
>>> quiesce everything at some point, you'll always have more unaccounted
>>> requests on whichever path has higher latency. That sounds like it
>>> defeats the goals of this io policy.
>> This is true. And as a matter of practice I never change the IO policy when IOs are in flight. I always stop the IO first.
>> But we can't stop any user from changing the IO policy again and again. So I'm not sure what to do.
>>
>> If you'd like I can add the 'if (old_iopolicy == iopolicy) return;' here,
>> but that's not going to solve the problem of inaccurate counters when
>> users start flipping io policies around with IO in flight. There is no
>> synchronization between io submission across controllers and changing the
>> policy, so I expect changing between round-robin and queue-depth with IO
>> in flight suffers from the same problem... though not as badly.
>>
>> I'd rather take this patch now and figure out how to fix the problem with
>> another patch in the future. Maybe we can check the io stats and refuse to
>> change the policy if they are not zero...
> The idea of tagging nvme_req()->flags on submission means the
> completion-side handling of the nr_active counter is symmetric with the
> submission side: you don't ever need to reset nr_active because
> everything is accounted for.
Agree. We should probably remove it from the patch.
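
Something like the following is what I have in mind -- just a sketch, not
the actual patch; the NVME_MPATH_CNT_ACTIVE flag bit and the start/end
helper names here are illustrative:

	/* on the submission side, count the request and tag it */
	static void nvme_mpath_start_request(struct request *rq)
	{
		struct nvme_ns *ns = rq->q->queuedata;

		if (READ_ONCE(ns->ctrl->subsys->iopolicy) != NVME_IOPOLICY_QD)
			return;

		atomic_inc(&ns->ctrl->nr_active);
		nvme_req(rq)->flags |= NVME_MPATH_CNT_ACTIVE;
	}

	/* on completion, only decrement what submission actually counted */
	static void nvme_mpath_end_request(struct request *rq)
	{
		struct nvme_ns *ns = rq->q->queuedata;

		if (nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE)
			atomic_dec(&ns->ctrl->nr_active);
	}

With that pairing, an iopolicy switch can leave nr_active alone: requests
tagged while queue-depth was active still decrement on completion, and
requests submitted under another policy never touch the counter, so it
can't drift or go negative.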