Message-ID: <20240624084627.GA20032@lst.de>
Date: Mon, 24 Jun 2024 10:46:27 +0200
From: Christoph Hellwig <hch@....de>
To: John Meneghini <jmeneghi@...hat.com>
Cc: Christoph Hellwig <hch@....de>, kbusch@...nel.org, sagi@...mberg.me,
linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
emilne@...hat.com, jrani@...estorage.com, randyj@...estorage.com,
chaitanyak@...dia.com, hare@...nel.org
Subject: Re: [PATCH v7 1/1] nvme-multipath: implement "queue-depth" iopolicy
On Thu, Jun 20, 2024 at 01:54:29PM -0400, John Meneghini wrote:
>>> +static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
>>> +		int iopolicy)
>>> +{
>>> +	struct nvme_ctrl *ctrl;
>>> +	int old_iopolicy = READ_ONCE(subsys->iopolicy);
>>> +
>>> +	if (old_iopolicy == iopolicy)
>>> +		return;
>>> +
>>> +	WRITE_ONCE(subsys->iopolicy, iopolicy);
>>
>> What is the atomicity model here? There doesn't seem to be any
>> global lock protecting it. Maybe move it into the
>> nvme_subsystems_lock critical section?
>
> Good question. I didn't write this code. Yes, I agree this looks racy.
> Updates to the subsys->iopolicy variable are not atomic. They don't need
> to be: the process of changing the iopolicy doesn't need to be
> synchronized, and each CPU's cache will be updated lazily. This was done
> to avoid the expense of adding (another) atomic read to the I/O path.
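For reference, the reader side in the I/O path is just a plain
READ_ONCE() with no locking, so a briefly stale policy is tolerated.
Roughly (a simplified sketch; the per-policy helpers here are
abbreviations, not the exact code in this series):

	static struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
	{
		/* lock-free read; a stale value just means a few more
		 * I/Os go out under the old policy */
		switch (READ_ONCE(head->subsys->iopolicy)) {
		case NVME_IOPOLICY_QD:
			return nvme_queue_depth_path(head);	/* sketch-only helper */
		default:
			return nvme_numa_path(head);		/* sketch-only helper */
		}
	}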
Looks like all sysfs ->store calls for the same attribute are protected
by of->mutex in kernfs_fop_write_iter(), so we should actually be fine
here. Sorry for the noise.
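I.e. something like this (an abridged paraphrase of fs/kernfs/file.c,
not the exact source):

	static ssize_t kernfs_fop_write_iter(struct kiocb *iocb,
			struct iov_iter *iter)
	{
		struct kernfs_open_file *of = kernfs_of(iocb->ki_filp);
		...
		mutex_lock(&of->mutex);
		/* the sysfs ->store method (ops->write) runs with
		 * of->mutex held, so concurrent writers are serialized */
		len = ops->write(of, buf, len, iocb->ki_pos);
		mutex_unlock(&of->mutex);
		...
	}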
>>	pr_notice("%s: changing iopolicy from %s to %s\n",
>>			subsys->subnqn,
>>			nvme_iopolicy_names[old_iopolicy],
>>			nvme_iopolicy_names[iopolicy]);
>
> How about:
>
>	pr_notice("Changed iopolicy from %s to %s for subsysnqn %s\n",
>			nvme_iopolicy_names[old_iopolicy],
>			nvme_iopolicy_names[iopolicy],
>			subsys->subnqn);
Having the identification as the prefix seems easier to parse
and grep for.
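E.g., with a hypothetical subnqn, filtering the log for one subsystem
is a single fixed-prefix pattern:

	nqn.2019-08.org.qemu:subsys0: changing iopolicy from numa to queue-depth

	$ dmesg | grep 'nqn.2019-08.org.qemu:subsys0: changing iopolicy'

whereas with the subnqn at the end, the variable policy names sit
between the fixed text and the identifier.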