Message-ID: <20251231222637.GL3864520-mkhalfella@purestorage.com>
Date: Wed, 31 Dec 2025 14:26:37 -0800
From: Mohamed Khalfella <mkhalfella@...estorage.com>
To: Randy Jennings <randyj@...estorage.com>
Cc: Chaitanya Kulkarni <kch@...dia.com>, Christoph Hellwig <hch@....de>,
Jens Axboe <axboe@...nel.dk>, Keith Busch <kbusch@...nel.org>,
Sagi Grimberg <sagi@...mberg.me>,
Aaron Dailey <adailey@...estorage.com>,
John Meneghini <jmeneghi@...hat.com>,
Hannes Reinecke <hare@...e.de>, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 06/14] nvme: Rapid Path Failure Recovery read
 controller identify fields

On Thu 2025-12-18 07:22:41 -0800, Randy Jennings wrote:
> On Tue, Nov 25, 2025 at 6:13 PM Mohamed Khalfella
> <mkhalfella@...estorage.com> wrote:
> >
> > TP2028 Rapid path failure added new fields to controller identify
> TP8028
Fixed.
> > response. Read CIU (Controller Instance Uniquifier), CIRN (Controller
> > Instance Random Number), and CCRL (Cross-Controller Reset Limit) from
> > controller identify response. Expose CIU and CIRN as sysfs attributes
> > so the values can be used directly by the user if needed.
> >
> > TP4129 KATO Corrections and Clarifications defined CQT (Command Quiesce
> > Time) which is used along with KATO (Keep Alive Timeout) to set an upper
> > limite for attempting Cross-Controller Recovery.
> "limite" -> "limit"
Fixed.
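For context on how the two timers combine: as I read TP4129, the upper
bound for attempting Cross-Controller Recovery is on the order of KATO
plus CQT. A minimal sketch of how a later patch could compute that
deadline (the helper name and the unit assumptions, kato in seconds and
cqt in milliseconds, are mine and not part of this patch):

/*
 * Sketch only: upper bound for attempting Cross-Controller Recovery,
 * derived from KATO and CQT.  Assumes ctrl->kato is kept in seconds
 * and ctrl->cqt in milliseconds.
 */
static unsigned long nvme_ccr_deadline(struct nvme_ctrl *ctrl)
{
	unsigned long timeout_ms = ctrl->kato * 1000 + ctrl->cqt;

	return jiffies + msecs_to_jiffies(timeout_ms);
}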
> >
> > Signed-off-by: Mohamed Khalfella <mkhalfella@...estorage.com>
> > ---
> > drivers/nvme/host/core.c | 5 +++++
> > drivers/nvme/host/nvme.h | 11 +++++++++++
> > drivers/nvme/host/sysfs.c | 23 +++++++++++++++++++++++
> > 3 files changed, 39 insertions(+)
> >
> > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > index fa4181d7de73..aa007a7b9606 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -3572,12 +3572,17 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
> > ctrl->crdt[1] = le16_to_cpu(id->crdt2);
> > ctrl->crdt[2] = le16_to_cpu(id->crdt3);
> >
> > + ctrl->ciu = id->ciu;
> > + ctrl->cirn = le64_to_cpu(id->cirn);
> > + atomic_set(&ctrl->ccr_limit, id->ccrl);
> Seems like it would be good for the target & init to use the same
> name for these fields. I have a preference for these over
> instance_uniquifier and random because they are more concise, but
> the preference is not strong.
The field names in the spec are concise, but they are also cryptic.
>
> > +
> > ctrl->oacs = le16_to_cpu(id->oacs);
> > ctrl->oncs = le16_to_cpu(id->oncs);
> > ctrl->mtfa = le16_to_cpu(id->mtfa);
> > ctrl->oaes = le32_to_cpu(id->oaes);
> > ctrl->wctemp = le16_to_cpu(id->wctemp);
> > ctrl->cctemp = le16_to_cpu(id->cctemp);
> > + ctrl->cqt = le16_to_cpu(id->cqt);
> >
> > atomic_set(&ctrl->abort_limit, id->acl + 1);
> > ctrl->vwc = id->vwc;
> I cannot discern an ordering to the attributes set here. Any
> particular reason you placed cqt away from the others you added?
No reason. Moved ctrl->cqt initialization up with other fields.
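For v2 the hunk will read roughly like this (a reorder only, no
functional change):

	ctrl->crdt[1] = le16_to_cpu(id->crdt2);
	ctrl->crdt[2] = le16_to_cpu(id->crdt3);

	ctrl->ciu = id->ciu;
	ctrl->cirn = le64_to_cpu(id->cirn);
	ctrl->cqt = le16_to_cpu(id->cqt);
	atomic_set(&ctrl->ccr_limit, id->ccrl);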
>
> > diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
> > index 29430949ce2f..ae36249ad61e 100644
> > --- a/drivers/nvme/host/sysfs.c
> > +++ b/drivers/nvme/host/sysfs.c
> > @@ -388,6 +388,27 @@ nvme_show_int_function(queue_count);
> > nvme_show_int_function(sqsize);
> > nvme_show_int_function(kato);
> >
> > +static ssize_t nvme_sysfs_uniquifier_show(struct device *dev,
> > + struct device_attribute *attr,
> > + char *buf)
> > +{
> > + struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
> > +
> > + return sysfs_emit(buf, "%02x\n", ctrl->ciu);
> > +}
> > +static DEVICE_ATTR(uniquifier, S_IRUGO, nvme_sysfs_uniquifier_show, NULL);
> > +
> > +static ssize_t nvme_sysfs_random_show(struct device *dev,
> > + struct device_attribute *attr,
> > + char *buf)
> > +{
> > + struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
> > +
> > + return sysfs_emit(buf, "%016llx\n", ctrl->cirn);
> > +}
> > +static DEVICE_ATTR(random, S_IRUGO, nvme_sysfs_random_show, NULL);
> > +
> > +
> > static ssize_t nvme_sysfs_delete(struct device *dev,
> > struct device_attribute *attr, const char *buf,
> > size_t count)
> > @@ -734,6 +755,8 @@ static struct attribute *nvme_dev_attrs[] = {
> > &dev_attr_numa_node.attr,
> > &dev_attr_queue_count.attr,
> > &dev_attr_sqsize.attr,
> > + &dev_attr_uniquifier.attr,
> > + &dev_attr_random.attr,
> > &dev_attr_hostnqn.attr,
> > &dev_attr_hostid.attr,
> > &dev_attr_ctrl_loss_tmo.attr,
> > --
> > 2.51.2
> >
>
> These are the names used in the target code (uniquifier & random).
> I'd rather have them match (identify structure will have the spec's
> abbreviations; ctrl & debug/sysfs for target & initiator would either be
> ciu/cirn or uniquifier/random).
I think the naming matters for the sysfs attributes in particular, since
those names are user-visible. I am not sure what the right thing to do
is: should we use the spec names, like "cirn", or call it "random"?
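Either way the values end up under the controller device in sysfs; for
illustration only, the two naming options would look like:

	/sys/class/nvme/nvme0/uniquifier  vs  /sys/class/nvme/nvme0/ciu
	/sys/class/nvme/nvme0/random      vs  /sys/class/nvme/nvme0/cirn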
>
> But this is small stuff.
>
> Reviewed-by: Randy Jennings <randyj@...estorage.com>