Message-ID: <68cdd157cf7be_1a2a17294f1@iweiny-mobl.notmuch>
Date: Fri, 19 Sep 2025 16:55:35 -0500
From: Ira Weiny <ira.weiny@...el.com>
To: Neeraj Kumar <s.neeraj@...sung.com>, <linux-cxl@...r.kernel.org>,
<nvdimm@...ts.linux.dev>, <linux-kernel@...r.kernel.org>,
<gost.dev@...sung.com>
CC: <a.manzanares@...sung.com>, <vishak.g@...sung.com>,
<neeraj.kernel@...il.com>, <cpgs@...sung.com>, Neeraj Kumar
<s.neeraj@...sung.com>
Subject: Re: [PATCH V3 04/20] nvdimm/label: Update mutex_lock() with
guard(mutex)()
Neeraj Kumar wrote:
> Updated mutex_lock() with guard(mutex)()
You are missing the 'why' justification here.
The reason is that __pmem_label_update() is getting more complex and this
change helps reduce that complexity later.
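For illustration only -- the struct and helpers below are made up, not
the real nvdimm code -- guard() removes the need to pair every early
return with a mutex_unlock():

#include <linux/cleanup.h>
#include <linux/mutex.h>

/* Hypothetical stand-ins so the sketch is self-contained. */
struct mapping { struct mutex lock; };
static int step_one(void) { return 0; }
static int step_two(void) { return 0; }

/* Explicit locking: every early return needs a matching unlock. */
static int update_explicit(struct mapping *m)
{
	int rc;

	mutex_lock(&m->lock);
	rc = step_one();
	if (rc) {
		mutex_unlock(&m->lock);
		return rc;
	}
	rc = step_two();
	mutex_unlock(&m->lock);
	return rc;
}

/* guard(): the unlock runs automatically at scope exit, so added
 * error paths do not need extra unlock calls. */
static int update_guarded(struct mapping *m)
{
	int rc;

	guard(mutex)(&m->lock);
	rc = step_one();
	if (rc)
		return rc;
	return step_two();
}

The win is mainly in functions with several exit paths inside the
locked region.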
However...
[snip]
> @@ -998,9 +998,8 @@ static int init_labels(struct nd_mapping *nd_mapping, int num_labels)
> label_ent = kzalloc(sizeof(*label_ent), GFP_KERNEL);
> if (!label_ent)
> return -ENOMEM;
> - mutex_lock(&nd_mapping->lock);
> + guard(mutex)(&nd_mapping->lock);
> list_add_tail(&label_ent->list, &nd_mapping->labels);
> - mutex_unlock(&nd_mapping->lock);
... this change is of little value. And...
> }
>
> if (ndd->ns_current == -1 || ndd->ns_next == -1)
> @@ -1039,7 +1038,7 @@ static int del_labels(struct nd_mapping *nd_mapping, uuid_t *uuid)
> if (!preamble_next(ndd, &nsindex, &free, &nslot))
> return 0;
>
> - mutex_lock(&nd_mapping->lock);
> + guard(mutex)(&nd_mapping->lock);
> list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) {
> struct nd_namespace_label *nd_label = label_ent->label;
>
> @@ -1061,7 +1060,6 @@ static int del_labels(struct nd_mapping *nd_mapping, uuid_t *uuid)
> nd_mapping_free_labels(nd_mapping);
> dev_dbg(ndd->dev, "no more active labels\n");
> }
> - mutex_unlock(&nd_mapping->lock);
... this change technically widens the scope of the lock: the index
write at the end of the function now happens with the lock held.
It does not affect anything AFAICS, but these last two changes really
should be dropped from this patch.
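To make the scope point concrete, a rough sketch (simplified stand-ins
again, not the real del_labels()): guard() only drops the lock when the
function returns, so the call in the tail return now runs inside the
critical section, whereas scoped_guard() would keep the original
boundaries:

#include <linux/cleanup.h>
#include <linux/mutex.h>

/* Hypothetical stand-ins so the sketch is self-contained. */
struct mapping { struct mutex lock; };
static int write_index(void) { return 0; }	/* stands in for nd_label_write_index() */

static int del_labels_guarded(struct mapping *m)
{
	guard(mutex)(&m->lock);

	/* ... walk and free labels under the lock ... */

	/* The lock is released only when this function returns, so the
	 * index write below is now inside the critical section. */
	return write_index();
}

static int del_labels_scoped(struct mapping *m)
{
	scoped_guard(mutex, &m->lock) {
		/* ... walk and free labels under the lock ... */
	}

	/* Lock already dropped here, matching the original code. */
	return write_index();
}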
Ira
>
> return nd_label_write_index(ndd, ndd->ns_next,
> nd_inc_seq(__le32_to_cpu(nsindex->seq)), 0);
> --
> 2.34.1
>
>