Message-ID: <20200919104546.3848-3-thunder.leizhen@huawei.com>
Date: Sat, 19 Sep 2020 18:45:46 +0800
From: Zhen Lei <thunder.leizhen@...wei.com>
To: Oliver O'Halloran <oohall@...il.com>,
Dan Williams <dan.j.williams@...el.com>,
Vishal Verma <vishal.l.verma@...el.com>,
"Dave Jiang" <dave.jiang@...el.com>,
Ira Weiny <ira.weiny@...el.com>,
Markus Elfring <Markus.Elfring@....de>,
linux-nvdimm <linux-nvdimm@...ts.01.org>,
linux-kernel <linux-kernel@...r.kernel.org>
CC: Zhen Lei <thunder.leizhen@...wei.com>,
Libin <huawei.libin@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: [PATCH v2 2/2] libnvdimm/badrange: eliminate a meaningless spinlock operation
badrange_add() takes the lock "badrange->lock", but add_badrange() releases
it immediately, before the allocation, so the first lock/unlock pair
protects nothing.
The pseudo code is as follows:
In badrange_add():
spin_lock(&badrange->lock); <---------------
rc = add_badrange(badrange, addr, length); |
In add_badrange(): |
//do nothing |
spin_unlock(&badrange->lock); <---------------
bre_new = kzalloc(sizeof(*bre_new), GFP_KERNEL);
spin_lock(&badrange->lock); <--- lock again
This lock/unlock operation is meaningless.
Since the static function add_badrange() is only called by badrange_add(),
move its body into badrange_add() and delete it. While at it, move
"kfree(bre_new)" out of the locked region; it does not need the lock's
protection.
Fixes: b3b454f694db ("libnvdimm: fix clear poison locking with spinlock ...")
Signed-off-by: Zhen Lei <thunder.leizhen@...wei.com>
---
drivers/nvdimm/badrange.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/drivers/nvdimm/badrange.c b/drivers/nvdimm/badrange.c
index 9fdba8c43e8605e..7f78b659057902d 100644
--- a/drivers/nvdimm/badrange.c
+++ b/drivers/nvdimm/badrange.c
@@ -45,12 +45,12 @@ static int alloc_and_append_badrange_entry(struct badrange *badrange,
return 0;
}
-static int add_badrange(struct badrange *badrange, u64 addr, u64 length)
+int badrange_add(struct badrange *badrange, u64 addr, u64 length)
{
struct badrange_entry *bre, *bre_new;
- spin_unlock(&badrange->lock);
bre_new = kzalloc(sizeof(*bre_new), GFP_KERNEL);
+
spin_lock(&badrange->lock);
/*
@@ -63,6 +63,7 @@ static int add_badrange(struct badrange *badrange, u64 addr, u64 length)
/* If length has changed, update this list entry */
if (bre->length != length)
bre->length = length;
+ spin_unlock(&badrange->lock);
kfree(bre_new);
return 0;
}
@@ -72,22 +73,15 @@ static int add_badrange(struct badrange *badrange, u64 addr, u64 length)
* as any overlapping ranges will get resolved when the list is consumed
* and converted to badblocks
*/
- if (!bre_new)
+ if (!bre_new) {
+ spin_unlock(&badrange->lock);
return -ENOMEM;
- append_badrange_entry(badrange, bre_new, addr, length);
-
- return 0;
-}
-
-int badrange_add(struct badrange *badrange, u64 addr, u64 length)
-{
- int rc;
+ }
- spin_lock(&badrange->lock);
- rc = add_badrange(badrange, addr, length);
+ append_badrange_entry(badrange, bre_new, addr, length);
spin_unlock(&badrange->lock);
- return rc;
+ return 0;
}
EXPORT_SYMBOL_GPL(badrange_add);
--
1.8.3