Message-ID: <20240511144048767fdB7EqYoMHEw6A5b6FrXM@zte.com.cn>
Date: Sat, 11 May 2024 14:40:48 +0800 (CST)
From: <xu.xin16@....com.cn>
To: <akpm@...ux-foundation.org>
Cc: <david@...hat.com>, <willy@...radead.org>, <shy828301@...il.com>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: [PATCH linux-next] mm/huge_memory: remove redundant locking when parsing THP sysfs input
From: Ran Xiaokai <ran.xiaokai@....com.cn>
Since sysfs_streq() only performs a simple memory comparison and cannot
sleep, there is no need to drop and re-acquire the lock between the
individual branches when parsing the input. Hold the lock across the
whole comparison sequence instead, and remove the redundant lock/unlock
operations to make the code cleaner.
Signed-off-by: Ran Xiaokai <ran.xiaokai@....com.cn>
---
mm/huge_memory.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 89f58c7603b2..87123a87cb21 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -478,32 +478,26 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
int order = to_thpsize(kobj)->order;
ssize_t ret = count;
+ spin_lock(&huge_anon_orders_lock);
if (sysfs_streq(buf, "always")) {
- spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_inherit);
clear_bit(order, &huge_anon_orders_madvise);
set_bit(order, &huge_anon_orders_always);
- spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "inherit")) {
- spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_always);
clear_bit(order, &huge_anon_orders_madvise);
set_bit(order, &huge_anon_orders_inherit);
- spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "madvise")) {
- spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_always);
clear_bit(order, &huge_anon_orders_inherit);
set_bit(order, &huge_anon_orders_madvise);
- spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "never")) {
- spin_lock(&huge_anon_orders_lock);
clear_bit(order, &huge_anon_orders_always);
clear_bit(order, &huge_anon_orders_inherit);
clear_bit(order, &huge_anon_orders_madvise);
- spin_unlock(&huge_anon_orders_lock);
} else
ret = -EINVAL;
+ spin_unlock(&huge_anon_orders_lock);
return ret;
}
--
2.15.2