Date:   Thu, 27 Aug 2020 18:19:44 +0800
From:   Shaokun Zhang <zhangshaokun@...ilicon.com>
To:     <linux-fsdevel@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC:     Yuqi Jin <jinyuqi@...wei.com>,
        kernel test robot <rong.a.chen@...el.com>,
        Will Deacon <will@...nel.org>,
        Mark Rutland <mark.rutland@....com>,
        "Peter Zijlstra" <peterz@...radead.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Boqun Feng <boqun.feng@...il.com>,
        Shaokun Zhang <zhangshaokun@...ilicon.com>
Subject: [PATCH] fs: Optimize fget to improve performance

From: Yuqi Jin <jinyuqi@...wei.com>

It is well known that atomic_add performs better than atomic_cmpxchg.
The initial value of @f_count is 1. While @f_count is being increased by 1
in __fget_files(), the value observed there can be > 0, = 0, or < 0. When
the fixed value 0 is used as the condition for refusing the increment,
only atomic_cmpxchg can be used. When < 0 is used as the stopping
condition instead, atomic_add can be used to obtain better performance.
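
For reference, atomic_long_add_unless() boils down to a read/cmpxchg retry
loop along the lines of the sketch below (a simplified sketch, not the exact
kernel implementation; sketch_add_unless is just an illustrative name).
Under contention the cmpxchg can fail and force a retry, which is the cost
this change avoids:

static inline bool sketch_add_unless(atomic_long_t *v, long a, long u)
{
	long c = atomic_long_read(v);

	/* Retry until we either observe the forbidden value @u or the
	 * cmpxchg atomically installs c + a. */
	do {
		if (c == u)
			return false;
	} while (!atomic_long_try_cmpxchg(v, &c, c + a));

	return true;
}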

We tested the System Call Overhead benchmark of UnixBench on a Huawei
Kunpeng 920 (arm64) and measured a 132% performance boost.

With this patch and the patch [1]:
System Call Overhead                        9516926.2 lps   (10.0 s, 1 samples)

System Benchmarks Partial Index              BASELINE       RESULT    INDEX
System Call Overhead                          15000.0    9516926.2   6344.6
                                                                   ========
System Benchmarks Index Score (Partial Only)                         6344.6

With this patch but without the patch [1]:
System Call Overhead                        5290449.3 lps   (10.0 s, 1 samples)

System Benchmarks Partial Index              BASELINE       RESULT    INDEX
System Call Overhead                          15000.0    5290449.3   3527.0
                                                                   ========
System Benchmarks Index Score (Partial Only)                         3527.0

Without any patch:
System Call Overhead                        4102310.5 lps   (10.0 s, 1 samples)

System Benchmarks Partial Index              BASELINE       RESULT    INDEX
System Call Overhead                          15000.0    4102310.5   2734.9
                                                                   ========
System Benchmarks Index Score (Partial Only)                         2734.9
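
For reference, the 132% figure compares this patch plus [1] against the
unpatched baseline: 9516926.2 / 4102310.5 ~= 2.32, i.e. about a 132%
improvement; this patch alone gives 5290449.3 / 4102310.5 ~= 1.29, i.e.
about 29%.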

[1] https://lkml.org/lkml/2020/6/24/283

Cc: kernel test robot <rong.a.chen@...el.com>
Cc: Will Deacon <will@...nel.org>
Cc: Mark Rutland <mark.rutland@....com>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Alexander Viro <viro@...iv.linux.org.uk>
Cc: Boqun Feng <boqun.feng@...il.com>
Signed-off-by: Yuqi Jin <jinyuqi@...wei.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@...ilicon.com>
---
Hi Rong,

Could you please help test this patch on your platform, both individually
and together with [1]? [1] has already been tested on your platform [2].

[2] https://lkml.org/lkml/2020/7/8/227

 include/linux/fs.h | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index e019ea2f1347..2a9c2a30dc58 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -972,8 +972,19 @@ static inline struct file *get_file(struct file *f)
 	atomic_long_inc(&f->f_count);
 	return f;
 }
+
+static inline bool get_file_unless_negative(atomic_long_t *v, long a)
+{
+	long c = atomic_long_read(v);
+
+	if (c <= 0)
+		return false;
+
+	return atomic_long_add_return(a, v) - a > 0;
+}
+
 #define get_file_rcu_many(x, cnt)	\
-	atomic_long_add_unless(&(x)->f_count, (cnt), 0)
+	get_file_unless_negative(&(x)->f_count, (cnt))
 #define get_file_rcu(x) get_file_rcu_many((x), 1)
 #define file_count(x)	atomic_long_read(&(x)->f_count)
 
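For context, the RCU fd-lookup path that consumes get_file_rcu_many() looks
roughly like the sketch below (paraphrased from memory of fs/file.c of this
era, not an exact quote). When get_file_rcu_many() refuses the reference
because the count has already dropped to zero, the lookup simply retries:

static struct file *__fget_files(struct files_struct *files, unsigned int fd,
				 fmode_t mask, unsigned int refs)
{
	struct file *file;

	rcu_read_lock();
loop:
	file = fcheck_files(files, fd);
	if (file) {
		/* Don't hand out files whose mode is masked off. */
		if (file->f_mode & mask)
			file = NULL;
		/* Raced with the final fput(); look the fd up again. */
		else if (!get_file_rcu_many(file, refs))
			goto loop;
	}
	rcu_read_unlock();

	return file;
}
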
-- 
2.7.4
