Message-Id: <20220902121406.112426098@linuxfoundation.org>
Date: Fri, 2 Sep 2022 14:19:22 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Andrei Vagin <avagin@...il.com>,
Dmitry Safonov <dima@...sta.com>,
Thomas Gleixner <tglx@...utronix.de>,
Sasha Levin <sashal@...nel.org>
Subject: [PATCH 5.4 73/77] lib/vdso: Mark do_hres() and do_coarse() as __always_inline
From: Andrei Vagin <avagin@...il.com>
[ Upstream commit c966533f8c6c45f93c52599f8460e7695f0b7eaa ]
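Background note, not part of the original changelog: __always_inline forces the
compiler to inline a function even where it would otherwise keep an out-of-line
copy and pay call overhead, which matters for plain static helpers like
do_hres() and do_coarse(). In the kernel it is essentially the always_inline
function attribute; a rough sketch of the definition (the exact guards in
include/linux/compiler_types.h vary between kernel versions):

	/* sketch; the real kernel definition carries extra compiler guards */
	#define __always_inline inline __attribute__((__always_inline__))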
Performance numbers for Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
(the more clock_gettime() cycles, the better):
clock            | before    | after     | diff
------------------------------------------------
monotonic        | 153222105 | 166775025 | 8.8%
monotonic-coarse | 671557054 | 691513017 | 3.0%
monotonic-raw    | 147116067 | 161057395 | 9.5%
boottime         | 153446224 | 166962668 | 9.1%
On arm64, the improvement for monotonic and boottime is around 3.5%:
clock            | before    | after     | diff
================================================
monotonic        | 17326692  | 17951770  | 3.6%
monotonic-coarse | 43624027  | 44215292  | 1.3%
monotonic-raw    | 17541809  | 17554932  | 0.1%
boottime         | 17334982  | 17954361  | 3.5%
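The changelog does not carry the benchmark source; the numbers above count how
many clock_gettime() calls complete in a fixed interval, so they were
presumably gathered with a loop of roughly this shape (a minimal sketch, not
the actual test program used for the figures above):

	/*
	 * Minimal clock_gettime() throughput sketch: count calls completed
	 * in ~3 seconds, using the measured call itself as the time source.
	 * Illustration only; not the program that produced the numbers above.
	 */
	#include <stdio.h>
	#include <time.h>

	int main(void)
	{
		struct timespec start, now;
		unsigned long long calls = 0;

		clock_gettime(CLOCK_MONOTONIC, &start);
		do {
			clock_gettime(CLOCK_MONOTONIC, &now); /* vDSO fast path */
			calls++;
		} while (now.tv_sec - start.tv_sec < 3);

		printf("%llu clock_gettime() calls in ~3 seconds\n", calls);
		return 0;
	}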
[ tglx: Avoid the goto ]
Signed-off-by: Andrei Vagin <avagin@...il.com>
Signed-off-by: Dmitry Safonov <dima@...sta.com>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Link: https://lore.kernel.org/r/20191112012724.250792-3-dima@arista.com
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
lib/vdso/gettimeofday.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
index c549e72758aa0..5667fb746a1fe 100644
--- a/lib/vdso/gettimeofday.c
+++ b/lib/vdso/gettimeofday.c
@@ -38,7 +38,7 @@ u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
 }
 #endif
 
-static int do_hres(const struct vdso_data *vd, clockid_t clk,
+static __always_inline int do_hres(const struct vdso_data *vd, clockid_t clk,
                    struct __kernel_timespec *ts)
 {
         const struct vdso_timestamp *vdso_ts = &vd->basetime[clk];
@@ -68,8 +68,8 @@ static int do_hres(const struct vdso_data *vd, clockid_t clk,
         return 0;
 }
 
-static int do_coarse(const struct vdso_data *vd, clockid_t clk,
-                     struct __kernel_timespec *ts)
+static __always_inline int do_coarse(const struct vdso_data *vd, clockid_t clk,
+                                     struct __kernel_timespec *ts)
 {
         const struct vdso_timestamp *vdso_ts = &vd->basetime[clk];
         u32 seq;
@@ -99,13 +99,15 @@ __cvdso_clock_gettime_common(clockid_t clock, struct __kernel_timespec *ts)
          */
         msk = 1U << clock;
         if (likely(msk & VDSO_HRES))
-                return do_hres(&vd[CS_HRES_COARSE], clock, ts);
+                vd = &vd[CS_HRES_COARSE];
         else if (msk & VDSO_COARSE)
                 return do_coarse(&vd[CS_HRES_COARSE], clock, ts);
         else if (msk & VDSO_RAW)
-                return do_hres(&vd[CS_RAW], clock, ts);
+                vd = &vd[CS_RAW];
+        else
+                return -1;
 
-        return -1;
+        return do_hres(vd, clock, ts);
 }
 
 static __maybe_unused int
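
Not part of the patch itself: with the hunks above applied, the common path has
exactly one do_hres() call site, which is what makes the __always_inline change
pay off (the compiler can emit a single fully inlined fast path instead of an
out-of-line call). Roughly, the resulting function looks like this; the lines
not visible in the diff context are filled in from the 5.4 source and may
differ in detail:

	static __maybe_unused int
	__cvdso_clock_gettime_common(clockid_t clock, struct __kernel_timespec *ts)
	{
		const struct vdso_data *vd = __arch_get_vdso_data();
		u32 msk;

		/* Check for negative values or invalid clocks */
		if (unlikely((u32) clock >= MAX_CLOCKS))
			return -1;

		/*
		 * Convert the clockid to a bitmask and use it to check which
		 * clocks are handled in the VDSO directly.
		 */
		msk = 1U << clock;
		if (likely(msk & VDSO_HRES))
			vd = &vd[CS_HRES_COARSE];
		else if (msk & VDSO_COARSE)
			return do_coarse(&vd[CS_HRES_COARSE], clock, ts);
		else if (msk & VDSO_RAW)
			vd = &vd[CS_RAW];
		else
			return -1;

		return do_hres(vd, clock, ts);	/* single call site, inlined */
	}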
--
2.35.1