Message-ID: <ZQvqvpSbyub6gFZX@gmail.com>
Date: Thu, 21 Sep 2023 09:03:26 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Yi Sun <yi.sun@...el.com>
Cc: dave.hansen@...el.com, linux-kernel@...r.kernel.org,
x86@...nel.org, sohil.mehta@...el.com, ak@...ux.intel.com,
ilpo.jarvinen@...ux.intel.com, heng.su@...el.com,
tony.luck@...el.com, yi.sun@...ux.intel.com, yu.c.chen@...el.com
Subject: Re: [PATCH v7 1/3] x86/fpu: Measure the Latency of XSAVE and XRSTOR

* Yi Sun <yi.sun@...el.com> wrote:

> @@ -113,7 +116,7 @@ static inline u64 xfeatures_mask_independent(void)
>   * original instruction which gets replaced. We need to use it here as the
>   * address of the instruction where we might get an exception at.
>   */
> -#define XSTATE_XSAVE(st, lmask, hmask, err)                             \
> +#define __XSTATE_XSAVE(st, lmask, hmask, err)                           \
>          asm volatile(ALTERNATIVE_3(XSAVE,                               \
>                                     XSAVEOPT, X86_FEATURE_XSAVEOPT,      \
>                                     XSAVEC, X86_FEATURE_XSAVEC,          \
> @@ -130,7 +133,7 @@ static inline u64 xfeatures_mask_independent(void)
>   * Use XRSTORS to restore context if it is enabled. XRSTORS supports compact
>   * XSAVE area format.
>   */
> -#define XSTATE_XRESTORE(st, lmask, hmask)                               \
> +#define __XSTATE_XRESTORE(st, lmask, hmask)                             \
>          asm volatile(ALTERNATIVE(XRSTOR,                                \
>                                   XRSTORS, X86_FEATURE_XSAVES)           \
>                       "\n"                                               \
> @@ -140,6 +143,35 @@ static inline u64 xfeatures_mask_independent(void)
>                       : "D" (st), "m" (*st), "a" (lmask), "d" (hmask)    \
>                       : "memory")
>
> +#if defined(CONFIG_X86_DEBUG_FPU)
> +#define XSTATE_XSAVE(fps, lmask, hmask, err)                            \
> +        do {                                                            \
> +                struct fpstate *f = fps;                                \
> +                u64 tc = -1;                                            \
> +                if (tracepoint_enabled(x86_fpu_latency_xsave))          \
> +                        tc = trace_clock();                             \
> +                __XSTATE_XSAVE(&f->regs.xsave, lmask, hmask, err);      \
> +                if (tracepoint_enabled(x86_fpu_latency_xsave))          \
> +                        trace_x86_fpu_latency_xsave(f, trace_clock() - tc);\
> +        } while (0)
> +
> +#define XSTATE_XRESTORE(fps, lmask, hmask)                              \
> +        do {                                                            \
> +                struct fpstate *f = fps;                                \
> +                u64 tc = -1;                                            \
> +                if (tracepoint_enabled(x86_fpu_latency_xrstor))         \
> +                        tc = trace_clock();                             \
> +                __XSTATE_XRESTORE(&f->regs.xsave, lmask, hmask);        \
> +                if (tracepoint_enabled(x86_fpu_latency_xrstor))         \
> +                        trace_x86_fpu_latency_xrstor(f, trace_clock() - tc);\

This v7 version does not adequately address the review feedback I gave for
v6: it still adds tracing overhead to potentially hot paths. Putting it
behind CONFIG_X86_DEBUG_FPU is not a solution either: that option is
default-y, so it is de facto enabled on all major distros...
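
A quick way to confirm that on a distro kernel - the path and version below
are illustrative, mirroring the config check further down:

  $ grep X86_DEBUG_FPU /boot/config-6.2.0-33-generic
  CONFIG_X86_DEBUG_FPU=y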

It seems unnecessarily complex: why does it have to measure latency
directly? Tracepoints *by default* come with event timestamps. A latency
measurement tool should be able to subtract two timestamps to extract the
latency between two tracepoints...
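
A minimal sketch of that approach, assuming the pre-existing
x86_fpu_before_save/x86_fpu_after_save events from
include/trace/events/fpu.h are available on the kernel in question:

  # Every trace line carries a timestamp; the save latency is the
  # delta between a task's paired before/after events:
  cd /sys/kernel/tracing
  echo 1 > events/x86_fpu/x86_fpu_before_save/enable
  echo 1 > events/x86_fpu/x86_fpu_after_save/enable
  cat trace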

In fact, function tracing is enabled on all major Linux distros:

  kepler:~/tip> grep FUNCTION_TRACER /boot/config-6.2.0-33-generic
  CONFIG_HAVE_FUNCTION_TRACER=y
  CONFIG_FUNCTION_TRACER=y

Why not just enable function tracing for the affected FPU context-switching
functions?

The relevant functions are already standalone in a typical kernel:

  xsave:             # ffffffff8103cfe0 T save_fpregs_to_fpstate
  xrstor:            # ffffffff8103d160 T restore_fpregs_from_fpstate
  xrstor_supervisor: # ffffffff8103dc50 T fpu__clear_user_states

... and FPU context-switching overhead dominates the cost of these
functions.
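
One way to do that, with no kernel changes at all, is the stock
function_graph tracer - a sketch, assuming tracefs is mounted at
/sys/kernel/tracing:

  cd /sys/kernel/tracing
  echo save_fpregs_to_fpstate restore_fpregs_from_fpstate \
       fpu__clear_user_states > set_ftrace_filter
  echo function_graph > current_tracer
  cat trace
  # The duration column gives the per-call latency, e.g. (illustrative):
  #  1)   0.853 us    |  save_fpregs_to_fpstate();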

Thanks,

	Ingo