Message-ID: <20230809072200.646688083@infradead.org>
Date: Wed, 09 Aug 2023 09:12:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: x86@...nel.org
Cc: linux-kernel@...r.kernel.org, peterz@...radead.org,
David.Kaplan@....com, Andrew.Cooper3@...rix.com,
jpoimboe@...nel.org, gregkh@...uxfoundation.org
Subject: [RFC][PATCH 03/17] x86/cpu: Make srso_untrain_ret consistent
This does change srso_untrain_ret a little to be more consistent with
srso_alias_untrain_ret (and zen_untrain_ret). Specifically, I made
srso_untrain_ret tail-call srso_return_thunk instead of open-coding
the call to srso_safe_ret itself. This matches how srso_alias_untrain_ret
and zen_untrain_ret also tail-call their respective return thunks.
If this is a problem it can easily be fixed and a comment added to
explain why -- but this way they all end with a tail-call to their own
return thunk, which is nice and consistent.
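As a sketch of the common shape (illustrative only; foo_untrain_ret and
foo_return_thunk are hypothetical stand-ins for the three real pairs in
retpoline.S, whose untraining sequences differ per mitigation):

SYM_FUNC_START(foo_untrain_ret)
	/* mitigation-specific untraining sequence ... */
	lfence				/* serialize; no speculation past here */
	jmp foo_return_thunk		/* tail-call the matching return thunk */
	int3				/* speculation trap, never executed */
SYM_FUNC_END(foo_untrain_ret)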
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
arch/x86/lib/retpoline.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -262,7 +262,7 @@ SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLO
 	int3
 	/* end of movabs */
 	lfence
-	call srso_safe_ret
+	jmp srso_return_thunk
 	int3
 SYM_CODE_END(srso_safe_ret)
 SYM_FUNC_END(srso_untrain_ret)
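For reference, with the patch applied the tail of the routine (as given
by the hunk context above) reads:

	lfence
	jmp srso_return_thunk
	int3
SYM_CODE_END(srso_safe_ret)
SYM_FUNC_END(srso_untrain_ret)

Since the jmp never returns, the trailing int3 is not executed
architecturally; presumably it only acts as a speculation trap, as with
the other int3 padding in this file.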