Date:   Mon, 14 Aug 2023 12:51:53 -0700
From:   Josh Poimboeuf <jpoimboe@...nel.org>
To:     Borislav Petkov <bp@...en8.de>
Cc:     Peter Zijlstra <peterz@...radead.org>, x86@...nel.org,
        linux-kernel@...r.kernel.org, David.Kaplan@....com,
        Andrew.Cooper3@...rix.com, gregkh@...uxfoundation.org,
        nik.borisov@...e.com
Subject: Re: [PATCH v2 00/11] Fix up SRSO stuff

On Mon, Aug 14, 2023 at 06:44:47PM +0200, Borislav Petkov wrote:
> On Mon, Aug 14, 2023 at 01:44:26PM +0200, Peter Zijlstra wrote:
> > The one open technical issue I have with the mitigation is the alignment of
> > the RET inside srso_safe_ret(). The details given for retbleed stated that RET
> > should be on a 64-byte boundary, which is not the case here.
> 
> I have written this in the hope of making it clearer:
> 
> /*
>  * Some generic notes on the untraining sequences:
>  *
>  * They are interchangeable when it comes to flushing potentially wrong
>  * RET predictions from the BTB.
>  *
>  * The SRSO Zen1/2 (MOVABS) untraining sequence is longer than the
>  * Retbleed sequence because the return sequence done there
>  * (srso_safe_ret()) is longer, and the return sequence must fully nest
>  * within (end no later than) the untraining sequence. Therefore, the
>  * untraining sequence must overlap the return sequence.
>  *
>  * Regarding alignment - the instructions which need to be untrained
>  * must all start at a cacheline boundary for the Zen1/2 generations.
>  * That is, both the RET in zen_untrain_ret() and srso_safe_ret() in
>  * srso_untrain_ret() must be placed at the beginning of a cacheline.
>  */

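To make the overlap concrete - a byte-level sketch of what the comment
describes, using the opcode bytes from retpoline.S (the annotations are
mine, and what follows the int3s is omitted):

	srso_untrain_ret:
		.byte 0x48, 0xb8	/* movabs opcode; its 8-byte	*/
	srso_safe_ret:			/* immediate swallows the body	*/
		lea 8(%rsp), %rsp	/* 5 bytes: 48 8d 64 24 08	*/
		ret			/* 1 byte:  c3			*/
		int3			/* 1 byte:  cc			*/
		int3			/* 1 byte:  cc			*/

Entered at srso_untrain_ret, the CPU decodes one 10-byte MOVABS whose
immediate covers the whole srso_safe_ret body; entered at srso_safe_ret,
it executes the LEA and the RET. Both sequences end on the same byte,
which is exactly the nesting the comment asks for.
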
It's a good comment, but the RET in srso_safe_ret() is still misaligned.
Don't we need something like this?

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 9bc19deacad1..373ac128a30a 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -251,13 +251,14 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  * thus a "safe" one to use.
  */
 	.align 64
-	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
+	.skip 64 - (.Lsrso_ret - srso_untrain_ret), 0xcc
 SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
 SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
 	lea 8(%_ASM_SP), %_ASM_SP
+.Lsrso_ret:
 	ret
 	int3
 	int3
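
For completeness, the padding arithmetic with that change (a sketch with
hypothetical 'start'/'target' labels, not the real symbols):

	.align 64
	.skip 64 - (target - start), 0xcc
	start:				/* begins (target - start) bytes */
		.byte 0x48, 0xb8	/* before the boundary...	 */
		lea 8(%rsp), %rsp
	target:
		ret			/* ...so this lands exactly on	 */
		int3			/* the next 64-byte boundary	 */

After the .align the location counter is 0 mod 64, and the .skip pads
64 - (target - start) bytes of int3, so 'target' falls on the next
64-byte boundary. With the original expression it was srso_safe_ret,
i.e. the LEA, that got the boundary; the RET itself sat 5 bytes in.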
