Message-Id: <20220809175510.849644425@linuxfoundation.org>
Date:   Tue,  9 Aug 2022 20:00:33 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Andrew Cooper <andrew.cooper3@...rix.com>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Borislav Petkov <bp@...e.de>
Subject: [PATCH 5.4 15/15] x86/speculation: Add LFENCE to RSB fill sequence

From: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>

commit ba6e31af2be96c4d0536f2152ed6f7b6c11bca47 upstream.

The RSB fill sequence does not have any protection against
misprediction of the conditional branch at the end of the sequence.
The CPU can speculatively execute code immediately after the sequence,
while RSB filling has not yet completed.

  #define __FILL_RETURN_BUFFER(reg, nr, sp)	\
  	mov	$(nr/2), reg;			\
  771:						\
  	call	772f;				\
  773:	/* speculation trap */			\
  	pause;					\
  	lfence;					\
  	jmp	773b;				\
  772:						\
  	call	774f;				\
  775:	/* speculation trap */			\
  	pause;					\
  	lfence;					\
  	jmp	775b;				\
  774:						\
  	dec	reg;				\
  	jnz	771b;  <----- CPU can mispredict here.				\
  	add	$(BITS_PER_LONG/8) * nr, sp;

Before the RSB is filled, RETs that come in program order after this
macro can be executed speculatively, making them vulnerable to
RSB-based attacks.
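
As an illustration (a hypothetical call site, not taken from this
patch; the register and loop-count arguments are assumptions), any RET
that follows the expanded macro sits in that misprediction window:

  	/* illustrative caller: fill the RSB, then return */
  	__FILL_RETURN_BUFFER(%_ASM_BX, RSB_CLEAR_LOOPS, %_ASM_SP)
  	/*
  	 * If the final "jnz 771b" is mispredicted, the CPU can
  	 * speculatively reach this RET before all RSB entries are
  	 * filled, and consume a stale or attacker-controlled entry.
  	 */
  	ret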

Mitigate it by adding an LFENCE after the conditional branch to prevent
speculation while the RSB is being filled.

Suggested-by: Andrew Cooper <andrew.cooper3@...rix.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>
Signed-off-by: Borislav Petkov <bp@...e.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 arch/x86/include/asm/nospec-branch.h |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -61,7 +61,9 @@
 774:						\
 	dec	reg;				\
 	jnz	771b;				\
-	add	$(BITS_PER_LONG/8) * nr, sp;
+	add	$(BITS_PER_LONG/8) * nr, sp;	\
+	/* barrier for jnz misprediction */	\
+	lfence;
 
 #define __ISSUE_UNBALANCED_RET_GUARD(sp)	\
 	call	881f;				\
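
For reference, the tail of __FILL_RETURN_BUFFER as it reads after this
change (reconstructed from the hunk above):

  774:						\
  	dec	reg;				\
  	jnz	771b;				\
  	add	$(BITS_PER_LONG/8) * nr, sp;	\
  	/* barrier for jnz misprediction */	\
  	lfence;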