Message-Id: <20170703133420.051480668@linuxfoundation.org>
Date: Mon, 3 Jul 2017 15:35:03 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Heiko Carstens <heiko.carstens@...ibm.com>,
Martin Schwidefsky <schwidefsky@...ibm.com>,
Sasha Levin <alexander.levin@...izon.com>
Subject: [PATCH 4.9 123/172] s390/ctl_reg: make __ctl_load a full memory barrier
4.9-stable review patch. If anyone has any objections, please let me know.
------------------
From: Heiko Carstens <heiko.carstens@...ibm.com>
[ Upstream commit e991c24d68b8c0ba297eeb7af80b1e398e98c33f ]
We have quite a lot of code that depends on the order of the
__ctl_load inline assembly and subsequent memory accesses, like
e.g. disabling lowcore protection and the writing to lowcore.
Since the __ctl_load macro does not have memory barrier semantics, nor
any other dependencies, the compiler is, theoretically, free to shuffle
code around. Or in other words: storing to lowcore could happen before
lowcore protection is disabled.
In order to avoid this class of potential bugs simply add a full
memory barrier to the __ctl_load macro.
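
[Editor's note: for readers not fluent in gcc extended asm, the fix works
because a "memory" clobber turns the asm statement into a full compiler
barrier. The stand-alone sketch below is not kernel code; the macro and
variable names are made up purely for illustration, and the asm template
is left empty to isolate the compiler-ordering effect (in the real macro
the template contains the lctlg instruction).]

	/* Hypothetical example, not from the kernel tree. */
	static unsigned long lowcore_word;

	/*
	 * No clobber list: the compiler only sees an input operand and
	 * may legally move unrelated loads and stores across the asm.
	 */
	#define load_cr_noclobber(val) \
		asm volatile("" : : "r" (val))

	/*
	 * With a "memory" clobber the compiler must assume the asm reads
	 * and writes arbitrary memory, so no memory access may be moved
	 * across it and no cached memory values may be reused after it.
	 */
	#define load_cr_barrier(val) \
		asm volatile("" : : "r" (val) : "memory")

	static void disable_protection_then_store(unsigned long cr0)
	{
		load_cr_barrier(cr0);	/* e.g. clear the protection bit */
		lowcore_word = 1;	/* cannot be emitted before the asm */
	}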
Signed-off-by: Heiko Carstens <heiko.carstens@...ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@...ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@...izon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/s390/include/asm/ctl_reg.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
--- a/arch/s390/include/asm/ctl_reg.h
+++ b/arch/s390/include/asm/ctl_reg.h
@@ -15,7 +15,9 @@
BUILD_BUG_ON(sizeof(addrtype) != (high - low + 1) * sizeof(long));\
asm volatile( \
" lctlg %1,%2,%0\n" \
- : : "Q" (*(addrtype *)(&array)), "i" (low), "i" (high));\
+ : \
+ : "Q" (*(addrtype *)(&array)), "i" (low), "i" (high) \
+ : "memory"); \
}
#define __ctl_store(array, low, high) { \
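
[Editor's note: for context, this is roughly what the complete __ctl_load
macro looks like after the patch, reconstructed from the diff hunk plus
the surrounding code in arch/s390/include/asm/ctl_reg.h; whitespace may
differ slightly from the tree. The only functional change is the added
"memory" clobber; the lctlg instruction and its operands are untouched.]

	#define __ctl_load(array, low, high) {					\
		typedef struct { char _[sizeof(array)]; } addrtype;		\
										\
		BUILD_BUG_ON(sizeof(addrtype) != (high - low + 1) * sizeof(long));\
		asm volatile(							\
			"	lctlg	%1,%2,%0\n"				\
			:							\
			: "Q" (*(addrtype *)(&array)), "i" (low), "i" (high)	\
			: "memory");						\
	}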