Message-Id: <20170612152521.720121897@linuxfoundation.org>
Date:   Mon, 12 Jun 2017 17:24:20 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     linux-kernel@...r.kernel.org
Cc:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        stable@...r.kernel.org, Alexander Graf <agraf@...e.de>,
        Marc Zyngier <marc.zyngier@....com>,
        Christoffer Dall <cdall@...aro.org>
Subject: [PATCH 4.11 053/150] arm64: KVM: Allow unaligned accesses at EL2

4.11-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Marc Zyngier <marc.zyngier@....com>

commit 78fd6dcf11468a5a131b8365580d0c613bcc02cb upstream.

We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses
at EL2, but we're not really prepared to deal with it. So far, this
has gone unnoticed, but GCC 7 started emitting such accesses (in
particular, 64-bit writes on a 32-bit boundary).
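
As an aside for reviewers, here is a purely illustrative C sketch
(the struct and function names are made up, not taken from the
report or the kernel) of the kind of pattern where GCC 7's store
merging can turn two adjacent 32-bit writes into a single 64-bit
store at a 32-bit boundary, which trips the SCTLR_EL2.A trap:

	/*
	 * Illustrative only: hypothetical names.  The struct's natural
	 * alignment is 4 bytes, so a compiler that merges the two
	 * 32-bit stores below may emit one 64-bit store (e.g.
	 * "str xzr, [x0]") at a 32-bit boundary, which traps at EL2
	 * while SCTLR_EL2.A is set.
	 */
	struct hyp_regs {
		unsigned int lo;
		unsigned int hi;
	};

	void hyp_regs_clear(struct hyp_regs *regs)
	{
		regs->lo = 0;
		regs->hi = 0;
	}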

Since the rest of the kernel is pretty happy about that, let's follow
its example and set SCTLR_EL2.A to zero. Modern CPUs don't really
care.

Reported-by: Alexander Graf <agraf@...e.de>
Signed-off-by: Marc Zyngier <marc.zyngier@....com>
Signed-off-by: Christoffer Dall <cdall@...aro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 arch/arm64/kvm/hyp-init.S |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -104,9 +104,10 @@ __do_hyp_init:
 
 	/*
 	 * Preserve all the RES1 bits while setting the default flags,
-	 * as well as the EE bit on BE.
+	 * as well as the EE bit on BE. Drop the A flag since the compiler
+	 * is allowed to generate unaligned accesses.
 	 */
-	ldr	x4, =(SCTLR_EL2_RES1 | SCTLR_ELx_FLAGS)
+	ldr	x4, =(SCTLR_EL2_RES1 | (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A))
 CPU_BE(	orr	x4, x4, #SCTLR_ELx_EE)
 	msr	sctlr_el2, x4
 	isb
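
For reference, a minimal hedged C sketch of the mask arithmetic
above.  Only the position of the A bit (bit 1 of SCTLR_ELx, the
alignment-check enable) is architectural; the other constants are
simplified stand-ins rather than the kernel's real definitions.  It
prints the single bit by which the old and new values differ:

	#include <stdio.h>
	#include <stdint.h>

	/* Simplified stand-ins for the sysreg definitions; only the
	 * A bit position is taken from the architecture. */
	#define SCTLR_ELx_A		(1UL << 1)
	#define SCTLR_ELx_FLAGS		0x100fUL	/* hypothetical */
	#define SCTLR_EL2_RES1		0x30c50830UL	/* hypothetical */

	int main(void)
	{
		uint64_t old_val = SCTLR_EL2_RES1 | SCTLR_ELx_FLAGS;
		uint64_t new_val = SCTLR_EL2_RES1 |
				   (SCTLR_ELx_FLAGS & ~SCTLR_ELx_A);

		/* The values differ only in bit 1: alignment checking
		 * at EL2 is dropped, everything else is preserved. */
		printf("old=%#llx new=%#llx diff=%#llx\n",
		       (unsigned long long)old_val,
		       (unsigned long long)new_val,
		       (unsigned long long)(old_val ^ new_val));
		return 0;
	}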

