Message-ID: <20160216224528.6ee7043d@annuminas.surriel.com>
Date:	Tue, 16 Feb 2016 22:45:28 -0500
From:	Rik van Riel <riel@...hat.com>
To:	linux-kernel@...r.kernel.org
Cc:	prarit@...hat.com, lwoodman@...hat.com, wmealing@...hat.com
Subject: [PATCH RHEL6.8] x86/mm: Improve AMD Bulldozer ASLR workaround

Fixes bug 1240883

Brew build: http://brewweb.devel.redhat.com/brew/taskinfo?taskID=10506428

RHEL6: the code is rearranged relative to upstream so that the address
       transformations happen in the RHEL6 code flow.

Tested on amd-pike-08.klab.eng.bos.redhat.com

commit 4e26d11f52684dc8b1632a8cfe450cb5197a8464
Author: Hector Marco-Gisbert <hecmargi@....es>
Date:   Fri Mar 27 12:38:21 2015 +0100

    x86/mm: Improve AMD Bulldozer ASLR workaround
    
    The ASLR implementation needs to special-case AMD F15h processors by
    clearing out bits [14:12] of the virtual address in order to avoid I$
    cross-invalidations and thus a performance penalty for certain workloads.
    For details, see:
    
      dfb09f9b7ab0 ("x86, amd: Avoid cache aliasing penalties on AMD family 15h")
    
    This special case reduces the mmapped file's entropy by 3 bits.
    
    The following output is from a run on an AMD Opteron 62xx class CPU
    under x86_64 Linux 4.0.0:
    
      $ for i in `seq 1 10`; do cat /proc/self/maps | grep "r-xp.*libc" ; done
      b7588000-b7736000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
      b7570000-b771e000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
      b75d0000-b777e000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
      b75b0000-b775e000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
      b7578000-b7726000 r-xp 00000000 00:01 4924       /lib/i386-linux-gnu/libc.so.6
      ...
    
    Bits [14:12] are always 0, i.e. the address always ends in 0x8000 or
    0x0000.
    
    32-bit systems, as in the example above, are especially sensitive
    to this issue because the randomness of the 32-bit VA space is only
    8 bits (see mmap_rnd()). With the Bulldozer special case, that leaves
    just 32 different slots for mmap virtual addresses.
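
    To make the arithmetic concrete, here is a minimal user-space C
    sketch (illustrative only, not part of the upstream commit) that
    counts the mmap slots before and after the workaround, using the
    8 bits of mmap_rnd() entropy mentioned above:

      #include <stdio.h>

      int main(void)
      {
              unsigned int rnd_bits = 8;   /* mmap_rnd() entropy on 32-bit */
              unsigned int cleared  = 3;   /* bits [14:12] forced to zero  */

              printf("slots without workaround: %u\n", 1u << rnd_bits);
              printf("slots with workaround:    %u\n",
                     1u << (rnd_bits - cleared));
              return 0;
      }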
    
    This patch randomizes the three affected bits once per boot rather
    than setting them to zero. Since all the shared pages have the same
    value at bits [14:12], there are no cache aliasing problems. The
    value is generated at system boot and is thus not known to a
    potential remote attacker. Therefore, the impact of the Bulldozer
    workaround is diminished and ASLR randomness is increased.
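
    The transformation itself can be sketched in user space as follows.
    This is a simplified model of align_addr(), not the kernel code: the
    0x7000 mask assumes the usual F15h 64K/2-way I$ geometry, rand()
    merely stands in for the boot-time get_random_int() call, and the
    candidate address is arbitrary:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int main(void)
      {
              unsigned long mask = 0x7000UL;   /* bits [14:12] on F15h */
              unsigned long bits, addr, aligned;

              srand((unsigned int)time(NULL)); /* stand-in for boot seed */
              bits = ((unsigned long)rand() << 12) & mask;

              addr = 0xb7583000UL;             /* page-aligned candidate */

              /* bottom-up case: round up past the mask, then force the
               * bit slice to the per-boot value instead of zero */
              aligned = ((addr + mask) & ~mask) | bits;

              printf("addr=%#lx -> aligned=%#lx (slice bits=%#lx)\n",
                     addr, aligned, bits);
              return 0;
      }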
    
    More details at:
    
      http://hmarco.org/bugs/AMD-Bulldozer-linux-ASLR-weakness-reducing-mmaped-files-by-eight.html
    
    Original white paper by AMD dealing with the issue:
    
      http://developer.amd.com/wordpress/media/2012/10/SharedL1InstructionCacheonAMD15hCPU.pdf
    
    Mentored-by: Ismael Ripoll <iripoll@...ca.upv.es>
    Signed-off-by: Hector Marco-Gisbert <hecmargi@....es>
    Signed-off-by: Borislav Petkov <bp@...e.de>
    Acked-by: Kees Cook <keescook@...omium.org>
    Cc: Alexander Viro <viro@...iv.linux.org.uk>
    Cc: Andrew Morton <akpm@...ux-foundation.org>
    Cc: H. Peter Anvin <hpa@...or.com>
    Cc: Jan-Simon <dl9pf@....de>
    Cc: Thomas Gleixner <tglx@...utronix.de>
    Cc: linux-fsdevel@...r.kernel.org
    Link: http://lkml.kernel.org/r/1427456301-3764-1-git-send-email-hecmargi@upv.es
    Signed-off-by: Ingo Molnar <mingo@...nel.org>

---
 arch/x86/include/asm/elf.h   | 1 +
 arch/x86/kernel/cpu/amd.c    | 4 ++++
 arch/x86/kernel/sys_x86_64.c | 7 +++++++
 3 files changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index a84bcac4fd5c..32293660f016 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -367,6 +367,7 @@ enum align_flags {
 struct va_alignment {
 	int flags;
 	unsigned long mask;
+	unsigned long bits;
 } ____cacheline_aligned;
 
 extern struct va_alignment va_align;
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index cb24605e2f7f..cf899a6d34e8 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -3,6 +3,7 @@
 #include <linux/mm.h>
 
 #include <linux/io.h>
+#include <linux/random.h>
 #include <asm/processor.h>
 #include <asm/apic.h>
 #include <asm/cpu.h>
@@ -409,6 +410,9 @@ static void __cpuinit bsp_init_amd(struct cpuinfo_x86 *c)
 
 		va_align.mask     = (upperbit - 1) & PAGE_MASK;
 		va_align.flags    = ALIGN_VA_32 | ALIGN_VA_64;
+
+		/* A random value per boot for bit slice [12:upper_bit) */
+		va_align.bits = get_random_int() & va_align.mask;
 	}
 }
 
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 261c75dd402f..7176e2ff17e2 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -22,6 +22,10 @@
 
 /*
  * Align a virtual address to avoid aliasing in the I$ on AMD F15h.
+ * The bits defined by the va_align.bits, [12:upper_bit), are set to
+ * a random value instead of zeroing them. This random value is
+ * computed once per boot. This form of ASLR is known as "per-boot
+ * ASLR".
  *
  * @flags denotes the allocation direction - bottomup or topdown -
  * or vDSO; see call sites below.
@@ -49,8 +53,11 @@ unsigned long align_addr(unsigned long addr, struct file *filp,
 	 */
 	if (!(flags & ALIGN_TOPDOWN))
 		tmp_addr += va_align.mask;
+	else
+		tmp_addr -= va_align.mask;
 
 	tmp_addr &= ~va_align.mask;
+	tmp_addr |= va_align.bits;
 
 	return tmp_addr;
 }
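
A quick run-time sanity check (a hypothetical user-space helper, not
part of this patch) is to print bits [14:12] of the libc mapping base:
within a single boot the value should be identical across processes,
and it should change across reboots.

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          FILE *f = fopen("/proc/self/maps", "r");
          char line[512];
          unsigned long start;

          if (!f)
                  return 1;

          while (fgets(line, sizeof(line), f)) {
                  if (strstr(line, "r-xp") && strstr(line, "libc")) {
                          sscanf(line, "%lx", &start);
                          printf("libc base %#lx, bits [14:12] = %lx\n",
                                 start, (start >> 12) & 0x7);
                          break;
                  }
          }
          fclose(f);
          return 0;
  }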
