Message-Id: <20180612204948.4752-1-dima@arista.com>
Date: Tue, 12 Jun 2018 21:49:48 +0100
From: Dmitry Safonov <dima@...sta.com>
To: linux-kernel@...r.kernel.org
Cc: Dmitry Safonov <dima@...sta.com>,
Andy Lutomirski <luto@...capital.net>,
Borislav Petkov <bp@...en8.de>,
Dmitry Safonov <0x7f454c46@...il.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Thomas Gleixner <tglx@...utronix.de>,
Vasiliy Khoruzhick <vasilykh@...sta.com>, x86@...nel.org
Subject: [RFC] x86/vdso: Align vdso after searching for free area
There is an erratum for AMD family 15h CPUs [1], and since
commit dfb09f9b7ab03 ("x86, amd: Avoid cache aliasing penalties on AMD
family 15h") bits [14:12] of the address are cleared for shared
libraries. Per-boot ASLR is also applied over the upper bits by ORing
them directly into the address.

As we need a special alignment and specific values in the lower bits,
it makes little sense to call get_unmapped_area() after calculating the
address. It can also lead to random crashes if get_unmapped_area()
actually changes/aligns the address, which we have observed on a 15h
CPU. Usually this is not a problem, as there are not many mappings at
that point (except possibly ld.so and uprobes?) and the resulting
address is the same before and after get_unmapped_area().

Move align_vdso_addr() after get_unmapped_area() to make sure that the
workaround for the AMD 15h erratum is always applied.
[1]: https://developer.amd.com/wordpress/media/2012/10/SharedL1InstructionCacheonAMD15hCPU.pdf
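
To illustrate the ordering problem in userspace terms, here is a
minimal sketch (not the kernel code; the mask value and the helper
names are illustrative only):

	#include <stdio.h>

	/* Illustrative mask: clear bits [14:12] of a page-aligned address. */
	#define ALIGN_MASK 0x7fffUL

	/* Round addr up to the alignment boundary, like align_vdso_addr(). */
	static unsigned long align_addr(unsigned long addr)
	{
		return (addr + ALIGN_MASK) & ~ALIGN_MASK;
	}

	/* Stand-in for get_unmapped_area(): may move a colliding hint. */
	static unsigned long find_free_area(unsigned long hint)
	{
		return hint + 0x1000;	/* pretend the hint was taken */
	}

	int main(void)
	{
		unsigned long addr = 0x7f0000001000UL;

		/* Old order: align, then search - the search may undo it. */
		unsigned long before = find_free_area(align_addr(addr));
		/* New order: search a mask-enlarged area, then align. */
		unsigned long after = align_addr(find_free_area(addr));

		printf("align-then-search: %#lx (aligned: %d)\n",
		       before, !(before & ALIGN_MASK));
		printf("search-then-align: %#lx (aligned: %d)\n",
		       after, !(after & ALIGN_MASK));
		return 0;
	}

Searching first over an area enlarged by get_align_mask() guarantees
that rounding the result up to the boundary still fits inside it.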
Cc: Andy Lutomirski <luto@...capital.net>
Cc: Borislav Petkov <bp@...en8.de>
Cc: Dmitry Safonov <0x7f454c46@...il.com>
Cc: "H. Peter Anvin" <hpa@...or.com>
Cc: Ingo Molnar <mingo@...hat.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Vasiliy Khoruzhick <vasilykh@...sta.com>
Cc: x86@...nel.org
Signed-off-by: Dmitry Safonov <dima@...sta.com>
---
arch/x86/entry/vdso/vma.c | 18 ++++++++++--------
arch/x86/include/asm/elf.h | 1 +
arch/x86/kernel/sys_x86_64.c | 2 +-
3 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 5b8b556dbb12..862f0cce3ee6 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -154,18 +154,26 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
unsigned long text_start;
+ unsigned long area_size = image->size - image->sym_vvar_start;
int ret = 0;
if (down_write_killable(&mm->mmap_sem))
return -EINTR;
- addr = get_unmapped_area(NULL, addr,
- image->size - image->sym_vvar_start, 0, 0);
+	/* Find a bigger area for the vma than needed - to align the vdso later. */
+ area_size += get_align_mask();
+ addr = get_unmapped_area(NULL, addr, area_size, 0, 0);
if (IS_ERR_VALUE(addr)) {
ret = addr;
goto up_fail;
}
+ /*
+ * Forcibly align the final address in case we have a hardware
+ * issue that requires alignment for performance reasons.
+ */
+ addr = align_vdso_addr(addr);
+
text_start = addr - image->sym_vvar_start;
/*
@@ -239,12 +247,6 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
addr = start;
}
- /*
- * Forcibly align the final address in case we have a hardware
- * issue that requires alignment for performance reasons.
- */
- addr = align_vdso_addr(addr);
-
return addr;
}
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 0d157d2a1e2a..88aa49294c9c 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -382,4 +382,5 @@ struct va_alignment {
extern struct va_alignment va_align;
extern unsigned long align_vdso_addr(unsigned long);
+extern unsigned long get_align_mask(void);
#endif /* _ASM_X86_ELF_H */
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 6a78d4b36a79..4dd74c6f236d 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -27,7 +27,7 @@
/*
* Align a virtual address to avoid aliasing in the I$ on AMD F15h.
*/
-static unsigned long get_align_mask(void)
+unsigned long get_align_mask(void)
{
/* handle 32- and 64-bit case with a single conditional */
if (va_align.flags < 0 || !(va_align.flags & (2 - mmap_is_ia32())))
--
2.13.6