Message-ID: <87pn8rokjz.fsf@nanos.tec.linutronix.de>
Date:   Sun, 19 Jul 2020 12:39:44 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Arvind Sankar <nivedita@...m.mit.edu>, hpa@...or.com
Cc:     Andy Lutomirski <luto@...capital.net>,
        Joerg Roedel <joro@...tes.org>, Ingo Molnar <mingo@...hat.com>,
        Borislav Petkov <bp@...en8.de>, x86@...nel.org,
        Andy Lutomirski <luto@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Joerg Roedel <jroedel@...e.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] x86/idt: Make sure idt_table takes a whole page

Arvind Sankar <nivedita@...m.mit.edu> writes:
> To repeat the commit message, the problem is not misaligned
> bss..page_aligned objects, but symbols in _other_ bss sections, which
> can get allocated in the last page of bss..page_aligned, because its end
> isn't page-aligned (maybe it should be?)

That's the real and underlying problem.
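
As a rough illustration (plain userspace C with hypothetical addresses,
not kernel code; the 2048-byte size stands in for a 32-bit IDT of 256
gates of 8 bytes each), a symbol placed directly behind a sub-page
object in .bss..page_aligned ends up in that object's page unless the
section end is rounded up to a page boundary:

#include <stdio.h>

#define PAGE_SIZE	4096UL

/* Round addr up to the next multiple of align (align is a power of two). */
static unsigned long align_up(unsigned long addr, unsigned long align)
{
	return (addr + align - 1) & ~(align - 1);
}

int main(void)
{
	unsigned long idt_start = 0x100000UL;		/* page aligned */
	unsigned long idt_end   = idt_start + 2048;	/* sub-page object */

	/* Current layout: next .bss symbol placed directly behind it. */
	unsigned long next_nofix = idt_end;
	/* Layout with ALIGN(PAGE_SIZE) at the end of .bss..page_aligned. */
	unsigned long next_fixed = align_up(idt_end, PAGE_SIZE);

	printf("without ALIGN: %#lx shares the IDT page: %s\n", next_nofix,
	       next_nofix / PAGE_SIZE == idt_start / PAGE_SIZE ? "yes" : "no");
	printf("with ALIGN:    %#lx shares the IDT page: %s\n", next_fixed,
	       next_fixed / PAGE_SIZE == idt_start / PAGE_SIZE ? "yes" : "no");
	return 0;
}

Compiled and run, the first case reports sharing the IDT's page, the
second does not.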

> Given that this IDT's page is actually going to be mapped with different
> page protections, it seems like allocating the full page isn't
> unreasonable.

Wrong. The expectation for .bss..page_aligned is that each object in that
section starts at a page boundary, independent of its size.
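
For reference, that is the declaration convention (a minimal sketch with
the kernel's __page_aligned_bss helper inlined so it builds standalone;
the object names and sizes here are made up):

#define PAGE_SIZE	4096
#define __page_aligned_bss \
	__attribute__((section(".bss..page_aligned"), aligned(PAGE_SIZE)))

/* Each object starts on its own page boundary, regardless of its size. */
char idt_like_table[2048]   __page_aligned_bss;	/* sub-page object */
char other_table[PAGE_SIZE] __page_aligned_bss;	/* exactly one page */

int main(void) { return 0; }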

Having the regular .bss objects, which have no alignment requirements,
start inside the page-aligned section when the last object there is not
a page or a multiple of the page size in length is just hideous.

The right fix is trivial. See below.

Thanks,

        tglx
----
 arch/x86/kernel/vmlinux.lds.S     |    1 +
 include/asm-generic/vmlinux.lds.h |    1 +
 2 files changed, 2 insertions(+)

--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -358,6 +358,7 @@ SECTIONS
 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
 		__bss_start = .;
 		*(.bss..page_aligned)
+		. = ALIGN(PAGE_SIZE);
 		*(BSS_MAIN)
 		BSS_DECRYPTED
 		. = ALIGN(PAGE_SIZE);
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -738,6 +738,7 @@
 	.bss : AT(ADDR(.bss) - LOAD_OFFSET) {				\
 		BSS_FIRST_SECTIONS					\
 		*(.bss..page_aligned)					\
+		. = ALIGN(PAGE_SIZE);					\
 		*(.dynbss)						\
 		*(BSS_MAIN)						\
 		*(COMMON)						\
