Date:   Thu,  3 May 2018 13:32:42 -0700
From:   Davidlohr Bueso <dave@...olabs.net>
To:     akpm@...ux-foundation.org, aarcange@...hat.com
Cc:     joe.lawrence@...hat.com, gareth.evans@...textis.co.uk,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        dave@...olabs.net, stable@...nel.org,
        Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH 1/2] Revert "ipc/shm: Fix shmat mmap nil-page protection"

Commit 95e91b831f87 ("ipc/shm: Fix shmat mmap nil-page protection") was
built on the idea that we should never map addr=0 with MAP_FIXED, even
as root. However, it was reported that this scenario is in fact valid,
which makes the patch both bogus and a userspace regression. For
example, X11's libint10.so relies on shmat(1, SHM_RND) for lowmem
initialization[1].

[1] https://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/os-support/linux/int10/linux.c#n347
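
For illustration only (not taken from the X server; the segment size and
error handling below are made up for this sketch), this is the kind of
attach the int10 code performs: a low, unaligned address plus SHM_RND,
which the kernel rounds down to 0. Actually landing on the nil page is
still subject to mmap_min_addr/root privileges, and on a kernel carrying
95e91b831f87 the shmat() call is expected to fail with EINVAL instead of
rounding down:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	/* throwaway private segment just for the demonstration */
	int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
	if (shmid < 0) {
		perror("shmget");
		return 1;
	}

	/* addr = 1 is rounded down to the nearest SHMLBA multiple (0)
	 * because of SHM_RND; with 95e91b831f87 applied this returns
	 * (void *)-1 with errno == EINVAL instead.
	 */
	void *p = shmat(shmid, (void *)1, SHM_RND);
	if (p == (void *)-1)
		perror("shmat");
	else
		printf("attached at %p\n", p);

	if (p != (void *)-1)
		shmdt(p);
	shmctl(shmid, IPC_RMID, NULL);
	return 0;
}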

Reported-by: Joe Lawrence <joe.lawrence@...hat.com>
Reported-by: Andrea Arcangeli <aarcange@...hat.com>
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
 ipc/shm.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/ipc/shm.c b/ipc/shm.c
index 0075990338f4..b81d53c8f459 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1371,13 +1371,8 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg,
 
 	if (addr) {
 		if (addr & (shmlba - 1)) {
-			/*
-			 * Round down to the nearest multiple of shmlba.
-			 * For sane do_mmap_pgoff() parameters, avoid
-			 * round downs that trigger nil-page and MAP_FIXED.
-			 */
-			if ((shmflg & SHM_RND) && addr >= shmlba)
-				addr &= ~(shmlba - 1);
+			if (shmflg & SHM_RND)
+				addr &= ~(shmlba - 1);  /* round down */
 			else
 #ifndef __ARCH_FORCE_SHMLBA
 				if (addr & ~PAGE_MASK)
-- 
2.13.6
