Message-Id: <20171223002505.593-2-aarcange@redhat.com>
Date:   Sat, 23 Dec 2017 01:25:05 +0100
From:   Andrea Arcangeli <aarcange@...hat.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Eric Biggers <ebiggers3@...il.com>
Cc:     Mike Rapoport <rppt@...ux.vnet.ibm.com>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        viro@...iv.linux.org.uk, linux-mm@...ck.org,
        syzkaller-bugs@...glegroups.com
Subject: [PATCH 1/1] userfaultfd: clear the vma->vm_userfaultfd_ctx if UFFD_EVENT_FORK fails

The previous fix, commit 384632e67e0829deb8015ee6ad916b180049d252, corrected
the refcounting in case of UFFD_EVENT_FORK failure for the fork
userfault paths. However, it still didn't clear the
vma->vm_userfaultfd_ctx of the vmas that had been set to point to the
aborted new uffd ctx earlier in dup_userfaultfd.
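
For illustration only (not part of the patch): a minimal user-space
sketch of the cleanup walk, with hypothetical struct and helper names
standing in for the real mm/vma structures; only the loop mirrors the
hunk below, and locking (mmap_sem) is omitted.

	/*
	 * Toy model: two "vmas" still point at an aborted uffd ctx after
	 * UFFD_EVENT_FORK failed; the walk clears the stale pointers the
	 * same way the patch does before dropping the ctx reference.
	 */
	#include <stdio.h>

	struct uffd_ctx { int refcount; };

	struct vma {
		struct uffd_ctx *uffd_ctx;	/* models vma->vm_userfaultfd_ctx.ctx */
		struct vma *next;		/* models vma->vm_next */
	};

	/* Clear every vma that still points at the aborted ctx. */
	static void clear_stale_ctx(struct vma *mmap, struct uffd_ctx *aborted)
	{
		for (struct vma *vma = mmap; vma; vma = vma->next)
			if (vma->uffd_ctx == aborted)
				vma->uffd_ctx = NULL;	/* stands in for NULL_VM_UFFD_CTX */
	}

	int main(void)
	{
		struct uffd_ctx aborted = { .refcount = 1 };
		struct vma v2 = { .uffd_ctx = &aborted, .next = NULL };
		struct vma v1 = { .uffd_ctx = &aborted, .next = &v2 };

		/* UFFD_EVENT_FORK failed: clear the vmas before releasing the ctx. */
		clear_stale_ctx(&v1, &aborted);
		printf("v1: %p v2: %p\n", (void *)v1.uffd_ctx, (void *)v2.uffd_ctx);
		return 0;
	}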

Cc: stable@...r.kernel.org
Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
---
 fs/userfaultfd.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 896f810b6a06..1a88916455bd 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -591,11 +591,14 @@ int handle_userfault(struct vm_fault *vmf, unsigned long reason)
 static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 					      struct userfaultfd_wait_queue *ewq)
 {
+	struct userfaultfd_ctx *release_new_ctx;
+
 	if (WARN_ON_ONCE(current->flags & PF_EXITING))
 		goto out;
 
 	ewq->ctx = ctx;
 	init_waitqueue_entry(&ewq->wq, current);
+	release_new_ctx = NULL;
 
 	spin_lock(&ctx->event_wqh.lock);
 	/*
@@ -622,8 +625,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 				new = (struct userfaultfd_ctx *)
 					(unsigned long)
 					ewq->msg.arg.reserved.reserved1;
-
-				userfaultfd_ctx_put(new);
+				release_new_ctx = new;
 			}
 			break;
 		}
@@ -638,6 +640,20 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 	__set_current_state(TASK_RUNNING);
 	spin_unlock(&ctx->event_wqh.lock);
 
+	if (release_new_ctx) {
+		struct vm_area_struct *vma;
+		struct mm_struct *mm = release_new_ctx->mm;
+
+		/* the various vma->vm_userfaultfd_ctx still points to it */
+		down_write(&mm->mmap_sem);
+		for (vma = mm->mmap; vma; vma = vma->vm_next)
+			if (vma->vm_userfaultfd_ctx.ctx == release_new_ctx)
+				vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
+		up_write(&mm->mmap_sem);
+
+		userfaultfd_ctx_put(release_new_ctx);
+	}
+
 	/*
 	 * ctx may go away after this if the userfault pseudo fd is
 	 * already released.
