Message-Id: <1472032539-30256-1-git-send-email-mszeredi@redhat.com>
Date:   Wed, 24 Aug 2016 11:55:39 +0200
From:   Miklos Szeredi <mszeredi@...hat.com>
To:     linux-fsdevel@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, Al Viro <viro@...iv.linux.org.uk>,
        Jan Kara <jack@...e.cz>,
        Lino Sanfilippo <LinoSanfilippo@....de>,
        Eric Paris <eparis@...hat.com>
Subject: [PATCH] fanotify: fix race between fanotify_release() and fanotify_get_response()

List corruption was reported with a fanotify stress test.

The bug turned out to be due to fsnotify_remove_event() being called on
an event that was on the fanotify_data.access_list, a list protected by
fanotify_data.access_lock rather than by notification_mutex.  This
resulted in list_del_init() being run concurrently on the same list
entry.

This was introduced by commit 09e5f14e57c7 ("fanotify: on group destroy
allow all waiters to bypass permission check") which made
fanotify_get_response() flush out events when bypass_perm was set.  The
flush doesn't normally happen, since the wake_up() is called after the
access_list has been cleaned out in fanotify_release().  But the two are
not synchronized: fanotify_get_response() could still be processing a
previous wakeup by the time bypass_perm becomes true.  This was seen in
the crashdumps in the report.

This bug can be solved in multiple ways; perhaps the simplest is to move
the setting of bypass_perm to after the list has been processed.

In theory there's also a memory ordering problem here.  atomic_inc() by
itself doesn't imply a memory barrier, and spin_unlock() is only a
semi-permeable barrier, so we need an explicit memory barrier to make
sure that the condition is perceived after the list is cleared.

Similarly we need barriers for the case when event->response is set
(i.e. non-zero): fsnotify_destroy_event() might destroy the event while
it's still on the access_list, since nothing guarantees that the store
of the response value in event->response will be perceived after the
list manipulation.  So add the necessary barriers there as well.

PS: not sure why bypass_perm is an atomic_t; it could just as well be a
boolean flag.

PPS: all this subtlety could be removed if the waitq were per-event,
which would also allow better performance.

Signed-off-by: Miklos Szeredi <mszeredi@...hat.com>
Fixes: 09e5f14e57c7 ("fanotify: on group destroy allow all waiters to bypass permission check")
Cc: <stable@...r.kernel.org> #v2.6.37+
Cc: Jan Kara <jack@...e.cz>
Cc: Lino Sanfilippo <LinoSanfilippo@....de>
Cc: Eric Paris <eparis@...hat.com>
---
 fs/notify/fanotify/fanotify.c      |  5 +++++
 fs/notify/fanotify/fanotify_user.c | 25 +++++++++++++++++++++----
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
index d2f97ecca6a5..0d0cabd946e0 100644
--- a/fs/notify/fanotify/fanotify.c
+++ b/fs/notify/fanotify/fanotify.c
@@ -70,6 +70,11 @@ static int fanotify_get_response(struct fsnotify_group *group,
 	wait_event(group->fanotify_data.access_waitq, event->response ||
 				atomic_read(&group->fanotify_data.bypass_perm));
 
+	/*
+	 * Pairs with smp_wmb() before storing event->response.  This makes sure
+	 * that the list_del_init() done on the event is perceived after this.
+	 */
+	smp_rmb();
 	if (!event->response) {	/* bypass_perm set */
 		/*
 		 * Event was canceled because group is being destroyed. Remove
diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
index 8e8e6bcd1d43..af57e75772a0 100644
--- a/fs/notify/fanotify/fanotify_user.c
+++ b/fs/notify/fanotify/fanotify_user.c
@@ -193,6 +193,10 @@ static int process_access_response(struct fsnotify_group *group,
 	if (!event)
 		return -ENOENT;
 
+	/*
+	 * Make sure the dequeue is perceived before the store of "response"
+	 */
+	smp_wmb();
 	event->response = response;
 	wake_up(&group->fanotify_data.access_waitq);
 
@@ -305,6 +309,11 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
 		} else {
 #ifdef CONFIG_FANOTIFY_ACCESS_PERMISSIONS
 			if (ret < 0) {
+				/*
+				 * Make sure the dequeue is perceived before
+				 * the store of "response"
+				 */
+				smp_wmb();
 				FANOTIFY_PE(kevent)->response = FAN_DENY;
 				wake_up(&group->fanotify_data.access_waitq);
 				break;
@@ -365,26 +374,34 @@ static int fanotify_release(struct inode *ignored, struct file *file)
 	 * enter or leave access_list by now.
 	 */
 	spin_lock(&group->fanotify_data.access_lock);
-
-	atomic_inc(&group->fanotify_data.bypass_perm);
-
 	list_for_each_entry_safe(event, next, &group->fanotify_data.access_list,
 				 fae.fse.list) {
 		pr_debug("%s: found group=%p event=%p\n", __func__, group,
 			 event);
 
 		list_del_init(&event->fae.fse.list);
+		/*
+		 * Make sure the dequeue is perceived before the store of
+		 * "response"
+		 */
+		smp_wmb();
 		event->response = FAN_ALLOW;
 	}
 	spin_unlock(&group->fanotify_data.access_lock);
 
 	/*
-	 * Since bypass_perm is set, newly queued events will not wait for
+	 * After bypass_perm is set, newly queued events will not wait for
 	 * access response. Wake up the already sleeping ones now.
+	 *
+	 * Make sure we do this only *after* all events were taken off
+	 * group->fanotify_data.access_list, otherwise the entry might be
+	 * deleted concurrently by two entities, resulting in list corruption.
+	 *
 	 * synchronize_srcu() in fsnotify_destroy_group() will wait for all
 	 * processes sleeping in fanotify_handle_event() waiting for access
 	 * response and thus also for all permission events to be freed.
 	 */
+	atomic_inc(&group->fanotify_data.bypass_perm);
 	wake_up(&group->fanotify_data.access_waitq);
 #endif
 
-- 
2.5.5
