Message-ID: <53301016.40902@cn.fujitsu.com>
Date:	Mon, 24 Mar 2014 18:59:34 +0800
From:	Gu Zheng <guz.fnst@...fujitsu.com>
To:	Benjamin LaHaise <bcrl@...ck.org>
CC:	Tang Chen <tangchen@...fujitsu.com>, Dave Jones <davej@...hat.com>,
	Al Viro <viro@...iv.linux.org.uk>, jmoyer@...hat.com,
	kosaki.motohiro@...fujitsu.com,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>,
	miaox@...fujitsu.com, linux-aio@...ck.org,
	fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: [V2 PATCH 2/2] aio: fix the conflict between aio read events and aio
 page migration

We do not take any additional protection on the ring pages at the read
events side, so it is possible for a page to be accessed after it has been
freed and reallocated to another part of the kernel. Such an access would
return invalid information. For example, we can hit the following race:

            thread 1                      |              thread 2
                                          |
aio_migratepage()                         |
 |-> take ctx->completion_lock            |
 |-> migrate_page_copy(new, old)          |
 |   *NOW*, ctx->ring_pages[idx] == old   |
                                          |
                                          |    *NOW*, ctx->ring_pages[idx] == old
                                          |    aio_read_events_ring()
                                          |     |-> ring = kmap_atomic(ctx->ring_pages[0])
                                          |     |-> ring->head = head;          *HERE, write to the old ring page*
                                          |     |-> kunmap_atomic(ring);
                                          |
 |-> ctx->ring_pages[idx] = new           |
 |   *BUT NOW*, thread 2's ring->head    |
 |    update went to the old page        |
 |-> release ctx->completion_lock         |

As shown above, the ring->head update is lost: thread 2 wrote it to the old
page after migrate_page_copy() had already copied the old contents to the
new page, so the new ring page is never updated.
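
For reference, the read events side looks roughly like the sketch below (a
simplified rendering of aio_read_events_ring() in fs/aio.c; the event-copy
loop, bounds checks, error handling and the real signature are omitted).
Note that it already runs under ctx->ring_lock, which is what the fix below
relies on:

static long aio_read_events_ring(struct kioctx *ctx)	/* simplified */
{
	struct aio_ring *ring;
	unsigned head;

	mutex_lock(&ctx->ring_lock);

	/* Read the current head from page 0 of the ring. */
	ring = kmap_atomic(ctx->ring_pages[0]);
	head = ring->head;
	kunmap_atomic(ring);

	/* ... copy completed events out to userspace ... */

	/*
	 * Before this patch, nothing stops aio_migratepage() from copying
	 * old -> new between the kmap above and the store below, so the
	 * write can land on the stale (old) page and be lost.
	 */
	ring = kmap_atomic(ctx->ring_pages[0]);
	ring->head = head;
	kunmap_atomic(ring);

	mutex_unlock(&ctx->ring_lock);
	return 0;
}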

Fix this issue, and close the related race with aio_setup_ring(), by
taking the ring_lock mutex during page migration and around ring setup,
and by holding completion_lock where ioctx_add_table() writes to the ring
page.
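
With the patch applied, the migration path ends up looking roughly like the
sketch below (a reconstruction for illustration, not the literal resulting
code: the kioctx lookup through mapping->private_data, the idx bounds check
and teardown handling are simplified, and the error path is assumed to drop
the mutex via a goto):

static int aio_migratepage(struct address_space *mapping, struct page *new,
			   struct page *old, enum migrate_mode mode)
{
	struct kioctx *ctx = mapping->private_data;
	unsigned long flags;
	pgoff_t idx = old->index;
	int rc;

	/* Extra ref cnt for ring_pages[] array */
	get_page(new);

	/* Serialize against aio_read_events_ring(), which holds ring_lock */
	mutex_lock(&ctx->ring_lock);

	rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1);
	if (rc != MIGRATEPAGE_SUCCESS) {
		put_page(new);
		goto out;
	}

	/* The copy and pointer swap must not race with aio_complete() */
	spin_lock_irqsave(&ctx->completion_lock, flags);
	migrate_page_copy(new, old);
	ctx->ring_pages[idx] = new;
	spin_unlock_irqrestore(&ctx->completion_lock, flags);

	put_page(old);
out:
	mutex_unlock(&ctx->ring_lock);
	return rc;
}

The split between the two locks follows from context: the migration path can
sleep, so a mutex is fine there, while completion_lock is also taken from
aio_complete(), which can run in interrupt context, so it must remain a
spinlock taken with interrupts disabled.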

Reported-by: Yasuaki Ishimatsu <isimatu.yasuaki@...fujitsu.com>
Signed-off-by: Tang Chen <tangchen@...fujitsu.com>
Signed-off-by: Gu Zheng <guz.fnst@...fujitsu.com>
---
v2:
Merged Tang Chen's patch, which uses a spinlock to protect the ring buffer update.
Switched to the existing ring_lock mutex rather than an additional spinlock, as
Benjamin LaHaise suggested.
---
 fs/aio.c |   23 ++++++++++++++++++++++-
 1 files changed, 22 insertions(+), 1 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index 6453c12..ee74704 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -298,6 +298,9 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
 	/* Extra ref cnt for ring_pages[] array */
 	get_page(new);
 
+	/* Ensure no aio read event is in flight while the page is migrated */
+	mutex_lock(&ctx->ring_lock);
+
 	rc = migrate_page_move_mapping(mapping, new, old, NULL, mode, 1);
 	if (rc != MIGRATEPAGE_SUCCESS) {
 		put_page(new);
@@ -312,6 +315,8 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
 
 	put_page(old);
 
+	mutex_unlock(&ctx->ring_lock);
+
 	return rc;
 }
 #endif
@@ -523,9 +528,18 @@ static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
 					rcu_read_unlock();
 					spin_unlock(&mm->ioctx_lock);
 
+					/*
+					 * The ring pages must be accessed
+					 * while holding ctx->completion_lock
+					 * to prevent the aio page migration
+					 * path from migrating them under us.
+					 */
+					spin_lock_irq(&ctx->completion_lock);
 					ring = kmap_atomic(ctx->ring_pages[0]);
 					ring->id = ctx->id;
 					kunmap_atomic(ring);
+					spin_unlock_irq(&ctx->completion_lock);
+
 					return 0;
 				}
 
@@ -624,7 +638,14 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
 	if (!ctx->cpu)
 		goto err;
 
-	if (aio_setup_ring(ctx) < 0)
+	/*
+	 * Prevent races with page migration in aio_setup_ring() by holding
+	 * the ring_lock mutex.
+	 */
+	mutex_lock(&ctx->ring_lock);
+	err = aio_setup_ring(ctx);
+	mutex_unlock(&ctx->ring_lock);
+	if (err < 0)
 		goto err;
 
 	atomic_set(&ctx->reqs_available, ctx->nr_events - 1);
-- 
1.7.7
