Date: Mon, 06 Jun 2016 11:40:41 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Mel Gorman <mgorman@...e.de>
Cc: lkml <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [patch] futex: Fix v4.6+ ltp futex_wait04 regression
Hi Mel (who is out of the office today),

I initially reported this on the rt-users list, thinking at the time
that it was only showing up in -rt kernels, but that turned out not to
be the case; it just requires an enterprise config, for some reason mm
folks will likely recognize instantly. I just happen to use the same
when building devel -rt trees, to at least get some build-warning
coverage.
Below is a stab at fixing the thing up while you're off doing whatever
an Irishman does on a national holiday (hm;). If it's ok as is, fine,
I saved you a couple minutes. If not, oh well, consider it diagnosis.

Commit 65d8fc77 ("futex: Remove requirement for lock_page() in
get_futex_key()") introduced a regression in the ltp futex_wait04 test
when an enterprise-size config is used. Per trace_printk(), after we
assign page = compound_head(page), we are subsequently operating on a
different page than the one we pinned in memory, thus mucking up the
reference counts.
Fixes: 65d8fc77 ("futex: Remove requirement for lock_page() in get_futex_key()")
Signed-off-by: Mike Galbraith <umgwanakikbuti@...il.com>
---
kernel/futex.c | 26 ++++++++++++++++++++------
1 file changed, 20 insertions(+), 6 deletions(-)
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -469,7 +469,7 @@ get_futex_key(u32 __user *uaddr, int fsh
{
unsigned long address = (unsigned long)uaddr;
struct mm_struct *mm = current->mm;
- struct page *page;
+ struct page *page, *pinned = NULL;
struct address_space *mapping;
int err, ro = 0;
@@ -530,8 +530,18 @@ get_futex_key(u32 __user *uaddr, int fsh
* considered here and page lock forces unnecessarily serialization
* From this point on, mapping will be re-verified if necessary and
* page lock will be acquired only if it is unavoidable
- */
+ *
+ * If we're dealing with a compound page, save our reference to the
+ * page we locked in memory above, and take a new reference on the
+ * page head, dropping the previously pinned page reference on retry.
+ */
+ if (unlikely(pinned && page != pinned))
+ put_page(pinned);
+ pinned = page;
page = compound_head(page);
+ if (unlikely(pinned != page))
+ get_page(page);
+
mapping = READ_ONCE(page->mapping);
/*
@@ -560,12 +570,14 @@ get_futex_key(u32 __user *uaddr, int fsh
lock_page(page);
shmem_swizzled = PageSwapCache(page) || page->mapping;
unlock_page(page);
- put_page(page);
- if (shmem_swizzled)
+ if (shmem_swizzled) {
+ put_page(page);
goto again;
+ }
- return -EFAULT;
+ err = -EFAULT;
+ goto out;
}
/*
@@ -654,12 +666,14 @@ get_futex_key(u32 __user *uaddr, int fsh
key->both.offset |= FUT_OFF_INODE; /* inode-based key */
key->shared.inode = inode;
- key->shared.pgoff = basepage_index(page);
+ key->shared.pgoff = basepage_index(pinned);
rcu_read_unlock();
}
out:
put_page(page);
+ if (unlikely(pinned != page))
+ put_page(pinned);
return err;
}