Message-Id: <20180528100325.748441454@linuxfoundation.org>
Date: Mon, 28 May 2018 11:58:43 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
Al Viro <viro@...iv.linux.org.uk>,
Naresh Madhusudana <naresh.madhusudana@....com>,
Matthew Wilcox <mawilcox@...rosoft.com>,
Will Deacon <will.deacon@....com>,
Sasha Levin <alexander.levin@...rosoft.com>
Subject: [PATCH 4.14 138/496] fs: dcache: Avoid livelock between d_alloc_parallel and __d_add
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Will Deacon <will.deacon@....com>
[ Upstream commit 015555fd4d2930bc0c86952c46ad88b3392f66e4 ]
If d_alloc_parallel runs concurrently with __d_add, it is possible for
d_alloc_parallel to continuously retry whilst i_dir_seq has been
incremented to an odd value by __d_add:
CPU0:
__d_add
        n = start_dir_add(dir);
        cmpxchg(&dir->i_dir_seq, n, n + 1) == n

CPU1:
d_alloc_parallel
retry:
        seq = smp_load_acquire(&parent->d_inode->i_dir_seq) & ~1;
        hlist_bl_lock(b);
                bit_spin_lock(0, (unsigned long *)b); // Always succeeds

CPU0:
__d_lookup_done(dentry)
        hlist_bl_lock
                bit_spin_lock(0, (unsigned long *)b); // Never succeeds

CPU1:
        if (unlikely(parent->d_inode->i_dir_seq != seq)) {
                hlist_bl_unlock(b);
                goto retry;
        }
Since the simple bit_spin_lock used to implement hlist_bl_lock does not
provide any fairness guarantees, CPU1 can starve CPU0 of the lock and
prevent it from ever reaching end_dir_add(dir); CPU1 therefore cannot
exit its retry loop, because the sequence number always has the bottom
bit set.
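For reference, the writer-side helpers that drive i_dir_seq look roughly
like this in fs/dcache.c of this era (a paraphrased sketch, not copied
verbatim from the 4.14 tree): start_dir_add() is the only step that makes
the counter odd, and end_dir_add() is the only step that makes it even
again, which is exactly the step CPU0 never reaches while it is starved
of the bucket lock:

        static inline unsigned start_dir_add(struct inode *dir)
        {
                for (;;) {
                        /* only an even (quiescent) value may be bumped to odd */
                        unsigned n = dir->i_dir_seq;
                        if (!(n & 1) && cmpxchg(&dir->i_dir_seq, n, n + 1) == n)
                                return n;
                        cpu_relax();
                }
        }

        static inline void end_dir_add(struct inode *dir, unsigned n)
        {
                /* publish the update: n was even, n + 2 is the next even value */
                smp_store_release(&dir->i_dir_seq, n + 2);
        }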
This patch resolves the livelock by not taking hlist_bl_lock in
d_alloc_parallel if the sequence counter is odd, since any subsequent
masked comparison with i_dir_seq will fail anyway.
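As a standalone illustration of the fixed pattern (a userspace sketch
with hypothetical names, using C11 atomics and a pthread mutex in place
of i_dir_seq and hlist_bl_lock; not kernel code), the reader now backs
off before taking the bucket lock whenever it samples an odd sequence
value, so it can no longer starve the writer:

        /* build: cc -std=c11 -pthread sketch.c */
        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_uint dir_seq;             /* stands in for i_dir_seq */
        static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;

        static void *writer(void *arg)          /* plays the role of __d_add */
        {
                (void)arg;
                for (int i = 0; i < 100000; i++) {
                        unsigned n = atomic_fetch_add(&dir_seq, 1); /* even -> odd */
                        pthread_mutex_lock(&bucket_lock);   /* __d_lookup_done */
                        pthread_mutex_unlock(&bucket_lock);
                        /* next even value, as end_dir_add() would publish */
                        atomic_store_explicit(&dir_seq, n + 2, memory_order_release);
                }
                return NULL;
        }

        static void *reader(void *arg)          /* plays the role of d_alloc_parallel */
        {
                (void)arg;
                for (int i = 0; i < 100000; i++) {
                        unsigned seq;
        retry:
                        seq = atomic_load_explicit(&dir_seq, memory_order_acquire);
                        if (seq & 1)            /* the fix: retry before locking */
                                goto retry;
                        pthread_mutex_lock(&bucket_lock);
                        if (atomic_load(&dir_seq) != seq) {
                                pthread_mutex_unlock(&bucket_lock);
                                goto retry;
                        }
                        /* ... lookup/insert under the lock ... */
                        pthread_mutex_unlock(&bucket_lock);
                }
                return NULL;
        }

        int main(void)
        {
                pthread_t r, w;

                pthread_create(&w, NULL, writer, NULL);
                pthread_create(&r, NULL, reader, NULL);
                pthread_join(w, NULL);
                pthread_join(r, NULL);
                puts("done");
                return 0;
        }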
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Al Viro <viro@...iv.linux.org.uk>
Reported-by: Naresh Madhusudana <naresh.madhusudana@....com>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Reviewed-by: Matthew Wilcox <mawilcox@...rosoft.com>
Signed-off-by: Will Deacon <will.deacon@....com>
Signed-off-by: Al Viro <viro@...iv.linux.org.uk>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
fs/dcache.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2482,7 +2482,7 @@ struct dentry *d_alloc_parallel(struct d
 
 retry:
 	rcu_read_lock();
-	seq = smp_load_acquire(&parent->d_inode->i_dir_seq) & ~1;
+	seq = smp_load_acquire(&parent->d_inode->i_dir_seq);
 	r_seq = read_seqbegin(&rename_lock);
 	dentry = __d_lookup_rcu(parent, name, &d_seq);
 	if (unlikely(dentry)) {
@@ -2503,6 +2503,12 @@ retry:
 		rcu_read_unlock();
 		goto retry;
 	}
+
+	if (unlikely(seq & 1)) {
+		rcu_read_unlock();
+		goto retry;
+	}
+
 	hlist_bl_lock(b);
 	if (unlikely(parent->d_inode->i_dir_seq != seq)) {
 		hlist_bl_unlock(b);