Message-Id: <20160120215932.651016094@linuxfoundation.org>
Date: Wed, 20 Jan 2016 15:10:46 -0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Colin Ian King <colin.king@...onical.com>,
Herbert Xu <herbert@...dor.apana.org.au>,
"David S. Miller" <davem@...emloft.net>
Subject: [PATCH 4.1 39/43] rhashtable: Fix walker list corruption

4.1-stable review patch. If anyone has any objections, please let me know.

------------------

From: Herbert Xu <herbert@...dor.apana.org.au>

[ Upstream commit c6ff5268293ef98e48a99597e765ffc417e39fa5 ]

The commit ba7c95ea3870fe7b847466d39a049ab6f156aa2c ("rhashtable:
Fix sleeping inside RCU critical section in walk_stop") introduced
a new spinlock for the walker list. However, it did not convert
all existing users of the list over to the new spin lock. Some
continued to use the old mutex for this purpose. This obviously
led to corruption of the list.
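
As an aside for reviewers: the failure mode is the classic one of
guarding a single list with two different locks. A minimal, hypothetical
userspace sketch (pthreads; none of these names come from the rhashtable
code) shows why that provides no mutual exclusion at all:

/* two_locks.c: build with gcc -pthread two_locks.c */
#include <pthread.h>
#include <stdio.h>

struct node { struct node *next; };

static struct node *head;		/* shared list head */
static pthread_mutex_t old_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t new_lock  = PTHREAD_MUTEX_INITIALIZER;

/* Converted user: touches the list under the new lock only. */
static void *pusher(void *arg)
{
	struct node *n = arg;

	pthread_mutex_lock(&new_lock);
	n->next = head;			/* read-modify-write of head ... */
	head = n;			/* ... races with popper() below */
	pthread_mutex_unlock(&new_lock);
	return NULL;
}

/* Unconverted user: still takes the old mutex, which excludes
 * nobody holding the new lock, so both threads can mutate the
 * list at the same time. */
static void *popper(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&old_mutex);
	if (head)
		head = head->next;
	pthread_mutex_unlock(&old_mutex);
	return NULL;
}

int main(void)
{
	struct node n = { NULL };
	pthread_t a, b;

	pthread_create(&a, NULL, pusher, &n);
	pthread_create(&b, NULL, popper, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("head = %p\n", (void *)head);	/* outcome is timing-dependent */
	return 0;
}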

The fix is to use the spin lock everywhere we touch the list.
This also allows us to do rcu_read_lock before we take the lock in
rhashtable_walk_start. With the old mutex this would've deadlocked
but it's safe with the new spin lock.
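
To spell out the ordering the patch arrives at in rhashtable_walk_start()
(a kernel-context fragment mirroring the third hunk below, not a
standalone program): sleeping is forbidden inside an RCU read-side
critical section, so the old mutex could never be taken after
rcu_read_lock(), but a spin lock never sleeps (outside PREEMPT_RT) and
can be:

	rcu_read_lock();		/* enter RCU read-side section first */

	spin_lock(&ht->lock);		/* non-sleeping, so legal under RCU */
	if (iter->walker->tbl)
		list_del(&iter->walker->list);
	spin_unlock(&ht->lock);
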
Fixes: ba7c95ea3870 ("rhashtable: Fix sleeping inside RCU...")
Reported-by: Colin Ian King <colin.king@...onical.com>
Signed-off-by: Herbert Xu <herbert@...dor.apana.org.au>
Signed-off-by: David S. Miller <davem@...emloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>

---
 lib/rhashtable.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -506,10 +506,11 @@ int rhashtable_walk_init(struct rhashtab
 	if (!iter->walker)
 		return -ENOMEM;
 
-	mutex_lock(&ht->mutex);
-	iter->walker->tbl = rht_dereference(ht->tbl, ht);
+	spin_lock(&ht->lock);
+	iter->walker->tbl =
+		rcu_dereference_protected(ht->tbl, lockdep_is_held(&ht->lock));
 	list_add(&iter->walker->list, &iter->walker->tbl->walkers);
-	mutex_unlock(&ht->mutex);
+	spin_unlock(&ht->lock);
 
 	return 0;
 }
@@ -523,10 +524,10 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_init);
  */
 void rhashtable_walk_exit(struct rhashtable_iter *iter)
 {
-	mutex_lock(&iter->ht->mutex);
+	spin_lock(&iter->ht->lock);
 	if (iter->walker->tbl)
 		list_del(&iter->walker->list);
-	mutex_unlock(&iter->ht->mutex);
+	spin_unlock(&iter->ht->lock);
 	kfree(iter->walker);
 }
 EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
@@ -550,14 +551,12 @@ int rhashtable_walk_start(struct rhashta
 {
 	struct rhashtable *ht = iter->ht;
 
-	mutex_lock(&ht->mutex);
+	rcu_read_lock();
 
+	spin_lock(&ht->lock);
 	if (iter->walker->tbl)
 		list_del(&iter->walker->list);
-
-	rcu_read_lock();
-
-	mutex_unlock(&ht->mutex);
+	spin_unlock(&ht->lock);
 
 	if (!iter->walker->tbl) {
 		iter->walker->tbl = rht_dereference_rcu(ht->tbl, ht);
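
For context, this is roughly how a walker is driven against the 4.1 API
that the patch touches; dump_table() and my_ht are made-up names and the
error handling is abbreviated, so treat it as a sketch rather than a
reference user:

#include <linux/err.h>
#include <linux/rhashtable.h>

static void dump_table(struct rhashtable *my_ht)
{
	struct rhashtable_iter iter;
	void *obj;
	int err;

	err = rhashtable_walk_init(my_ht, &iter);	/* links walker under ht->lock */
	if (err)
		return;

	err = rhashtable_walk_start(&iter);	/* rcu_read_lock(), then ht->lock */
	if (err && err != -EAGAIN)		/* -EAGAIN only means a resize happened */
		goto out;

	while ((obj = rhashtable_walk_next(&iter))) {
		if (IS_ERR(obj)) {
			if (PTR_ERR(obj) == -EAGAIN)
				continue;	/* resize mid-walk; keep going */
			break;
		}
		/* obj may be used here under RCU protection */
	}

	rhashtable_walk_stop(&iter);		/* drops rcu_read_lock() */
out:
	rhashtable_walk_exit(&iter);		/* unlinks walker under ht->lock */
}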