Message-Id: <20190206090721.8001-1-johannes@sipsolutions.net>
Date: Wed, 6 Feb 2019 10:07:21 +0100
From: Johannes Berg <johannes@...solutions.net>
To: linux-wireless@...r.kernel.org, netdev@...r.kernel.org
Cc: Jouni Malinen <j@...fi>, Thomas Graf <tgraf@...g.ch>,
Herbert Xu <herbert@...dor.apana.org.au>,
Johannes Berg <johannes.berg@...el.com>
Subject: [PATCH v2] rhashtable: make walk safe from softirq context

From: Johannes Berg <johannes.berg@...el.com>

When an rhashtable walk is done from softirq context, we rightfully
get a lockdep complaint saying that we could get a softirq in the
middle of a rehash and thus deadlock on &ht->lock. This happened,
for example, in mac80211, which does such a walk from softirq
context.
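
To make the deadlock concrete, here is a rough sketch of the bad
interleaving (the softirq-context caller is hypothetical, modelled
on the mac80211 case; rhashtable_walk_enter() takes &ht->lock
internally):

	/* CPU 0, process context: rehash in progress */
	spin_lock(&ht->lock);		/* softirqs still enabled */

	/* ... a softirq fires on the same CPU ... */

	/* CPU 0, softirq context, e.g. an RX handler */
	rhashtable_walk_enter(ht, &iter);
		spin_lock(&ht->lock);	/* spins forever: the	  */
					/* interrupted owner	  */
					/* never runs to unlock	  */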

Fix this by using spin_lock_bh() wherever we take &ht->lock.

Initially, I thought it would be sufficient to do this only in the
rehash path (rhashtable_rehash_table()), but I changed my mind:

 * the caller doesn't really need softirqs disabled across all of
   the rhashtable_walk_* functions; only the parts that actually
   run under the lock need that (see the walk sketch below)

 * maybe more importantly, it would still lead to massive lockdep
   complaints - false positives, but hard to fix - because lockdep
   doesn't distinguish between different ht->lock instances. One
   user of the code doing a walk without any softirq protection
   (which is fine when it only ever walks in process context),
   combined with another user like wifi, where we noticed this
   problem, would still make lockdep complain.
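
For reference, a walk from softirq context looks roughly like this
(example_tasklet and struct example_obj are made up for illustration;
the rhashtable_walk_* calls are the existing API):

	/* hypothetical softirq-context user, modelled on mac80211 */
	static void example_tasklet(unsigned long data)
	{
		struct rhashtable *ht = (struct rhashtable *)data;
		struct rhashtable_iter iter;
		struct example_obj *obj;

		rhashtable_walk_enter(ht, &iter);
		rhashtable_walk_start(&iter);

		while ((obj = rhashtable_walk_next(&iter)) != NULL) {
			if (IS_ERR(obj))
				continue; /* -EAGAIN: a resize occurred */
			/* inspect obj; we hold rcu_read_lock() here */
		}

		rhashtable_walk_stop(&iter);
		rhashtable_walk_exit(&iter);
	}

With this patch, enter/exit and start/stop all take &ht->lock with
spin_lock_bh(), so such a walk no longer deadlocks against a rehash
running in process context on the same CPU.
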
Cc: stable@...r.kernel.org
Reported-by: Jouni Malinen <j@...fi>
Signed-off-by: Johannes Berg <johannes.berg@...el.com>
---
lib/rhashtable.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 852ffa5160f1..30d14f8d9985 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -327,10 +327,10 @@ static int rhashtable_rehash_table(struct rhashtable *ht)
/* Publish the new table pointer. */
rcu_assign_pointer(ht->tbl, new_tbl);
- spin_lock(&ht->lock);
+ spin_lock_bh(&ht->lock);
list_for_each_entry(walker, &old_tbl->walkers, list)
walker->tbl = NULL;
- spin_unlock(&ht->lock);
+ spin_unlock_bh(&ht->lock);
/* Wait for readers. All new readers will see the new
* table, and thus no references to the old table will
@@ -670,11 +670,11 @@ void rhashtable_walk_enter(struct rhashtable *ht, struct rhashtable_iter *iter)
iter->skip = 0;
iter->end_of_table = 0;
- spin_lock(&ht->lock);
+ spin_lock_bh(&ht->lock);
iter->walker.tbl =
rcu_dereference_protected(ht->tbl, lockdep_is_held(&ht->lock));
list_add(&iter->walker.list, &iter->walker.tbl->walkers);
- spin_unlock(&ht->lock);
+ spin_unlock_bh(&ht->lock);
}
EXPORT_SYMBOL_GPL(rhashtable_walk_enter);
@@ -686,10 +686,10 @@ EXPORT_SYMBOL_GPL(rhashtable_walk_enter);
*/
void rhashtable_walk_exit(struct rhashtable_iter *iter)
{
- spin_lock(&iter->ht->lock);
+ spin_lock_bh(&iter->ht->lock);
if (iter->walker.tbl)
list_del(&iter->walker.list);
- spin_unlock(&iter->ht->lock);
+ spin_unlock_bh(&iter->ht->lock);
}
EXPORT_SYMBOL_GPL(rhashtable_walk_exit);
@@ -719,10 +719,10 @@ int rhashtable_walk_start_check(struct rhashtable_iter *iter)
rcu_read_lock();
- spin_lock(&ht->lock);
+ spin_lock_bh(&ht->lock);
if (iter->walker.tbl)
list_del(&iter->walker.list);
- spin_unlock(&ht->lock);
+ spin_unlock_bh(&ht->lock);
if (iter->end_of_table)
return 0;
@@ -938,12 +938,12 @@ void rhashtable_walk_stop(struct rhashtable_iter *iter)
ht = iter->ht;
- spin_lock(&ht->lock);
+ spin_lock_bh(&ht->lock);
if (tbl->rehash < tbl->size)
list_add(&iter->walker.list, &tbl->walkers);
else
iter->walker.tbl = NULL;
- spin_unlock(&ht->lock);
+ spin_unlock_bh(&ht->lock);
out:
rcu_read_unlock();
--
2.17.2