Message-ID: <153086175005.24852.16109495489833855391.stgit@noble>
Date: Fri, 06 Jul 2018 17:22:30 +1000
From: NeilBrown <neilb@...e.com>
To: Thomas Graf <tgraf@...g.ch>,
Herbert Xu <herbert@...dor.apana.org.au>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH 1/5] rhashtable: use cmpxchg() in nested_table_alloc()
nested_table_alloc() relies on the fact that there is
at most one spinlock allocated for every slot in the top
level nested table, so it is not possible for two threads
to try to allocate the same table at the same time.
A future patch will change the locking scheme and invalidate this
assumption.  So change the code to protect against a race using
cmpxchg(): if the cmpxchg() loses, free the table that was just
allocated and use the table installed by the winning thread.
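
For illustration only (this sketch is not part of the patch): the same
allocate-then-publish-with-cmpxchg pattern written as a stand-alone
user-space C program using C11 atomics.  The names here (struct table,
get_or_alloc) are hypothetical and merely stand in for the rhashtable
structures; the actual kernel change is in the diff below.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct table {
	int slots[16];
};

/*
 * Return the table published in *prev, allocating and publishing a
 * new one if the slot is still empty.  Only one thread can win the
 * compare-and-swap; a loser frees its own allocation and uses the
 * winner's table, mirroring what nested_table_alloc() does with
 * cmpxchg()/kfree() in the patch.
 */
static struct table *get_or_alloc(struct table *_Atomic *prev)
{
	struct table *expected = NULL;
	struct table *tbl = atomic_load(prev);

	if (tbl)
		return tbl;

	tbl = calloc(1, sizeof(*tbl));
	if (!tbl)
		return NULL;

	/* Publish our table only if the slot is still NULL. */
	if (atomic_compare_exchange_strong(prev, &expected, tbl))
		return tbl;

	/* Lost the race: drop ours, use the winner's table. */
	free(tbl);
	return expected;
}

int main(void)
{
	struct table *_Atomic slot = NULL;
	struct table *a = get_or_alloc(&slot);
	struct table *b = get_or_alloc(&slot);

	/* Both calls must see the same published table. */
	printf("same table: %s\n", a == b ? "yes" : "no");
	free(a);
	return 0;
}
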
Signed-off-by: NeilBrown <neilb@...e.com>
---
lib/rhashtable.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 9ddb7134285e..c6eaeaff16d1 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -132,9 +132,11 @@ static union nested_table *nested_table_alloc(struct rhashtable *ht,
 			INIT_RHT_NULLS_HEAD(ntbl[i].bucket);
 	}
 
-	rcu_assign_pointer(*prev, ntbl);
-
-	return ntbl;
+	if (cmpxchg(prev, NULL, ntbl) == NULL)
+		return ntbl;
+	/* Raced with another thread. */
+	kfree(ntbl);
+	return rcu_dereference(*prev);
 }
 
 static struct bucket_table *nested_bucket_table_alloc(struct rhashtable *ht,