Message-ID: <152782824961.30340.16484931507108689498.stgit@noble>
Date: Fri, 01 Jun 2018 14:44:09 +1000
From: NeilBrown <neilb@...e.com>
To: Thomas Graf <tgraf@...g.ch>,
Herbert Xu <herbert@...dor.apana.org.au>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH 09/18] rhashtable: use cmpxchg() in nested_table_alloc()
nested_table_alloc() relies on the fact that there is
at most one spinlock allocated for every slot in the top
level nested table, so it is not possible for two threads
to try to allocate the same table at the same time.

A future patch will change the locking and invalidate this
assumption, so change the code to protect against a race using
cmpxchg(): if a thread loses the race, it frees the table that it
allocated and returns the table installed by the winner.
Signed-off-by: NeilBrown <neilb@...e.com>
---
lib/rhashtable.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 1737fbd049da..86c801d04d4a 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -132,9 +132,11 @@ static union nested_table *nested_table_alloc(struct rhashtable *ht,
 			INIT_RHT_NULLS_HEAD(ntbl[i].bucket);
 	}
 
-	rcu_assign_pointer(*prev, ntbl);
-
-	return ntbl;
+	if (cmpxchg(prev, NULL, ntbl) == NULL)
+		return ntbl;
+	/* Raced with another thread. */
+	kfree(ntbl);
+	return rcu_dereference(*prev);
 }
 
 static struct bucket_table *nested_bucket_table_alloc(struct rhashtable *ht,
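
[Editor's note: for readers less familiar with this pattern, below is a
minimal userspace sketch of the same allocate-then-publish scheme, using
C11 atomics in place of the kernel's cmpxchg() and RCU primitives.  The
struct nested_tbl type, the slot variable and slot_alloc() are
hypothetical stand-ins for illustration, not the rhashtable code itself.]

	#include <stdatomic.h>
	#include <stdlib.h>

	struct nested_tbl {
		void *slots[64];	/* placeholder payload */
	};

	/* Shared slot that several threads may try to populate concurrently. */
	static _Atomic(struct nested_tbl *) slot;

	static struct nested_tbl *slot_alloc(void)
	{
		struct nested_tbl *tbl = atomic_load(&slot);
		struct nested_tbl *expected = NULL;

		if (tbl)		/* someone already installed a table */
			return tbl;

		tbl = calloc(1, sizeof(*tbl));
		if (!tbl)
			return NULL;

		/* Publish only if the slot is still empty. */
		if (atomic_compare_exchange_strong(&slot, &expected, tbl))
			return tbl;

		/* Lost the race: discard our copy, use the winner's table,
		 * which the failed CAS left in 'expected'. */
		free(tbl);
		return expected;
	}

	int main(void)
	{
		struct nested_tbl *t = slot_alloc();
		return t ? 0 : 1;
	}

As in the hunk above, the losing thread frees its own allocation and
continues with the table installed by the winner, so each slot is only
ever published once.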