Message-ID: <063D6719AE5E284EB5DD2968C1650D6D1CAC6593@AcuExch.aculab.com>
Date: Tue, 13 Jan 2015 09:49:19 +0000
From: David Laight <David.Laight@...LAB.COM>
To: 'Thomas Graf' <tgraf@...g.ch>,
"davem@...emloft.net" <davem@...emloft.net>,
Fengguang Wu <fengguang.wu@...el.com>
CC: LKP <lkp@...org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"netfilter-devel@...r.kernel.org" <netfilter-devel@...r.kernel.org>,
"coreteam@...filter.org" <coreteam@...filter.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: RE: [PATCH net-next] rhashtable: Lower/upper bucket may map to same
lock while shrinking
From: Thomas Graf
> Each per bucket lock covers a configurable number of buckets. While
> shrinking, two buckets in the old table contain entries for a single
> bucket in the new table. We need to lock down both while linking.
> Check if they are protected by different locks to avoid a recursive
> lock.
A thought: could the shrunk table use the same locks as the lower half
of the old table?
I also wonder whether shrinking hash tables is ever actually worth
the effort. Most likely they'll need to grow again very quickly.
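(As an aside, a minimal userspace sketch of the aliasing the patch guards
against, assuming a "hash & (nlocks - 1)" style mapping from buckets to a
smaller lock array; the names below are made up for illustration and are
not the actual rhashtable helpers:)

#include <stdio.h>

/* Hypothetical mapping: bucket hash -> index into a lock array of
 * 'nlocks' entries, nlocks being a power of two.
 */
static unsigned int lock_index(unsigned int hash, unsigned int nlocks)
{
	return hash & (nlocks - 1);
}

int main(void)
{
	unsigned int new_size = 64;	/* size of the shrunk table */
	unsigned int nlocks = 32;	/* locks covering the old table */
	unsigned int h;

	/* While shrinking, old buckets h and h + new_size both feed new
	 * bucket h.  When new_size is a multiple of nlocks, the two old
	 * buckets always map to the same lock, so taking both without a
	 * pointer-equality check would be a recursive lock.
	 */
	for (h = 0; h < 4; h++)
		printf("old buckets %u and %u -> locks %u and %u\n",
		       h, h + new_size,
		       lock_index(h, nlocks),
		       lock_index(h + new_size, nlocks));
	return 0;
}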
> spin_lock_bh(old_bucket_lock1);
> - spin_lock_bh_nested(old_bucket_lock2, RHT_LOCK_NESTED);
> - spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED2);
> +
> + /* Depending on the lock per buckets mapping, the bucket in
> + * the lower and upper region may map to the same lock.
> + */
> + if (old_bucket_lock1 != old_bucket_lock2) {
> + spin_lock_bh_nested(old_bucket_lock2, RHT_LOCK_NESTED);
> + spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED2);
> + } else {
> + spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED);
> + }
Acquiring 3 locks of much the same type looks like a locking hierarchy
violation just waiting to happen.
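(For reference, a minimal userspace sketch of the pointer-equality guard in
the hunk above, with pthread mutexes standing in for the bucket spinlocks;
illustrative only, not the kernel code, and the nesting annotations are
omitted:)

#include <pthread.h>
#include <stdio.h>

/* Take both old-bucket locks plus the new-bucket lock, skipping the
 * second old lock when both old buckets share a single lock.
 */
static void lock_for_link(pthread_mutex_t *old1, pthread_mutex_t *old2,
			  pthread_mutex_t *new_lock)
{
	pthread_mutex_lock(old1);
	if (old1 != old2)
		pthread_mutex_lock(old2);
	pthread_mutex_lock(new_lock);
}

static void unlock_for_link(pthread_mutex_t *old1, pthread_mutex_t *old2,
			    pthread_mutex_t *new_lock)
{
	pthread_mutex_unlock(new_lock);
	if (old1 != old2)
		pthread_mutex_unlock(old2);
	pthread_mutex_unlock(old1);
}

int main(void)
{
	pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
	pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;
	pthread_mutex_t c = PTHREAD_MUTEX_INITIALIZER;

	/* Distinct old-bucket locks: all three are taken. */
	lock_for_link(&a, &b, &c);
	unlock_for_link(&a, &b, &c);

	/* Shared old-bucket lock: only two are taken, no recursion. */
	lock_for_link(&a, &a, &c);
	unlock_for_link(&a, &a, &c);

	puts("ok");
	return 0;
}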
David