Message-ID: <1427171051.25985.94.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Mon, 23 Mar 2015 21:24:11 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: David Laight <David.Laight@...LAB.COM>,
"David S. Miller" <davem@...emloft.net>,
Thomas Graf <tgraf@...g.ch>, Patrick McHardy <kaber@...sh.net>,
Josh Triplett <josh@...htriplett.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [v1 PATCH 9/10] rhashtable: Allow GFP_ATOMIC bucket table
allocation
On Tue, 2015-03-24 at 14:09 +1100, Herbert Xu wrote:
> On Mon, Mar 23, 2015 at 12:53:11PM +0000, David Laight wrote:
> >
> > Given the lock is only used for insert and delete, I'm also not at
> > all clear why you allocate 128 locks per cpu for very large tables.
> > With the locks in their own array I don't think there can be 'false
> > sharing'; the worst that can happen is two cpus spinning on locks
> > in the same cache line.
>
> Personally I'm totally against bucket locks. If you have a
> scalability problem you really need to solve it at a higher
> level, e.g., with multiqueue transmission in networking. Bucket
> locks are simply kicking the can down the road; it'll come back
> to bite you sooner or later in terms of scalability.
Well, keep in mind a lock can be very big with LOCKDEP.
One lock per bucket is totally overkill.
A hash array of locks is a good compromise.
128 locks per cpu is also a good compromise, because one cache line can
hold 16 locks.
The number of locks has nothing to do with the number of buckets,
unless you have unlimited memory, can afford one lock per bucket,
and don't care about memory thrashing when dumping the whole hash table.
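
To make the idea concrete, here is a minimal userspace C sketch of a
hashed lock array (my own illustration: it uses pthread spinlocks
rather than the kernel's spinlock_t, and leaves out the per-cpu
scaling done by the real rhashtable code). The lock array has a small
fixed size, and a bucket's hash is simply masked to pick its lock, so
the number of locks never grows with the number of buckets:

#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stddef.h>

#define LOCK_COUNT 128u              /* fixed, small, power of two */
#define LOCK_MASK  (LOCK_COUNT - 1u)

struct striped_table {
	pthread_spinlock_t locks[LOCK_COUNT]; /* shared by all buckets */
	struct bucket {
		void *head;
	} *buckets;
	size_t nbuckets;             /* may be millions; lock count stays 128 */
};

/* Many buckets share one lock; contention and cache footprint are
 * bounded by LOCK_COUNT, not by the size of the table. */
static pthread_spinlock_t *bucket_lock(struct striped_table *t,
				       unsigned int hash)
{
	return &t->locks[hash & LOCK_MASK];
}

static int striped_table_init(struct striped_table *t)
{
	for (unsigned int i = 0; i < LOCK_COUNT; i++)
		if (pthread_spin_init(&t->locks[i], PTHREAD_PROCESS_PRIVATE))
			return -1;
	return 0;
}

In this sketch, growing or shrinking the bucket array never touches
the lock array; only the hash-to-bucket mapping changes.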
I thought this kind of solution was well understood among network
developers.