Message-Id: <20071103.162337.83099185.davem@davemloft.net>
Date: Sat, 03 Nov 2007 16:23:37 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: ak@...e.de
Cc: dada1@...mosbay.com, netdev@...r.kernel.org, acme@...hat.com
Subject: Re: [PATCH] INET : removes per bucket rwlock in tcp/dccp ehash table
From: Andi Kleen <ak@...e.de>
Date: Sun, 4 Nov 2007 00:18:14 +0100
> On Thursday 01 November 2007 11:16:20 Eric Dumazet wrote:
>
> Some quick comments:
>
> > +#if defined(CONFIG_SMP) || defined(CONFIG_PROVE_LOCKING)
> > +/*
> > + * Instead of using one rwlock for each inet_ehash_bucket, we use a table of locks
> > + * The size of this table is a power of two and depends on the number of CPUS.
> > + */
>
> This shouldn't be hard coded based on NR_CPUS, but be done at runtime
> based on num_possible_cpus(). This is better for kernels with a large
> NR_CPUS, but which typically run on much smaller systems (like
> distribution kernels).
I think this is a good idea. Eric, could you make this change?
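
Something like the following sketch would do it (the sizing factor and
the ehash_locks_alloc() name are made up for illustration, not taken
from the patch):

#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/log2.h>
#include <linux/cpumask.h>

static rwlock_t *ehash_locks;		/* one lock per table slot */
static unsigned int ehash_locks_mask;

/* Size the lock table from the CPUs actually possible at boot rather
 * than the compile-time NR_CPUS, so a distro kernel built with a huge
 * NR_CPUS doesn't waste memory on a small machine. */
static int ehash_locks_alloc(void)
{
	unsigned int i, nr = roundup_pow_of_two(num_possible_cpus() * 4);

	ehash_locks = kmalloc(nr * sizeof(*ehash_locks), GFP_KERNEL);
	if (!ehash_locks)
		return -ENOMEM;
	ehash_locks_mask = nr - 1;
	for (i = 0; i < nr; i++)
		rwlock_init(&ehash_locks[i]);
	return 0;
}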
> Also the EHASH_LOCK_SZ == 0 special case is a little strange. Why did
> you add that?
He explained this in another reply: the special case is there because
ifdefs are ugly.
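
Presumably the trick is that the size is really used as a mask, so zero
just means a single lock and every call site stays ifdef-free; a sketch
of that reading (hypothetical helper name, reusing the names above):

/* With a mask of 0 every hash maps to slot 0, i.e. one lock on UP,
 * and the SMP and UP cases share the same call-site expression. */
static inline rwlock_t *ehash_lockp(unsigned int hash)
{
	return &ehash_locks[hash & ehash_locks_mask];
}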
> And as an unrelated note, have you tried converting the rwlocks
> into normal spinlocks? Spinlocks should be somewhat cheaper
> because they have less cache protocol overhead, and with
> the huge thash tables in Linux the chain walks should be short
> anyway, so not doing this in parallel is probably not a big issue.
> At some point I also had a crazy idea of using a special locking
> scheme that special-cases the common case where a hash chain
> has only one member and doesn't take a lock for that at all.
I agree.
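
For illustration, here is roughly what the bucket lookup looks like
after such a conversion (ebucket and its fields are hypothetical, not
the actual inet_ehash_bucket layout):

#include <linux/types.h>
#include <linux/spinlock.h>
#include <net/sock.h>

/* Hypothetical bucket: the per-slot rwlock becomes a spinlock. */
struct ebucket {
	spinlock_t	  lock;		/* was: rwlock_t */
	struct hlist_head chain;
};

/* The chain walk is short when the hash table is huge, so making the
 * rare concurrent readers serialize costs little, while the plain
 * spinlock avoids the rwlock's extra cache-protocol traffic. */
static struct sock *ebucket_lookup(struct ebucket *b,
				   bool (*match)(const struct sock *))
{
	struct sock *sk, *ret = NULL;
	struct hlist_node *node;

	spin_lock(&b->lock);			/* was: read_lock() */
	sk_for_each(sk, node, &b->chain)
		if (match(sk)) {
			sock_hold(sk);		/* ref for the caller */
			ret = sk;
			break;
		}
	spin_unlock(&b->lock);			/* was: read_unlock() */
	return ret;
}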
There was movement at one point to get rid of all rwlocks in the
kernel; I personally think they are pointless. Any use that makes
"sense" is a case where the code should be rewritten to decrease the
lock hold time or converted to RCU.
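
An RCU conversion of the same lookup would look roughly like this
(reusing the hypothetical ebucket above; the hard part, not shown, is
that freeing a sock must wait for a grace period, e.g. via call_rcu):

#include <linux/rcupdate.h>
#include <net/sock.h>

/* Readers take no lock at all; writers still serialize among
 * themselves with a spinlock and use the _rcu hlist ops. */
static struct sock *ebucket_lookup_rcu(struct ebucket *b,
				       bool (*match)(const struct sock *))
{
	struct sock *sk, *ret = NULL;
	struct hlist_node *node;

	rcu_read_lock();
	hlist_for_each_entry_rcu(sk, node, &b->chain, sk_node)
		if (match(sk)) {
			/* A real version must take a reference that can
			 * fail (atomic_inc_not_zero on sk_refcnt) and
			 * then re-check the match, since the sock can
			 * go away or be recycled under us. */
			ret = sk;
			break;
		}
	rcu_read_unlock();
	return ret;
}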