Message-Id: <E1Hpdhs-00069a-00@gondolin.me.apana.org.au>
Date: Sun, 20 May 2007 15:11:48 +1000
From: Herbert Xu <herbert@...dor.apana.org.au>
To: davem@...emloft.net (David Miller)
Cc: djohnson+linux-kernel@...starentnetworks.com,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: [PATCH] improved locking performance in rt_run_flush()

David Miller <davem@...emloft.net> wrote:
> From: Dave Johnson <djohnson+linux-kernel@...starentnetworks.com>
>>
>> The below patch changes rt_run_flush() to only take each spinlock
>> protecting the rt_hash_table once instead of taking a spinlock for
>> every hash table bucket (and ending up taking the same small set
>> of locks over and over).
...
> I'm not ignoring it; I'm just trying to brainstorm whether there
> is a better way to resolve this inefficiency. :-)

The main problem I see with this is having to walk and free each
chain with the lock held. We could avoid this if we had a pointer
in struct rtable to chain them up for freeing later.

I just checked, and struct rtable is 236 bytes long on 32-bit, but
the slab cache pads it to 256 bytes, so we've got some free space.
I suspect 64-bit should be similar.
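
Roughly along these lines, as a userspace-style sketch only; all the
names here (rt_entry, rt_bucket, free_next, RT_HASH_SIZE) are made up
for illustration and are not the real struct rtable / rt_hash_table
code:

#include <stdlib.h>
#include <pthread.h>

struct rt_entry {
	struct rt_entry	*next;		/* existing hash-chain pointer */
	struct rt_entry	*free_next;	/* new pointer: deferred-free list */
	/* ... route data ... */
};

struct rt_bucket {
	pthread_spinlock_t	lock;
	struct rt_entry		*chain;
};

#define RT_HASH_SIZE 1024
static struct rt_bucket rt_hash_table[RT_HASH_SIZE];

static void rt_hash_init(void)
{
	unsigned int i;

	for (i = 0; i < RT_HASH_SIZE; i++) {
		pthread_spin_init(&rt_hash_table[i].lock,
				  PTHREAD_PROCESS_PRIVATE);
		rt_hash_table[i].chain = NULL;
	}
}

/*
 * Flush the whole table.  Each bucket lock is held only long enough
 * to unhook the chain head; the detached entries are then threaded
 * onto a private list through the new free_next pointer and freed at
 * the end, with no lock held at all.
 */
static void rt_run_flush_sketch(void)
{
	struct rt_entry *free_list = NULL;
	struct rt_entry *e, *next;
	unsigned int i;

	for (i = 0; i < RT_HASH_SIZE; i++) {
		pthread_spin_lock(&rt_hash_table[i].lock);
		e = rt_hash_table[i].chain;
		rt_hash_table[i].chain = NULL;
		pthread_spin_unlock(&rt_hash_table[i].lock);

		/* Outside the lock: chain the entries up for later. */
		for (; e; e = e->next) {
			e->free_next = free_list;
			free_list = e;
		}
	}

	/* Free everything with no spinlock held. */
	for (e = free_list; e; e = next) {
		next = e->free_next;
		free(e);
	}
}

So the per-bucket lock is only held for the two pointer assignments
that unhook the chain, and all the walking and freeing moves onto the
private list, which is what the spare space in struct rtable would be
used for.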
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@...dor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt