Message-ID: <20080107225740.GA14758@one.firstfloor.org>
Date: Mon, 7 Jan 2008 23:57:41 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
Cc: Andi Kleen <andi@...stfloor.org>,
David Miller <davem@...emloft.net>, ebiederm@...ssion.com,
bcrl@...ck.org, linux-kernel@...r.kernel.org,
torvalds@...ux-foundation.org
Subject: Re: regression: sysctl_check changes in 2.6.24 are O(n) resulting in slow creation of 10000 network interfaces
On Mon, Jan 07, 2008 at 09:30:54PM +0000, Alan Cox wrote:
> > I think that would be a better option than to complicate sysctl.c
> > for this uncommon case.
>
> What is so complicated about hashing the entries if you are checking for
One thing I'm worried about is memory bloat (yes, I know worrying about
that isn't popular, but someone has to do it ;-)
You would need a hash table for each sysctl table. To handle 100k entries
sanely you would need larger hash tables with at least a few hundred
buckets each. And that for each subdirectory:
% find /proc/sys -type d | wc -l
64
Assuming e.g. a 128-entry hash table per directory (which is probably too
small for 100k entries), that would require 64 dirs * 128 buckets * 8 bytes
= 64k of memory. Not gigantic, but lots of small-fry bloat like that adds
up. And if you chose an actually realistic hash table size, it would get
even bigger.
Most likely you would need to implement a tree or a resizable hash table
to do this sanely, and then you quickly get into complicated territory.
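Even a minimal grow-on-load-factor resize (again just a sketch, building
on the code above) needs a full rehash pass plus an allocation that can
fail, and in the kernel you would also have to lock out concurrent
lookups during the rehash:

#include <stdlib.h>

/* Double the bucket array and rehash every entry into it. */
static int grow(struct entry ***bucketsp, unsigned int *nbucketsp)
{
	unsigned int old_n = *nbucketsp, new_n = old_n * 2, i;
	struct entry **new_b = calloc(new_n, sizeof(*new_b));

	if (!new_b)
		return -1;	/* every caller must handle this */
	for (i = 0; i < old_n; i++) {
		struct entry *e, *next;

		for (e = (*bucketsp)[i]; e; e = next) {
			unsigned int h = name_hash(e->name) % new_n;

			next = e->hash_next;
			e->hash_next = new_b[h];
			new_b[h] = e;
		}
	}
	free(*bucketsp);
	*bucketsp = new_b;
	*nbucketsp = new_n;
	return 0;
}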
> duplicates when debugging. You can set the hash function to "0" and the
My understanding was that the code is always on, not just for debugging.
-Andi