Message-Id: <200710302142.05753.ak@suse.de>
Date: Tue, 30 Oct 2007 21:42:05 +0100
From: Andi Kleen <ak@...e.de>
To: David Miller <davem@...emloft.net>
Cc: jdelvare@...e.de, netdev@...r.kernel.org
Subject: Re: [PATCH] net: Saner thash_entries default with much memory
> Next, machines that service that many sockets typically have them
> mostly with full transmit queues talking to a very slow receiver at
> the other end.
Not sure -- there are likely use cases with lots of idle but connected
sockets.
Also the constraint here is not really how many sockets are served,
but how well the hash function manages to spread them in the table. I don't
have good data on that.
But still, (512 * 1024) sounds reasonable: e.g. in the lots-of-idle-sockets
case you're probably fine with the hash chains having more than one entry
in the worst case, because a small working set will fit in cache, and as
long as the chains do not end up very long, walking a short in-cache list
will still be fast enough.
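Rough back-of-the-envelope only (the socket counts below are made-up
examples, not measurements) to show what the load factor looks like with
512*1024 buckets and a hash that spreads entries uniformly:

    #include <stdio.h>

    /* Illustration only: average chain length for a few hypothetical
     * numbers of established sockets, assuming 512*1024 buckets and a
     * uniform hash.  Real chain lengths depend on how well the hash
     * actually spreads the connections.
     */
    int main(void)
    {
            const double buckets = 512.0 * 1024.0;
            const double sockets[] = { 100000.0, 1000000.0, 4000000.0 };
            unsigned int i;

            for (i = 0; i < sizeof(sockets) / sizeof(sockets[0]); i++)
                    printf("%8.0f sockets -> %.2f entries per chain on average\n",
                           sockets[i], sockets[i] / buckets);
            return 0;
    }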
> So to me (512 * 1024) is a very reasonable limit and (with lockdep
> and spinlock debugging disabled) this makes the EHASH table consume
> 8MB on UP 64-bit and ~12MB on SMP 64-bit systems.
I still have my doubts that it makes sense to have a separate lock for each
bucket. It would probably be better to just divide the hash value by a factor
again and then use that to index a smaller, lock-only table.
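Something like this sketch (names and the lock count are made up, not the
actual kernel code; masking the low bits is equivalent to dividing when both
sizes are powers of two):

    #include <linux/spinlock.h>

    /* Sketch only: map many hash buckets onto a much smaller array of
     * locks by reusing the low bits of the bucket hash.  NR_EHASH_LOCKS
     * and ehash_lock_of() are hypothetical names.  Each lock still needs
     * spin_lock_init() at table setup time.
     */
    #define NR_EHASH_LOCKS 256      /* power of two, much smaller than ehash */

    static spinlock_t ehash_locks[NR_EHASH_LOCKS];

    static inline spinlock_t *ehash_lock_of(unsigned int hash)
    {
            /* hash already selects the bucket; its low bits pick the lock */
            return &ehash_locks[hash & (NR_EHASH_LOCKS - 1)];
    }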
-Andi