Date: Wed, 27 Aug 2008 23:51:58 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: dada1@...mosbay.com
Cc: shemminger@...tta.com, andi@...stfloor.org, davej@...hat.com,
	netdev@...r.kernel.org, j.w.r.degoede@....nl
Subject: Re: cat /proc/net/tcp takes 0.5 seconds on x86_64

From: Eric Dumazet <dada1@...mosbay.com>
Date: Thu, 28 Aug 2008 08:20:51 +0200

> But for route cache, it is probably doable since we added the
> rt_genid thing in commit 29e75252da20f3ab9e132c68c9aed156b87beae6
> ([IPV4] route cache: Introduce rt_genid for smooth cache
> invalidation)
>
> If we add a hash table for each "struct net"
> (net->ipv4.rt_hash_table), we then could do something sensible when
> an admin writes to /proc/sys/net/ipv4/route/hash_size or at
> rt_check_expire() time, if hash table is found to be full...

The synchronization and implementation are not a problem for the
route cache; I implemented this eons ago.

> 3) In rt_check_expire(), adds some metrics to trigger an expand of the
> hash table in case we found too many entries in it.

This is the problem, and it is why I didn't just commit the patch I
had back then.  We could not define a reasonable way to trigger hash
table growth.

GC attempts to keep a resident set of entries in the cache, and those
heuristics are guided by the table size itself.  So if you grow the
table too aggressively, this never has a chance to work.

You want to respond dynamically to traffic in a reasonable amount of
time, but you don't want to get tricked by bursts of RCU effects.

We never came up with an algorithm that addresses all of these issues.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
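[Editor's note: the growth-trigger problem discussed above — expand only in
response to sustained load, without reacting to bursts, and without pulling
the rug out from under a GC whose resident-set target is keyed off the table
size — can be sketched in userspace C. This is a hypothetical illustration,
not the patch David refers to; all names (toy_cache, cache_should_grow) and
both thresholds are invented for the example.]

/* Minimal sketch: grow the table only when the average chain length
 * stays above a threshold AND enough time has passed since the last
 * resize.  The rate limit is the hedge against bursts; the chain-length
 * threshold gives GC a stable table size to work against. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_cache {
	size_t buckets;          /* current hash table size            */
	size_t entries;          /* resident entries (insert/GC keep this) */
	unsigned long last_grow; /* jiffies-like timestamp of last resize  */
};

static bool cache_should_grow(const struct toy_cache *c,
			      unsigned long now,
			      size_t max_chain_avg,
			      unsigned long min_interval)
{
	if (now - c->last_grow < min_interval)
		return false;	/* rate-limit: a burst alone never triggers growth */
	/* average chain length = entries / buckets; compare without division */
	return c->entries > c->buckets * max_chain_avg;
}

A caller at rt_check_expire() time would sample the counters, call the
predicate, and double the table only when it returns true — the hard part,
as the email says, is choosing max_chain_avg and min_interval so that real
traffic shifts are tracked but transient spikes are not.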