Open Source and information security mailing list archives
Date: Mon, 26 Oct 2009 17:24:58 +0200
From: Denys Fedoryschenko <denys@...p.net.lb>
To: Octavian Purdila <opurdila@...acom.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>, Benjamin LaHaise <bcrl@...et.ca>,
	Stephen Hemminger <shemminger@...tta.com>, Cosmin Ratiu <cratiu@...acom.com>,
	netdev@...r.kernel.org
Subject: Re: [RFC] make per interface sysctl entries configurable

I tested it on PPPoE with 1k customers. It works flawlessly.

When there is a problem on the network and I get a massive user disconnect
followed by re-logins, the bottleneck is a lock somewhere in sysctl creation
(according to perf). After 200-300 interfaces PPPoE starts dying: the
connection rate drops to 20-50 customers per minute and the load average
jumps to 70-100 (I guess pppd processes are waiting their turn).

With this patch I am able to sustain a login rate of 200-300 customers per
minute, and perf top is "clean" now.

This option is definitely optional and doesn't cut any functionality by
default; it just gives more choice. And for a PPP (PPPoE/PPTP) NAS it is
very useful.

On Sunday 25 October 2009 19:54:49 Octavian Purdila wrote:
> RFC patches are attached.
>
> Another possible approach: add an interface flag and use it to decide
> whether we want per interface sysctl entries or not.
>
> Benchmarks for creating 1000 interfaces (with the ndst module previously
> posted on the list, ppc750 @ 800MHz machine):
>
> - without the patches:
>
> real	4m 38.27s
> user	0m 0.00s
> sys	2m 18.90s
>
> - with the patches:
>
> real	0m 0.10s
> user	0m 0.00s
> sys	0m 0.05s
>
> Thanks,
> tavi