Message-Id: <20070905235954.a2b4e3d2.billfink@mindspring.com>
Date: Wed, 5 Sep 2007 23:59:54 -0400
From: Bill Fink <billfink@...dspring.com>
To: jdb@...x.dk
Cc: Jesper Dangaard Brouer <hawk@...u.dk>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH 1/2]: [NET_SCHED]: Make all rate based scheduler work with TSO.

On Wed, 05 Sep 2007, Jesper Dangaard Brouer wrote:

> On Tue, 2007-09-04 at 13:40 -0400, Bill Fink wrote:
> > On Tue, 04 Sep 2007, Patrick McHardy wrote:
> > >
> > > Bill Fink wrote:
> > > > On Sat, 1 Sep 2007, Jesper Dangaard Brouer wrote:
> > > >
> > > Yes, you need to specify the MTU on the command line for
> > > jumbo frames.
> >
> > Thanks!  Works much better now, although it does slightly exceed
> > the specified rate.
>
> That's what happens with the current rate table system, as we use the
> lower boundary (when doing the packet-to-time lookups), especially with
> a high MTU, as the "resolution" of the rate table diminishes (mtu=9000
> gives cell_log=6, i.e. 2^6=64-byte "resolution" buckets).
>
> > [root@...g4 ~]# tc qdisc add dev eth2 root tbf rate 2gbit buffer 5000000 limit 18000 mtu 9000
> >
> > [root@...g4 ~]# ./nuttcp-5.5.5 -w10m 192.168.88.14
> > 2465.6729 MB / 10.08 sec = 2051.8241 Mbps 19 %TX 13 %RX

That doesn't seem to account for the magnitude of the rate exceeding.
In the worst case (rough calculation):

	(1+64/9000)*2000 = 2014.2222 Mbps

Now if that were 256 rather than 64:

	(1+256/9000)*2000 = 2056.8888 Mbps

Or maybe the packet overhead is calculated wrong for the 9000 MTU case
(just wild speculation on my part).

						-Bill

-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html