Date: Fri, 16 Jul 2010 18:41:42 +0100
From: Ed W <lists@...dgooses.com>
To: Patrick McManus <mcmanus@...ksong.com>
CC: "H.K. Jerry Chu" <hkjerry.chu@...il.com>, David Miller <davem@...emloft.net>, davidsen@....com, linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: Raise initial congestion window size / speedup slow start?

> and while I'm asking for info, can you expand on the conclusion
> regarding poor cache hit rates for reusing learned cwnds? (ok, I admit I
> only read the slides.. maybe the paper has more info?)

My guess is that this result is specific to Google and their servers? We can probably stereotype the world into two pools of devices:

1) Devices on a pool of fast networking, but connected to the rest of the world through a relatively slow router

2) Devices connected via a high-speed network, where the bottleneck device is many hops down the line and well away from us

I'm thinking here of 1) client users behind broadband routers, wireless, 3G, dialup, etc., and 2) public servers that have obviously been deliberately placed in locations with high levels of interconnectivity.

I think history information could be more useful for clients in category 1), because there is a much higher probability that their most restrictive device is one hop away and hence affects all connections; only relatively occasionally is the bottleneck multiple hops away. For devices in category 2) it's much harder, because the restriction will usually be many hops away and effectively you are trying to figure out and cache the speed of every ADSL router out there... For sure you can probably figure out how to cluster this stuff and say that pool there is 56K dialup, that pool there is "broadband", that pool is cell phone, etc., but it's probably hard to do better than that?

So my guess is that this is why Google have had poor results investigating cwnd caching?
However, I would suggest that whilst it's of little value for the server side, it remains a very interesting idea for the client side, where the cache hit ratio would seem to be dramatically higher?

I haven't studied the code, but given that there is a userspace ability to change the initial cwnd through the ip utility, it would seem likely that relatively little coding would now be required to implement some kind of limited cwnd caching and experiment with whether this is a valuable addition? I would have thought that if you are only fiddling with devices behind a broadband router, there is little chance of you "crashing the internet" with these kinds of experiments?

Good luck

Ed W
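P.S. As a sketch of the kind of per-route fiddling I mean, via the ip utility's initcwnd route option (the gateway, device, and window value below are made-up placeholders, and actually applying the route change needs a recent enough kernel plus root):

```shell
# Hypothetical example values -- substitute your own gateway/device.
GATEWAY=192.168.1.1
DEV=eth0
INITCWND=10   # initial congestion window, in segments

# Build the ip-route command that would raise initcwnd for the
# default route; running it for real requires root privileges.
CMD="ip route change default via $GATEWAY dev $DEV initcwnd $INITCWND"
echo "$CMD"
```

A per-destination cache could then boil down to issuing a similar `ip route change <prefix>/<len> ... initcwnd <n>` per learned destination, rather than touching the default route.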