Message-ID: <545BA513.2070801@hp.com>
Date: Thu, 06 Nov 2014 08:42:59 -0800
From: Rick Jones <rick.jones2@...com>
To: Eric Dumazet <eric.dumazet@...il.com>
CC: David Miller <davem@...emloft.net>, netdev <netdev@...r.kernel.org>,
    Or Gerlitz <ogerlitz@...lanox.com>, Willem de Bruijn <willemb@...gle.com>
Subject: Re: [PATCH net-next] net: gro: add a per device gro flush timer

On 11/05/2014 06:39 PM, Eric Dumazet wrote:
> On Wed, 2014-11-05 at 18:14 -0800, Eric Dumazet wrote:
>> On Wed, 2014-11-05 at 17:38 -0800, Rick Jones wrote:
>>
>>> Speaking of QPS, what happens to 200 TCP_RR tests when the feature is
>>> enabled?
>
> The possible reduction of QPS happens when you have a single flow like
> TCP_RR -- -r 40000,40000
>
> (Because we have one single TCP packet with 40000 bytes of payload,
> the application is woken up once when the Push flag is received)
>
> So cpu efficiency is way better, but the application has to copy 40000
> bytes in one go _after_ the Push flag, instead of being able to copy
> part of the data _before_ receiving the Push flag.

Thanks.  That isn't too unlike what I've seen happen in the past with,
say, an 8K request size and switching back and forth between a 1500 and
9000 byte MTU.

happy benchmarking,

rick
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
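[Editor's note: as a point of reference, the single-flow case described above can be tried with a netperf request/response test like the one quoted. A minimal sketch, assuming the patch's per-device knob ends up exposed as /sys/class/net/<dev>/gro_flush_timeout taking a value in nanoseconds; the interface name eth0, the 20000 ns value, and the peer address are placeholders, not taken from the thread:

    # enable the per-device GRO flush timer (nanoseconds; 0 disables it)
    echo 20000 > /sys/class/net/eth0/gro_flush_timeout

    # single-flow request/response test with 40000-byte requests and
    # responses, as in the quoted example
    netperf -H 192.168.0.2 -t TCP_RR -- -r 40000,40000

With the timer enabled, GRO can aggregate the 40000-byte response into a single large packet, so the receiving application wakes once on the Push flag rather than copying the data piecemeal as it arrives.]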