Message-Id: <20190522.180058.887469871482412864.davem@davemloft.net>
Date: Wed, 22 May 2019 18:00:58 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: sunilmut@...rosoft.com
Cc: kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
	sashal@...nel.org, mikelley@...rosoft.com, netdev@...r.kernel.org,
	linux-hyperv@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next] hv_sock: perf: loop in send() to maximize bandwidth

From: Sunil Muthuswamy <sunilmut@...rosoft.com>
Date: Wed, 22 May 2019 23:10:44 +0000

> Currently, the hv_sock send() iterates once over the buffer, puts data into
> the VMBUS channel and returns. It doesn't maximize on the case when there
> is a simultaneous reader draining data from the channel. In such a case,
> the send() can maximize the bandwidth (and consequently minimize the cpu
> cycles) by iterating until the channel is found to be full.
>
> Perf data:
> Total Data Transfer: 10GB/iteration
> Single threaded reader/writer, Linux hvsocket writer with Windows hvsocket
> reader
> Packet size: 64KB
> CPU sys time was captured using the 'time' command for the writer to send
> 10GB of data.
> 'Send Buffer Loop' is with the patch applied.
> The values below are over 10 iterations.
 ...
> Observation:
> 1. The avg throughput doesn't really change much with this change for this
> scenario. This is most probably because the bottleneck on throughput is
> somewhere else.
> 2. The average system (or kernel) cpu time goes down by 10%+ with this
> change, for the same amount of data transfer.
>
> Signed-off-by: Sunil Muthuswamy <sunilmut@...rosoft.com>

Applied.
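[Editor's note] The "iterate until the channel is full" idea described in the
patch can be illustrated with a small, self-contained C sketch. This is not
the actual hv_sock code or the VMBUS API: toy_channel, toy_channel_write and
send_loop are hypothetical stand-ins for the real ring-buffer primitives, used
only to show the pattern of looping over the buffer instead of doing a single
pass per send().

	#include <stddef.h>
	#include <string.h>
	#include <sys/types.h>

	/* Toy stand-in for the VMBUS channel: a fixed-capacity byte queue. */
	struct toy_channel {
		char data[4096];
		size_t used;
	};

	/* Bytes of free space left in the toy channel. */
	static size_t toy_channel_space(const struct toy_channel *ch)
	{
		return sizeof(ch->data) - ch->used;
	}

	/* Copy as much of buf as currently fits; return the number of bytes queued. */
	static size_t toy_channel_write(struct toy_channel *ch, const char *buf, size_t len)
	{
		size_t n = toy_channel_space(ch);

		if (len < n)
			n = len;
		memcpy(ch->data + ch->used, buf, n);
		ch->used += n;
		return n;
	}

	/*
	 * Keep writing until either all data has been queued or the channel is
	 * full, rather than returning after one pass. If a concurrent reader is
	 * draining the channel, later iterations find fresh space, so more data
	 * goes out per send() call and fewer calls (and kernel CPU cycles) are
	 * needed for the same transfer.
	 */
	static ssize_t send_loop(struct toy_channel *ch, const char *buf, size_t len)
	{
		size_t sent = 0;

		while (sent < len) {
			size_t n = toy_channel_write(ch, buf + sent, len - sent);

			if (n == 0)	/* channel full; report a partial send */
				break;
			sent += n;
		}
		return sent;
	}

In the real driver the loop would stop on "channel full" and let the caller
block or retry; the sketch simply returns the number of bytes queued.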