Message-ID: <PU1P153MB01696CC308B2F8ECE044A611BF050@PU1P153MB0169.APCP153.PROD.OUTLOOK.COM>
Date: Sun, 19 May 2019 19:30:07 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Sunil Muthuswamy <sunilmut@...rosoft.com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Sasha Levin <sashal@...nel.org>,
"David S. Miller" <davem@...emloft.net>,
Michael Kelley <mikelley@...rosoft.com>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] hv_sock: perf: loop in send() to maximize bandwidth
> From: Sunil Muthuswamy <sunilmut@...rosoft.com>
> Sent: Thursday, May 16, 2019 7:05 PM
> Currently, the hv_sock send() iterates once over the buffer, puts data into
> the VMBUS channel, and returns. It doesn't take advantage of the case where
> a simultaneous reader is draining data from the channel. In that case,
> send() can maximize the bandwidth (and consequently minimize the CPU
> cycles) by iterating until the channel is found to be full.
> ...
> Observation:
> 1. The average throughput doesn't change much with this change in this
> scenario, most probably because the bottleneck on throughput lies
> elsewhere.
> 2. The average system (or kernel) CPU time goes down by 10%+ with this
> change, for the same amount of data transferred.
>
> Signed-off-by: Sunil Muthuswamy <sunilmut@...rosoft.com>
Reviewed-by: Dexuan Cui <decui@...rosoft.com>
The patch looks good. Thanks, Sunil!
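
For readers who want a concrete picture of the change described above, it
amounts to a pattern along the lines of the sketch below. This is only an
illustrative sketch: struct chan, channel_has_space(), put_data(), and
MAX_CHUNK are hypothetical placeholders, not the actual hv_sock/VMBus
interfaces used by the real patch in the kernel's hv_sock transport.

/*
 * Minimal sketch of the "loop in send()" idea.  All names here are
 * placeholders chosen for illustration, not the real driver API.
 */
#include <stddef.h>

#define MAX_CHUNK 4096          /* illustrative per-iteration write size */

struct chan;                                            /* opaque placeholder */
int channel_has_space(struct chan *ch);                 /* placeholder query  */
int put_data(struct chan *ch, const char *p, size_t n); /* placeholder write  */

static long send_loop(struct chan *ch, const char *buf, size_t len)
{
	size_t written = 0;

	/*
	 * Before the change: one pass over the buffer, then return.
	 * After the change: keep writing while data remains and the
	 * channel still has room.  A concurrent reader draining the
	 * channel frees space, so the loop can make further progress
	 * within a single send() call instead of returning early.
	 */
	while (written < len && channel_has_space(ch)) {
		size_t chunk = len - written;
		int ret;

		if (chunk > MAX_CHUNK)
			chunk = MAX_CHUNK;

		ret = put_data(ch, buf + written, chunk);
		if (ret < 0)
			return written ? (long)written : ret;

		written += chunk;
	}
	return (long)written;
}

The savings come from fewer transitions back to the caller per amount of
data moved, which matches the ~10% reduction in kernel CPU time reported
above while throughput stays bounded by whatever the real bottleneck is.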
Thanks,
-- Dexuan