Message-ID: <1329840530.31413.6.camel@oc1677441337.ibm.com>
Date:	Tue, 21 Feb 2012 08:08:50 -0800
From:	Sridhar Samudrala <sri@...ibm.com>
To:	Jason Wang <jasowang@...hat.com>
Cc:	Shirley Ma <mashirle@...ibm.com>,
	"Michael S. Tsirkin" <mst@...hat.com>,
	Anthony Liguori <aliguori@...ibm.com>, netdev@...r.kernel.org,
	Tom Lendacky <toml@...ibm.com>,
	Cristian Viana <vianac@...ibm.com>
Subject: Re: [PATCH 2/2] vhost-net: add a spin_threshold parameter

On Tue, 2012-02-21 at 14:38 +0800, Jason Wang wrote:
> On 02/21/2012 02:28 PM, Shirley Ma wrote:
> > On Tue, 2012-02-21 at 13:34 +0800, Jason Wang wrote:
> >> On 02/21/2012 09:35 AM, Shirley Ma wrote:
> >>> We tried a similar approach before, using a minimum timer for
> >>> handle_tx to stay in the loop and accumulate more packets before
> >>> enabling guest notification. It did give better TCP_RR and UDP_RR
> >>> results. However, we think this is just a debug patch. We really
> >>> need to first understand why handle_tx can't see more packets to
> >>> process for a multiple-instance request/response type of workload.
> >>> Spinning in this loop is not a good solution.
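
For reference, the spinning approach amounts to roughly the sketch
below. spin_threshold matches the name in the patch subject, but the
helpers and structure here are stand-ins, not the actual patch code:

	/* Illustration only: keep polling the tx ring for a while instead
	 * of re-enabling guest notification as soon as it looks empty. */
	struct vq;					/* opaque stand-in */
	int  fetch_and_send_one(struct vq *vq);		/* 1 if a packet was sent */
	void enable_guest_notify(struct vq *vq);

	static void handle_tx_sketch(struct vq *vq, unsigned long spin_threshold)
	{
		unsigned long empty_polls = 0;

		while (empty_polls < spin_threshold) {
			if (fetch_and_send_one(vq))
				empty_polls = 0;	/* got work, keep going */
			else
				empty_polls++;		/* ring looked empty */
		}
		enable_guest_notify(vq);		/* give up, wait for a kick */
	}
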
> >> Spinning helps with latency, but it looks like we need some
> >> adaptive method to adjust the threshold dynamically, such as
> >> monitoring the minimum time gap between two packets. For
> >> throughput, if we can improve the batching of small packets we can
> >> improve it. I've tried using the event index to delay the tx kick
> >> until a specified number of packets were batched in the virtqueue.
> >> Tests show a throughput improvement for small packets, since the
> >> number of exits was reduced greatly (packets per exit and CPU
> >> utilization went up), but it hurts the performance of other
> >> workloads. This was only for debugging, but it confirms that there
> >> is something to improve in the batching.
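
The delayed-kick experiment described above would look roughly like
this on the guest tx path. batch_threshold and the helpers are
hypothetical names, not the code that was actually tested:

	/* Illustration only: with VIRTIO_RING_F_EVENT_IDX the host is
	 * notified only when the avail index crosses the event index, so
	 * the guest can queue several packets and kick once. */
	struct txq { unsigned int pending; };
	void add_to_vring(struct txq *q, void *pkt);	/* hypothetical */
	void kick_host(struct txq *q);			/* hypothetical */

	static unsigned int batch_threshold = 8;	/* made-up tunable */

	static void queue_tx(struct txq *q, void *pkt, int more_to_come)
	{
		add_to_vring(q, pkt);		/* queue, no notification yet */
		if (++q->pending >= batch_threshold || !more_to_come) {
			kick_host(q);		/* one exit for many packets */
			q->pending = 0;
		}
	}

More packets per exit is exactly the packets/exit improvement noted
above; the open question is how to pick the threshold without hurting
latency-sensitive workloads.
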
> > Our test case was 60 instances of 256/256-byte TCP_RR or UDP_RR. In
> > theory there should be multiple packets in the queue by the time
> > vhost gets notified, but the debugging output showed only a few, or
> > even just one, packet in the queue. So the question is: why is the
> > time gap between two packets that big?
> >
> > Shirley
> 
> Not sure whether it's related, but did you try disabling the Nagle
> algorithm during the test?

This is a 60-instance, 256-byte request-response test. Each instance's
requests are independent and sub-MTU in size, and the next request is
not sent until the response is received. So Nagle doesn't delay any
packets in this workload.
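
If anyone wants to double-check, turning Nagle off is one setsockopt
per test socket (standard sockets API):

	#include <netinet/in.h>		/* IPPROTO_TCP */
	#include <netinet/tcp.h>	/* TCP_NODELAY */
	#include <sys/socket.h>
	#include <stdio.h>

	/* fd is the already-connected test socket */
	static void disable_nagle(int fd)
	{
		int one = 1;

		if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
			perror("setsockopt(TCP_NODELAY)");
	}

But with at most one outstanding sub-MTU request per connection there
is never a second small segment for Nagle to hold back, so it should
make no difference here.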

Thanks
Sridhar

