Date:	Mon, 10 Mar 2014 13:15:48 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	David Miller <davem@...emloft.net>
CC:	mst@...hat.com, kvm@...r.kernel.org,
	virtio-dev@...ts.oasis-open.org,
	virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
	linux-kernel@...r.kernel.org, qinchuanyu@...wei.com
Subject: Re: [PATCH net V2] vhost: net: switch to use data copy if pending
 DMAs exceed the limit

On 03/08/2014 05:39 AM, David Miller wrote:
> From: Jason Wang <jasowang@...hat.com>
> Date: Fri,  7 Mar 2014 13:28:27 +0800
>
>> This is because the delay added by htb may delay the completion of
>> DMAs and cause the pending DMAs for tap0 to exceed the limit
>> (VHOST_MAX_PEND). In this case vhost stops handling tx requests until
>> htb sends some packets. The problem here is that all packet
>> transmission is blocked, even for packets that are not going to VM2.
> Isn't this essentially head of line blocking?

Yes it is.
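
To make the pre-patch behaviour concrete, here is a minimal userspace
model of that policy (illustrative only; tx_blocked() is a made-up
helper and VHOST_MAX_PEND is taken as 128 here, this is not the actual
drivers/vhost/net.c code): once the pending zerocopy DMAs hit the
limit, the tx loop stops for every destination, not just the congested
one.

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the pre-patch policy (hypothetical helper, not
 * the real vhost-net code): once VHOST_MAX_PEND zerocopy DMAs are in
 * flight, tx handling stops completely, so packets that are not
 * destined to the congested tap device are blocked as well. */
#define VHOST_MAX_PEND 128

static bool tx_blocked(unsigned int pending_dmas)
{
	return pending_dmas >= VHOST_MAX_PEND;
}

int main(void)
{
	printf("pending=64  -> %s\n", tx_blocked(64) ? "blocked" : "sending");
	printf("pending=128 -> %s\n", tx_blocked(128) ? "blocked" : "sending");
	return 0;
}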
>> We can solve this issue by relaxing the limit a little bit: switch to
>> data copy instead of stopping tx when the number of pending DMAs
>> exceeds half of the vq size. This is safe because:
>>
>> - The number of pending DMAs is still limited (to half of the vq size)
>> - Out-of-order completion during the mode switch ensures that most of
>>   the tx buffers are freed in time in the guest.
>>
>> So even if about 50% of the packets are delayed in the zero-copy case,
>> vhost can continue transmitting through data copy.
>>
>> Test result:
>>
>> Before this patch:
>> VM1 to VM2 throughput is 9.3Mbit/s
>> VM1 to External throughput is 40Mbit/s
>> CPU utilization is 7%
>>
>> After this patch:
>> VM1 to VM2 throughput is 9.3Mbit/s
>> VM1 to External throughput is 93Mbit/s
>> CPU utilization is 16%
>>
>> A complete performance test on 40GbE shows no obvious change in either
>> throughput or CPU utilization with this patch.
>>
>> The patch only solves this issue for unlimited sndbuf. We still need a
>> solution for limited sndbuf.
>>
>> Cc: Michael S. Tsirkin <mst@...hat.com>
>> Cc: Qin Chuanyu <qinchuanyu@...wei.com>
>> Signed-off-by: Jason Wang <jasowang@...hat.com>
> I'd like some vhost experts reviewing this before I apply it.

Sure.
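
For reviewers skimming the idea, here is a minimal sketch of the
relaxed policy described above (illustrative only; use_zerocopy() and
the vq size of 256 are assumptions for this example, not the actual
patch code): instead of stopping tx, each packet simply falls back to
data copy once more than half of the vq has pending DMAs.

#include <stdbool.h>
#include <stdio.h>

/* Simplified model of the relaxed policy (hypothetical helper, not
 * the actual patch): keep using zerocopy while at most half of the vq
 * entries have pending DMAs, otherwise fall back to data copy so
 * transmission never stalls and the pending count stays bounded. */
static bool use_zerocopy(unsigned int pending_dmas, unsigned int vq_size)
{
	return pending_dmas <= vq_size / 2;
}

int main(void)
{
	unsigned int vq_size = 256;

	for (unsigned int pending = 0; pending <= vq_size; pending += 64)
		printf("pending=%3u -> %s\n", pending,
		       use_zerocopy(pending, vq_size) ? "zerocopy" : "data copy");
	return 0;
}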