Date:	Mon, 10 Mar 2014 13:15:48 +0800
From:	Jason Wang <>
To:	David Miller <>
Subject: Re: [PATCH net V2] vhost: net: switch to use data copy if pending
 DMAs exceed the limit

On 03/08/2014 05:39 AM, David Miller wrote:
> From: Jason Wang <>
> Date: Fri,  7 Mar 2014 13:28:27 +0800
>> This is because the delay added by htb may delay the completion of
>> DMAs and cause the number of pending DMAs for tap0 to exceed the
>> limit (VHOST_MAX_PEND). In this case vhost stops handling tx
>> requests until htb sends some packets. The problem here is that all
>> packet transmission is blocked, even traffic that does not go to VM2.
> Isn't this essentially head-of-line blocking?

Yes it is.
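
To make this concrete, here is a minimal userspace model of the
pre-patch behavior (a sketch only, not the actual drivers/vhost/net.c
code; VHOST_MAX_PEND matches the kernel macro name, while the struct
and helper names here are made up for illustration):

/* Simplified model: once the number of in-flight zero-copy DMAs
 * reaches VHOST_MAX_PEND, tx processing stops completely, so packets
 * for fast destinations are stuck behind the slow one. */
#include <stdbool.h>
#include <stdio.h>

#define VHOST_MAX_PEND 128

struct txq_model {
	unsigned int upend_idx; /* next slot for an in-flight DMA */
	unsigned int done_idx;  /* oldest not-yet-completed DMA */
	unsigned int vq_size;   /* indices wrap at this value */
};

static unsigned int pending_dmas(const struct txq_model *q)
{
	return (q->upend_idx + q->vq_size - q->done_idx) % q->vq_size;
}

/* Pre-patch policy: stop handling tx while too many DMAs are pending,
 * regardless of where the next packet is going. */
static bool must_stop_tx(const struct txq_model *q)
{
	return pending_dmas(q) >= VHOST_MAX_PEND;
}

int main(void)
{
	/* htb delays completions, so done_idx lags far behind upend_idx */
	struct txq_model q = { .upend_idx = 130, .done_idx = 2, .vq_size = 256 };

	printf("pending=%u stop_tx=%d\n", pending_dmas(&q), must_stop_tx(&q));
	return 0;
}
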
>> We can solve this issue by relaxing it a little bit: switching to
>> data copy instead of stopping tx when the number of pending DMAs
>> exceeds half of the vq size. This is safe because:
>> - The number of pending DMAs is still limited (half of the vq size)
>> - The out-of-order completion during the mode switch makes sure that
>>   most of the tx buffers are freed in time in the guest.
>> So even if about 50% of packets are delayed in the zero-copy case,
>> vhost can continue the transmission through data copy.
>> Test results:
>>
>> Before this patch:
>> VM1 to VM2 throughput is 9.3Mbit/s
>> VM1 to External throughput is 40Mbit/s
>> CPU utilization is 7%
>>
>> After this patch:
>> VM1 to VM2 throughput is 9.3Mbit/s
>> VM1 to External throughput is 93Mbit/s
>> CPU utilization is 16%
>>
>> A complete performance test on 40GbE shows no obvious change in
>> either throughput or CPU utilization with this patch.
>> The patch only solves this issue for unlimited sndbuf. We still need
>> a solution for limited sndbuf.
>> Cc: Michael S. Tsirkin <>
>> Cc: Qin Chuanyu <>
>> Signed-off-by: Jason Wang <>
> I'd like some vhost experts reviewing this before I apply it.
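
For completeness, a matching sketch of the relaxed policy the patch
describes (again a userspace model, not the real vhost code;
GOODCOPY_LEN and the field names are illustrative): tx is never
stopped, and each packet independently falls back to data copy once
the pending DMAs reach half of the vq size.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define GOODCOPY_LEN 256 /* below this, a plain copy is cheap anyway */

struct txq_model {
	unsigned int upend_idx; /* next slot for an in-flight DMA */
	unsigned int done_idx;  /* oldest not-yet-completed DMA */
	unsigned int vq_size;
};

static unsigned int pending_dmas(const struct txq_model *q)
{
	return (q->upend_idx + q->vq_size - q->done_idx) % q->vq_size;
}

/* Proposed policy: never stop tx; choose zero-copy per packet and
 * fall back to data copy once pending DMAs reach half the vq size. */
static bool use_zerocopy(const struct txq_model *q, size_t len)
{
	return len >= GOODCOPY_LEN && pending_dmas(q) < q->vq_size / 2;
}

int main(void)
{
	/* 150 DMAs pending, more than vq_size / 2 == 128 */
	struct txq_model q = { .upend_idx = 200, .done_idx = 50, .vq_size = 256 };

	printf("zerocopy for a 1500 byte packet: %d\n",
	       use_zerocopy(&q, 1500));
	return 0;
}

Even when completions stall behind htb, copied packets keep flowing,
and the out-of-order completions free the remaining zero-copy buffers
in the guest.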
