Message-ID: <88D661ADF6AFBF42B2AB88D8E7682B0901FC412A@EXMBX-SZMAIL011.tencent.com>
Date:   Wed, 28 Mar 2018 04:01:34 +0000
From:   haibinzhang(张海斌) <haibinzhang@...cent.com>
To:     Jason Wang <jasowang@...hat.com>,
        "mst@...hat.com" <mst@...hat.com>,
        "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
        "virtualization@...ts.linux-foundation.org" 
        <virtualization@...ts.linux-foundation.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     lidongchen(陈立东) <lidongchen@...cent.com>,
        yunfangtai(台运方) <yunfangtai@...cent.com>
Subject: Re: [PATCH] vhost-net: add time limitation for tx polling(Internet
 mail)

On 2018/03/27 19:26, Jason Wang wrote:
>On 2018/03/27 17:12, haibinzhang wrote:
>> handle_tx() will delay rx for a long time when busy tx polling udp packets
>> with a short length (i.e. 1-byte udp payload), because the VHOST_NET_WEIGHT
>> setting takes into account only sent bytes, not time.
>
>Interesting.
>
>Looking at vhost_can_busy_poll(), it tries to poke pending vhost work and
>exits the busy loop if it finds one. So I believe something is blocking the
>work queuing. E.g. does reverting 8241a1e466cd56e6c10472cac9c1ad4e54bc65db
>fix the issue?

"busy tx polling" means using netperf send udp packets with 1 bytes payload(total 47bytes frame lenght), 
and handle_tx() will be busy sending packets continuously.
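
For reference, the load is along the lines of "netperf -H <peer> -t UDP_STREAM -- -m 1"
(an illustrative invocation, not necessarily the exact one used; the test-specific
"-m 1" sets a 1-byte send size). As a rough back-of-the-envelope figure, with about 50
bytes accounted per packet, the bytes-only VHOST_NET_WEIGHT budget (0x80000) is only
exhausted after on the order of 10,000 packets in a single handle_tx() run, which is
why rx is starved for so long.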

>
>>   It's not fair to handle_rx(),
>> so we need to limit the max time of tx polling.
>>
>> ---
>>   drivers/vhost/net.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index 8139bc70ad7d..dc9218a3a75b 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -473,6 +473,7 @@ static void handle_tx(struct vhost_net *net)
>>   	struct socket *sock;
>>   	struct vhost_net_ubuf_ref *uninitialized_var(ubufs);
>>   	bool zcopy, zcopy_used;
>> +	unsigned long start = jiffies;
>
>Checking jiffies is tricky; it needs to be converted to ms or some other unit.
>
>>   
>>   	mutex_lock(&vq->mutex);
>>   	sock = vq->private_data;
>> @@ -580,7 +581,7 @@ static void handle_tx(struct vhost_net *net)
>>   		else
>>   			vhost_zerocopy_signal_used(net, vq);
>>   		vhost_net_tx_packet(net);
>> -		if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
>> +		if (unlikely(total_len >= VHOST_NET_WEIGHT) || unlikely(jiffies - start >= 1)) {
>
>How is the value 1 determined here? And we need complete testing to make sure
>this won't affect other use cases.

We just want <1ms ping latency, but honestly we are not sure what value is reasonable.
We have some test results using netperf without this patch, as follows (ping average
latency while the UDP stream runs):

    UDP payload         1 byte     100 bytes    1000 bytes    1400 bytes
    Ping avg latency    25 ms      10 ms        2 ms          1.5 ms

What other test cases do you have in mind?
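
To make the intent clearer, here is a rough sketch of expressing the budget in ms rather
than in raw jiffies (illustration only, not the posted patch; the
VHOST_NET_TX_TIME_LIMIT_MS name/value and the helper are made up for discussion):

    #include <linux/jiffies.h>

    /* Illustration: time budget in ms, wraparound-safe via time_after(). */
    #define VHOST_NET_TX_TIME_LIMIT_MS	1

    static bool tx_quota_exceeded(size_t total_len, unsigned long start)
    {
    	return total_len >= VHOST_NET_WEIGHT ||
    	       time_after(jiffies,
    			  start + msecs_to_jiffies(VHOST_NET_TX_TIME_LIMIT_MS));
    }

    /* and in the tx loop of handle_tx():
     *	if (unlikely(tx_quota_exceeded(total_len, start))) {
     *		vhost_poll_queue(&vq->poll);
     *		break;
     *	}
     */

Note that jiffies resolution is 1/HZ, so with e.g. HZ=250 a nominal 1 ms budget still
rounds up to one 4 ms tick; that is part of why the right value needs benchmarking.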

>
>Another thought is to introduce another limit on #packets, but this needs
>benchmarking too.
>
>Thanks
>
>>   			vhost_poll_queue(&vq->poll);
>>   			break;
>>   		}
>
>
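
On the #packets idea, here is a rough sketch of that variant for comparison
(illustration only; the VHOST_NET_PKT_WEIGHT name and the 256 value are made up here,
not an existing define):

    /* Illustration only: also cap the number of packets handled per
     * handle_tx() run, independent of their size.
     */
    #define VHOST_NET_PKT_WEIGHT	256

    /* with an "int sent_pkts = 0;" added at the top of handle_tx(): */
    	++sent_pkts;
    	if (unlikely(total_len >= VHOST_NET_WEIGHT) ||
    	    unlikely(sent_pkts >= VHOST_NET_PKT_WEIGHT)) {
    		vhost_poll_queue(&vq->poll);
    		break;
    	}

Either way, the cutoff value would have to come from the same kind of latency
measurements as above.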
