Message-ID: <574D039E.5020101@redhat.com>
Date:	Tue, 31 May 2016 11:23:10 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
Cc:	kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
	netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH V2 2/2] vhost_net: conditionally enable tx polling



On 2016-05-30 23:55, Michael S. Tsirkin wrote:
> On Mon, May 30, 2016 at 02:47:54AM -0400, Jason Wang wrote:
>> We always poll the socket for tx; this is suboptimal since:
>>
>> - it is only needed when we exceed the sndbuf of the socket.
>> - since we use two independent polls for tx and vq, this slightly
>>    increases the waitqueue traversal time and, more importantly, vhost
>>    cannot benefit from commit
>>    9e641bdcfa4ef4d6e2fbaa59c1be0ad5d1551fd5 ("net-tun: restructure
>>    tun_do_read for better sleep/wakeup efficiency") even though we've
>>    stopped rx polling during handle_rx, since the tx poll is still left
>>    in the waitqueue.
> Why is this an issue?
> sock_def_write_space only wakes up when the queue is at least half empty,
> not on each packet:
>          if ((atomic_read(&sk->sk_wmem_alloc) << 1) <= sk->sk_sndbuf)
>
> I suspect the issue is with your previous patch,
> which now pokes at the spinlock on the data path
> where it did not before.
>
> Is that right?

The problem is not the tx wakeup but the rx wakeup. Patch 1 removes the 
rx poll but still leaves the tx poll in the waitqueue. So in 
sock_def_readable(), skwq_has_sleeper() returns true and we still need 
to traverse the waitqueue and touch its spinlock. With this patch, tx 
poll is disabled unless the tx load is heavy, so sock_def_readable() 
can return very quickly.
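
For reference, the rx wakeup path I'm talking about looks roughly like
this (a simplified sketch of sock_def_readable() from net/core/sock.c,
not an exact quote of any particular tree):

static void sock_def_readable(struct sock *sk)
{
	struct socket_wq *wq;

	rcu_read_lock();
	wq = rcu_dereference(sk->sk_wq);
	/* true as long as anything (e.g. the tx poll) sits in the
	 * waitqueue; the wakeup below then takes the waitqueue spinlock
	 * for every incoming packet
	 */
	if (skwq_has_sleeper(wq))
		wake_up_interruptible_sync_poll(&wq->wait, POLLIN | POLLPRI |
						POLLRDNORM | POLLRDBAND);
	sk_wake_async(sk, SOCK_WAKE_WAITD, POLL_IN);
	rcu_read_unlock();
}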

>
>
>> Fix this by conditionally enabling tx polling only when -EAGAIN is
>> met.
>>
>> Tests show about an 8% improvement in guest rx pps.
>>
>> Before: ~1350000
>> After:  ~1460000
>>
>> Signed-off-by: Jason Wang <jasowang@...hat.com>
>> ---
>>   drivers/vhost/net.c | 3 +++
>>   1 file changed, 3 insertions(+)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index e91603b..5a05fa0 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -378,6 +378,7 @@ static void handle_tx(struct vhost_net *net)
>>   		goto out;
>>   
>>   	vhost_disable_notify(&net->dev, vq);
>> +	vhost_net_disable_vq(net, vq);
>>   
>>   	hdr_size = nvq->vhost_hlen;
>>   	zcopy = nvq->ubufs;
>> @@ -459,6 +460,8 @@ static void handle_tx(struct vhost_net *net)
>>   					% UIO_MAXIOV;
>>   			}
>>   			vhost_discard_vq_desc(vq, 1);
>> +			if (err == -EAGAIN)
>> +				vhost_net_enable_vq(net, vq);
>>   			break;
>>   		}
>>   		if (err != len)
>> -- 
>> 1.8.3.1
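
To make the intent a bit more explicit, the resulting handle_tx() flow
with this patch applied is roughly the following (heavily simplified
sketch; zerocopy and most error handling omitted):

static void handle_tx(struct vhost_net *net)
{
	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
	struct vhost_virtqueue *vq = &nvq->vq;
	...
	vhost_disable_notify(&net->dev, vq);
	/* new: stop polling the socket for POLLOUT by default */
	vhost_net_disable_vq(net, vq);

	for (;;) {
		/* get a descriptor, build the msghdr, then: */
		err = sock->ops->sendmsg(sock, &msg, len);
		if (unlikely(err < 0)) {
			vhost_discard_vq_desc(vq, 1);
			/* new: re-arm tx polling only when the socket is
			 * actually full, i.e. sendmsg returned -EAGAIN
			 */
			if (err == -EAGAIN)
				vhost_net_enable_vq(net, vq);
			break;
		}
		...
	}
	...
}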
