Message-ID: <4EC38B07.6060906@redhat.com>
Date:	Wed, 16 Nov 2011 18:05:59 +0800
From:	jason wang <jasowang@...hat.com>
To:	Krishna Kumar2 <krkumar2@...ibm.com>
CC:	Asias He <asias.hejun@...il.com>, gorcunov@...il.com,
	kvm@...r.kernel.org, Sasha Levin <levinsasha928@...il.com>,
	mingo@...e.hu, "Michael S. Tsirkin" <mst@...hat.com>,
	netdev@...r.kernel.org, penberg@...nel.org,
	Rusty Russell <rusty@...tcorp.com.au>,
	virtualization@...ts.linux-foundation.org
Subject: Re: [RFC] kvm tools: Implement multiple VQ for virtio-net

On 11/16/2011 05:09 PM, Krishna Kumar2 wrote:
> jason wang <jasowang@...hat.com> wrote on 11/16/2011 11:40:45 AM:
>
> Hi Jason,
>
>> Do you have any thoughts on how to solve the issue of flow handling?
> So far nothing concrete.
>
>> Maybe getting some performance numbers first would be better; it would
>> let us know where we are. While testing my patchset, I found a big
>> regression in small packet transmission, and more retransmissions were
>> noticed. This may also be an issue of flow affinity. One interesting
>> thing would be to see whether this happens with your patches :)
> I haven't got any results for small packets, but will run tests this
> week and send an update. I remember my earlier patches having a
> regression for small packets.
>
>> I've played with a basic flow director implementation based on my
>> series, which tries to make sure the packets of a flow are handled by
>> the same vhost thread/guest vcpu. This is done by:
>>
>> - bind each virtqueue to a guest cpu
>> - record the hash-to-queue mapping when the guest sends packets, and use
>> that mapping to choose the virtqueue when forwarding packets to the
>> guest (sketched below)
>>
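>> Roughly, the record/lookup side would look like this (an untested
>> sketch, not the actual code in my series; the table size and all the
>> names here are made up):
>>
>> #include <stdint.h>
>>
>> #define FLOW_TBL_SIZE 256        /* power of two */
>>
>> struct flow_entry {
>>         uint32_t hash;           /* flow hash of the last packet seen */
>>         uint16_t queue;          /* tx virtqueue the guest used for it */
>> };
>>
>> static struct flow_entry flow_tbl[FLOW_TBL_SIZE];
>>
>> /* on guest tx: remember which virtqueue this flow was sent from */
>> static void flow_record(uint32_t hash, uint16_t txq)
>> {
>>         flow_tbl[hash & (FLOW_TBL_SIZE - 1)].hash = hash;
>>         flow_tbl[hash & (FLOW_TBL_SIZE - 1)].queue = txq;
>> }
>>
>> /* on host->guest rx: steer the flow back to the same virtqueue,
>>  * falling back to plain hashing when we have no record of the flow */
>> static uint16_t flow_select(uint32_t hash, uint16_t nvqs)
>> {
>>         struct flow_entry *e = &flow_tbl[hash & (FLOW_TBL_SIZE - 1)];
>>
>>         return e->hash == hash ? e->queue : hash % nvqs;
>> }
>>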
>> Tests show some improvement when receiving packets from an external
>> host and when sending packets to the local host, but it hurts the
>> performance of sending packets to a remote host. This is not a perfect
>> solution, as it cannot handle the guest moving processes among vcpus;
>> I plan to try accelerated RFS and sharing the mapping between host and
>> guest.
>>
>> Anyway, this only covers receiving; small packet sending needs more
>> thought.
> I don't recollect small packet performance for guest->local host.
> Also, using multiple tun devices on the bridge (instead of mq-tun)
> keeps the rx/tx of a flow on a single vq. Then you can avoid mq-tun
> with its queue selector function, etc. Have you tried it?
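> To be concrete, that would mean one plain tap fd per vq pair, each tap
> then added to the bridge. Something like this per queue (an
> illustrative sketch only, not actual patchset code; the interface name
> format is made up):
>
> #include <fcntl.h>
> #include <stdio.h>
> #include <string.h>
> #include <unistd.h>
> #include <sys/ioctl.h>
> #include <linux/if.h>
> #include <linux/if_tun.h>
>
> /* open the idx-th tap; the caller enslaves it to the bridge */
> static int open_tap(int idx)
> {
>         struct ifreq ifr;
>         int fd = open("/dev/net/tun", O_RDWR);
>
>         if (fd < 0)
>                 return -1;
>         memset(&ifr, 0, sizeof(ifr));
>         snprintf(ifr.ifr_name, IFNAMSIZ, "vm0-tap%d", idx);
>         ifr.ifr_flags = IFF_TAP | IFF_NO_PI;
>         if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
>                 close(fd);
>                 return -1;
>         }
>         return fd;
> }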

I remember it worked when I tested your patchset earlier this year, but
I didn't measure its performance. If multiple tun devices were used,
the MAC address table would be updated very frequently and packets
could not be forwarded in parallel (unless we make the bridge support
multiqueue).

>
> I will run my tests this week and get back.
>
> thanks,
>
> - KK
>

