Message-ID: <4ECF06E1.8000206@redhat.com>
Date:	Fri, 25 Nov 2011 11:09:21 +0800
From:	Jason Wang <jasowang@...hat.com>
To:	"Michael S. Tsirkin" <mst@...hat.com>
CC:	Krishna Kumar <krkumar2@...ibm.com>, arnd@...db.de,
	netdev@...r.kernel.org, virtualization@...ts.linux-foundation.org,
	levinsasha928@...il.com, davem@...emloft.net
Subject: Re: [PATCH] macvtap: Fix macvtap_get_queue to use rxhash first

On 11/25/2011 12:14 AM, Michael S. Tsirkin wrote:
> On Thu, Nov 24, 2011 at 08:56:45PM +0800, jasowang wrote:
>> On 11/24/2011 06:34 PM, Michael S. Tsirkin wrote:
>>> On Thu, Nov 24, 2011 at 06:13:41PM +0800, jasowang wrote:
>>>> On 11/24/2011 05:59 PM, Michael S. Tsirkin wrote:
>>>>> On Thu, Nov 24, 2011 at 01:47:14PM +0530, Krishna Kumar wrote:
>>>>>> It was reported that the macvtap device selects a
>>>>>> different vhost (when used with the multiqueue feature)
>>>>>> for incoming packets of a single connection. Use the
>>>>>> packet hash first. Patch tested on MQ virtio_net.
>>>>> So this is sure to address the problem, but why exactly does it happen?
>>>> Ixgbe has a flow director and binds queues to host cpus, so it can
>>>> make sure the packets of a flow are handled by the same queue/cpu.
>>>> When the vhost thread moves from one host cpu to another, ixgbe will
>>>> therefore send the packets to the new cpu/queue.
>>> Confused. How does ixgbe know about the vhost thread moving?
>>
>> As far as I can see, ixgbe binds queues to physical cpus, so consider
>> the following:
>>
>> the vhost thread transmits packets of flow A on processor M
>> during packet transmission, the ixgbe driver programs the card to
>> deliver packets of flow A to queue/cpu M through the flow director
>> (see ixgbe_atr())
>> the vhost thread then receives packets of flow A from M
>> ...
>> the vhost thread transmits packets of flow A on processor N
>> the ixgbe driver programs the flow director to change the delivery of
>> flow A to queue N (cpu N)
>> the vhost thread then receives packets of flow A from N
>> ...
>>
>> So, for a single flow A, we may get different queue mappings. Using
>> the rxhash instead may solve this issue.
> Or better, transmit a single flow from a single vhost thread.

It already works that way: the tx queue is chosen based on the tx hash 
in the guest, but the vhost thread can still move among processors.
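
Roughly, that guest-side selection is just a stable hash over the flow,
something like the sketch below (simplified, with a hypothetical
flow_hash() helper; the real guest code uses skb_tx_hash() over the
packet headers):

/* Sketch only: flow_hash() stands in for the guest's real flow hash. */
static u16 guest_select_txq(struct sk_buff *skb, u16 num_txq)
{
        u32 hash = flow_hash(skb);      /* hypothetical helper */

        /* Same flow => same hash => same tx queue, regardless of
         * which vcpu the sending task runs on.
         */
        return (u16)(hash % num_txq);
}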

>
> If packets of a single flow get spread over different CPUs,
> they will get reordered and things are not going to work well.
>

The problem is that vhost does not handle TCP itself, but the ixgbe 
driver assumes the transmitting thread does, so the nic delivers 
packets of a single flow to different cpus whenever the vhost thread 
that does the transmission moves.
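
To illustrate, the flow director behaves roughly like this on every
sampled transmit (a sketch only; the struct and helper names below are
hypothetical, see ixgbe_atr() in the driver for the real code):

/* Sketch only: struct nic_hw, hash_5tuple() and hw_steer_rx_flow()
 * are hypothetical; the real logic lives in ixgbe_atr().
 */
static void flow_director_sample_tx(struct nic_hw *hw,
                                    struct sk_buff *skb, u16 txq)
{
        u32 flow_id = hash_5tuple(skb);         /* hypothetical */

        /* From now on, receives matching this flow are steered to
         * the rx queue paired with txq, i.e. to whatever cpu the
         * last transmit happened on.  When the transmitting vhost
         * thread migrates, the flow's rx queue silently follows it.
         */
        hw_steer_rx_flow(hw, flow_id, txq);     /* hypothetical */
}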

So, in conclusion, if we do not want to depend on features of the 
underlying nic, using the rxhash instead of queue mappings to identify 
a flow is the better choice.
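
Conceptually, the fix makes macvtap_get_queue() prefer the hash,
roughly like this (a simplified sketch of the patch; error handling
and the exact fallback ordering are abridged, see the actual patch
for details):

static struct macvtap_queue *macvtap_get_queue(struct net_device *dev,
                                               struct sk_buff *skb)
{
        struct macvlan_dev *vlan = netdev_priv(dev);
        unsigned int numvtaps = vlan->numvtaps;
        u32 rxq;

        if (!numvtaps)
                return NULL;

        /* Prefer the packet hash: it is a property of the flow
         * itself, so it stays stable even when the flow director
         * re-steers the flow to a different queue/cpu.
         */
        rxq = skb_get_rxhash(skb);
        if (rxq)
                return rcu_dereference(vlan->taps[rxq % numvtaps]);

        /* Only fall back to the recorded rx queue - the thing that
         * moves around - when no hash is available.
         */
        if (skb_rx_queue_recorded(skb))
                return rcu_dereference(vlan->taps[skb_get_rx_queue(skb) %
                                                  numvtaps]);

        return rcu_dereference(vlan->taps[0]);
}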

