Message-ID: <CAJSP0QUGWCf5WHEeCXzqZeF2CvpycxrGo-uPSfpWD1rWD3zeSg@mail.gmail.com>
Date:	Wed, 7 Dec 2011 09:08:57 +0000
From:	Stefan Hajnoczi <stefanha@...il.com>
To:	Jason Wang <jasowang@...hat.com>
Cc:	krkumar2@...ibm.com, kvm@...r.kernel.org, mst@...hat.com,
	netdev@...r.kernel.org, rusty@...tcorp.com.au,
	virtualization@...ts.linux-foundation.org, levinsasha928@...il.com,
	bhutchings@...arflare.com
Subject: Re: [net-next RFC PATCH 5/5] virtio-net: flow director support

On Wed, Dec 7, 2011 at 3:03 AM, Jason Wang <jasowang@...hat.com> wrote:
> On 12/06/2011 09:15 PM, Stefan Hajnoczi wrote:
>>
>> On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang<jasowang@...hat.com>  wrote:
>>>
>>> On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
>>>>
>>>> On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang<jasowang@...hat.com>
>>>>  wrote:
>>>>>
>>>>> On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
>>>>>>
>>>>>> On Mon, Dec 5, 2011 at 8:59 AM, Jason Wang<jasowang@...hat.com>
>>>>>>  wrote:
>>>>
>>>> The vcpus are just threads and may not be bound to physical CPUs, so
>>>> what is the big picture here?  Is the guest even in a position to
>>>> set the best queue mappings today?
>>>
>>>
>>> Not sure it could publish the best mapping, but the idea is to make sure
>>> the packets of a flow are handled by the same guest vcpu, and possibly the
>>> same vhost thread, in order to eliminate packet reordering and lock
>>> contention. But this assumption does not take into account the bouncing
>>> of vhost or vcpu threads, which would also affect the result.
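
(Illustration, not from the patch series: a minimal userspace sketch of the
kind of per-flow steering table described above, where a hash of the flow's
5-tuple picks the RX queue, and therefore the vcpu, that handles every packet
of that flow.  All names and sizes below are hypothetical.)

/* Toy model of flow-director style steering: hash the flow's 5-tuple
 * and look it up in a table that maps flows to RX queues, so every
 * packet of a flow lands on the same queue (and hence the same vcpu). */
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES  4     /* hypothetical number of RX queues */
#define TABLE_SIZE  256   /* hypothetical steering table size */

struct flow_key {
	uint32_t saddr, daddr;
	uint16_t sport, dport;
	uint8_t  proto;
};

/* steering_table[hash % TABLE_SIZE] holds the queue the guest asked for */
static uint8_t steering_table[TABLE_SIZE];

static uint32_t flow_hash(const struct flow_key *k)
{
	/* toy hash; stands in for the Toeplitz/jhash a real NIC would use */
	uint32_t h = k->saddr ^ k->daddr;

	h ^= ((uint32_t)k->sport << 16) | k->dport;
	h ^= k->proto;
	h ^= h >> 16;
	return h;
}

/* guest side: ask for a flow's packets to be delivered to a given queue */
static void steer_flow(const struct flow_key *k, uint8_t queue)
{
	steering_table[flow_hash(k) % TABLE_SIZE] = queue % NUM_QUEUES;
}

/* host side: pick the RX queue for an incoming packet of this flow */
static uint8_t pick_queue(const struct flow_key *k)
{
	return steering_table[flow_hash(k) % TABLE_SIZE];
}

int main(void)
{
	struct flow_key f = {
		.saddr = 0x0a000001, .daddr = 0x0a000002,
		.sport = 12345, .dport = 80, .proto = 6,
	};

	steer_flow(&f, 2);  /* e.g. the vcpu handling this flow is vcpu 2 */
	printf("flow steered to queue %u\n", pick_queue(&f));
	return 0;
}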
>>
>> Okay, this is why I'd like to know what the big picture here is.  What
>> solution are you proposing?  How are we going to have everything from the
>> guest application, guest kernel, host threads, and host NIC driver play
>> along so we get the right steering up the entire stack?  I think there
>> needs to be an answer to that before changing virtio-net to add any
>> steering mechanism.
>
>
> Considering the complexity of host NICs, each with their own steering
> features, this series makes a first step, with minimal effort, to let the
> guest driver and host tap/macvtap co-operate the way a physical NIC does.
> There may be other methods, but performance numbers are also needed to give
> the answer.
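
(For comparison, a physical NIC co-operates with the host stack through
accelerated RFS: when the stack notices which CPU is consuming a flow, it asks
the driver, through the ndo_rx_flow_steer hook, to re-steer that flow's RX
queue.  The sketch below is a simplified userspace model of that control path;
apart from the hook name it mimics, everything in it is invented for the
example.)

/* Simplified model of the accelerated-RFS control path on a physical
 * NIC: the "stack" notices which cpu consumes a flow and asks the
 * "driver" to steer that flow's RX queue accordingly.  The real hook
 * this mimics is ndo_rx_flow_steer(); the rest is made up here. */
#include <stdint.h>
#include <stdio.h>

struct mock_device {
	/* called when the stack sees a flow being read on a given cpu */
	int (*rx_flow_steer)(struct mock_device *dev,
			     uint32_t flow_hash, uint16_t cpu);
};

static int toy_rx_flow_steer(struct mock_device *dev,
			     uint32_t flow_hash, uint16_t cpu)
{
	(void)dev;  /* unused in this toy */
	/* a real driver would program its hardware flow table here */
	printf("flow %#x -> rx queue of cpu %u\n", flow_hash, cpu);
	return 0;
}

int main(void)
{
	struct mock_device dev = { .rx_flow_steer = toy_rx_flow_steer };

	/* the stack observed flow 0xbeef being consumed on cpu 2 */
	dev.rx_flow_steer(&dev, 0xbeef, 2);
	return 0;
}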

I agree that performance results for this need to be shown.

My original point is really that it's not a good idea to take
individual steps without a good big picture because this will change
the virtio-net device specification.  If this turns out to be a dead
end then hosts will need to continue to support the interface forever
(legacy guests could still try to use it).  So please first explain
what the full stack picture is going to look like and how you think it
will lead to better performance.  You don't need to have all the code
or evidence, but just enough explanation so we see where this is all
going.

Stefan
