Message-ID: <20090914154011.GB3556@redhat.com>
Date:	Mon, 14 Sep 2009 18:40:11 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Or Gerlitz <ogerlitz@...taire.com>
Cc:	David Miller <davem@...emloft.net>, netdev@...r.kernel.org,
	herbert@...dor.apana.org.au
Subject: Re: [PATCH RFC] tun: export underlying socket

On Mon, Sep 14, 2009 at 05:06:52PM +0300, Or Gerlitz wrote:
> Michael S. Tsirkin wrote:
>>> how would the use case with vhost look?
>> - Configure bridge and tun using existing scripts
>> - pass tun fd to vhost via an ioctl
>> - vhost calls tun_get_socket
>> - from this point, guest networking just goes faster
>
> let me see if I am with you:
>
> 1. vhost gets from user space, through an ioctl, a packet socket fd OR a
> tun fd - but never both

Right

> 2. for packet socket fd
> VM.TX is translated by vhost to sendmsg, which goes through the NIC.
> NIC RX makes poll on the fd signal; recvmsg is then called on the
> fd, and vhost places the packet in a virtq
>
> 3. for tun fd
> VM.TX is translated by vhost to sendmsg, which is translated by tun to
> netif_rx, which is then handled by the bridge.
> NIC RX goes to the bridge, which xmits the packet to a tun interface; now,
> what makes tun provide this packet to vhost, and how is it done?

Same as above. vhost polls tun and calls recvmsg on the socket.

>
>> A lot of people have asked for tun support in vhost, because qemu
>> currently uses tun.  With this scheme existing code and scripts can
>> be used to configure both tun and bridge.  You also can utilize
>> virtualization-specific features in tun.

(I broke too-long lines up; please do not merge them.)

> Tun has code to support some virtualization-specific features; however,  
> it also has some inherent problems, I think. For example, you don't know  
> over which NIC a packet will eventually be sent, and as such, feature  
> advertising to the guest (virtio-net) NIC is problematic - for example,  
> TSO. With vhost, since you are directly attached to a NIC, and assuming  
> it's a PF or VF NIC and not something like macvlan/veth, you can actually  
> know which features are supported by that NIC.
>
> Or.

Herbert addressed the TSO example.

Generally, feature negotiation does become more complicated in bridged
configurations, but some users require bridging. So with vhost, feature
negotiation is mostly done in userspace (e.g. vhost does not expose a
TSO capability; devices do this already); vhost itself only cares about
virtio features such as mergeable buffers.
Policy decisions, including whether to use packet socket or
tun+bridge, are up to the user.

-- 
MST
