Date:   Mon, 28 Nov 2016 15:24:36 +0000
From:   "Jorgen S. Hansen" <jhansen@...are.com>
To:     Stefan Hajnoczi <stefanha@...hat.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "imbrenda@...ux.vnet.ibm.com" <imbrenda@...ux.vnet.ibm.com>
Subject: Re: AF_VSOCK network namespace support

Hi Stefan,

> On Nov 23, 2016, at 3:55 PM, Stefan Hajnoczi <stefanha@...hat.com> wrote:
> 
> Hi Jorgen,
> There are two use cases where network namespace support in AF_VSOCK
> could be useful:
> 
> 1. Claudio Imbrenda pointed out that a machine cannot act as both host
>   and guest at the same time.  This is necessary for nested
>   virtualization.  Currently only one transport (the host side or the
>   guest side) can be registered at a time.

VMCI-based AF_VSOCK relies on the VMCI driver for nested virtualization support. The VMCI driver is a combined host/guest driver with a routing component that directs traffic either to VMs managed by the host "personality" of the driver or to the outer host. So any VMCI driver is able to function simultaneously as both a guest and a host driver, precisely so that nested virtualization can be supported.

Since, for VMCI-based vSockets, the host has a fixed CID (2), any traffic generated by an application inside a VM destined for CID 2 will be routed out of the VM (to the host, whether virtual or physical). Any traffic for a CID > 2 will be directed to VMs managed by the host personality of the VMCI driver.

Since VMCI predates nested virtualization, the solution above was partly a result of having to support existing configurations in a transparent way.

> 2. Users may wish to isolate the AF_VSOCK address namespace so that two
>   VMs have completely independent CID and ports (they could even use
>   the same CID and ports because they're in separate namespaces).  This
>   ensures that a host service visible to VM1 is not automatically
>   visible to VM2.

If the goal is to provide fine-grained service access control, won't this end up requiring a namespace per VM? For ESX, we have a mechanism to tag VMs that allows them to be granted access to a service offered through AF_VSOCK, but this is not part of the Linux hypervisor.

If the intent is to support multi-tenancy, then namespaces sound like a better fit. Also, in the multi-tenancy case, isolating the other address families is probably what you want as well.

> Network namespaces could solve both problems.
> 
> A drawback of namespaces is that existing configurations using network
> namespaces for IPv4/6 or other purposes break if AF_VSOCK gains network
> namespace support.  This is not a big problem for virtio-vsock if we
> implement namespace support soon since there are no existing users.
> 
> I wonder how other address families have solved this transition to
> network namespaces.  It's almost like we need fine-grained namespaces
> instead of a blanket network namespace that applies across all address
> families...
> 
> I'm playing around with the code now but wanted to get your thoughts in
> case you've already considered these problems.
> 
> Stefan

Thanks,
Jørgen
