Date:   Wed, 3 Jan 2018 16:09:43 +0000
From:   "Jorgen S. Hansen" <jhansen@...are.com>
To:     Stefan Hajnoczi <stefanha@...hat.com>
CC:     "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Dexuan Cui <decui@...rosoft.com>
Subject: Re: [PATCH 0/5] VSOCK: add vsock_test test suite


> On Jan 2, 2018, at 1:05 PM, Stefan Hajnoczi <stefanha@...hat.com> wrote:
> 
> On Wed, Dec 20, 2017 at 02:48:43PM +0000, Jorgen S. Hansen wrote:
>> 
>>> On Dec 13, 2017, at 3:49 PM, Stefan Hajnoczi <stefanha@...hat.com> wrote:
>>> 
>>> The vsock_diag.ko module already has a test suite but the core AF_VSOCK
>>> functionality has no tests.  This patch series adds several test cases that
>>> exercise AF_VSOCK SOCK_STREAM socket semantics (send/recv, connect/accept,
>>> half-closed connections, simultaneous connections).
>>> 
>>> The test suite is modest but I hope to cover additional cases in the future.
>>> My goal is to have a shared test suite so VMCI, Hyper-V, and KVM can ensure
>>> that our transports behave the same.
>>> 
>>> I have tested virtio-vsock.
>>> 
>>> Jorgen: Please test the VMCI transport and let me know if anything needs to be
>>> adjusted.  See tools/testing/vsock/README for information on how to run the
>>> test suite.
>>> 
>> 
>> I tried running the vsock_test on VMCI, and all the tests failed in one way or
>> another:
> 
> Great, thank you for testing and looking into the failures!
> 
>> 1) connection reset test: when the guest tries to connect to the host, we
>>  get EINVAL as the error instead of ECONNRESET. I’ll fix that.
> 
> Yay, the tests found a bug!
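
For reference, the semantics under test are roughly the following (a
minimal sketch, not the actual vsock_test code; the helper name is
mine, and cid/port are assumed to point at an endpoint with no
listener):

#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Expect ECONNRESET when connecting to a CID/port with no listener;
 * the VMCI transport currently fails the connect with EINVAL. */
static int check_connection_reset(unsigned int cid, unsigned int port)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = cid,
		.svm_port = port,
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	int err;

	if (fd < 0)
		return -1;
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
		close(fd);
		return -1; /* unexpected success */
	}
	err = errno;
	close(fd);
	return err == ECONNRESET ? 0 : -1;
}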
> 
>> 2) client close and server close tests: On the host side, VMCI doesn’t
>>  support reading data from a socket that has been closed by the
>>  guest. When the guest closes a connection, all data is gone, and
>>  we return EOF on the host side. So the tests that try to read data
>>  after close should not attempt that on the VMCI host side. I got the
>>  tests to pass by adding a getsockname call to determine whether
>>  the local CID was the host CID, and then skipping the read attempt
>>  in that case. We could add a VMCI flag that would enable
>>  this behavior.
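
The check could look something like this minimal sketch (the helper
name is mine; struct sockaddr_vm and VMADDR_CID_HOST come from
<linux/vm_sockets.h>):

#include <sys/socket.h>
#include <linux/vm_sockets.h>

/* Return non-zero if the socket's local CID is the host CID, i.e. we
 * are on the VMCI host side and should skip reads after the peer
 * closes. */
static int local_cid_is_host(int fd)
{
	struct sockaddr_vm addr;
	socklen_t len = sizeof(addr);

	if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0)
		return 0; /* be conservative on error */
	return addr.svm_cid == VMADDR_CID_HOST;
}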
> 
> Interesting behavior.  Is there a reason for disallowing half-closed
> sockets on the host side?

This is a consequence of the way the underlying VMCI queue pairs are
implemented. When the guest side closes a connection, it signals this
to the peer by detaching from the VMCI queue pair used for the data
transfer (the detach results in an event being generated on the
peer side). However, the VMCI queue pair is allocated as part of
guest memory, so when the guest detaches, that memory is reclaimed.
The host side would therefore need to copy the contents of the queue
pair into kernel memory as part of the detach operation. When this
was implemented, it was decided that it was better to avoid a
potentially large kernel memory allocation and the data copy at
detach time than to maintain the half-close behavior of INET.
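
To illustrate the difference: with INET-style half close, a reader can
still drain buffered data after the peer closes, while on the VMCI
host side the first read after the guest detaches already returns EOF.
A rough sketch (not actual test code):

#include <unistd.h>

/* Read until EOF after the peer has closed its end.  With INET half
 * close this drains whatever was buffered before read() returns 0;
 * on the VMCI host side the buffered data was reclaimed along with
 * the guest's queue pair, so read() returns 0 immediately and total
 * stays 0. */
static ssize_t drain_after_peer_close(int fd, char *buf, size_t len)
{
	ssize_t n, total = 0;

	while ((n = read(fd, buf, len)) > 0)
		total += n;
	return n < 0 ? n : total;
}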

> 
>> 5) multiple connections tests: with the standard socket sizes,
>>  VMCI is only able to support about 100 concurrent stream
>>  connections, so this test passes with MULTICONN_NFDS
>>  set to 100.
> 
> The magic number 1000 was chosen because many distros have a default
> file descriptor ulimit of 1024.  But it's an arbitrary number and we
> could lower it to 100.
> 
> Is this VMCI concurrent stream limit a denial of service vector?  Can an
> unprivileged guest userspace process open many sockets to prevent
> legitimate connections from other users within the same guest?

vSocket uses VMCI queue pairs for the stream, and the VMCI device
only allows a limited amount of memory to be used for queue pairs
per VM. So it is possible to exhaust this shared resource. The queue
pairs are created as part of the connection establishment process, so
it would require the user process to both create and connect the sockets
to a host-side endpoint (connections between guest processes will not
allocate VMCI queue pairs).
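
As a rough sketch of the scenario (the port is a placeholder for any
host-side service that accepts connections):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,
		.svm_port = 1234, /* placeholder: a listening host service */
	};
	int n;

	/* Each connected stream allocates a VMCI queue pair from the
	 * per-VM budget; sockets are deliberately left open. */
	for (n = 0; n < 4096; n++) {
		int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

		if (fd < 0 ||
		    connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
			break;
	}
	printf("held %d concurrent connections\n", n);
	return 0;
}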

Thanks,
Jorgen
