Message-ID: <20200213113949.GA544499@stefanha-x1.localdomain>
Date: Thu, 13 Feb 2020 11:39:49 +0000
From: Stefan Hajnoczi <stefanha@...hat.com>
To: "Boeuf, Sebastien" <sebastien.boeuf@...el.com>
Cc: "sgarzare@...hat.com" <sgarzare@...hat.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>
Subject: Re: [PATCH] net: virtio_vsock: Fix race condition between bind and listen
On Thu, Feb 13, 2020 at 10:44:18AM +0000, Boeuf, Sebastien wrote:
> On Thu, 2020-02-13 at 11:22 +0100, Stefano Garzarella wrote:
> > On Thu, Feb 13, 2020 at 09:51:36AM +0000, Boeuf, Sebastien wrote:
> > > Hi Stefano,
> > >
> > > On Thu, 2020-02-13 at 10:41 +0100, Stefano Garzarella wrote:
> > > > Hi Sebastien,
> > > >
> > > > On Thu, Feb 13, 2020 at 09:16:11AM +0000, Boeuf, Sebastien wrote:
> > > > > From 2f1276d02f5a12d85aec5adc11dfe1eab7e160d6 Mon Sep 17
> > > > > 00:00:00
> > > > > 2001
> > > > > From: Sebastien Boeuf <sebastien.boeuf@...el.com>
> > > > > Date: Thu, 13 Feb 2020 08:50:38 +0100
> > > > > Subject: [PATCH] net: virtio_vsock: Fix race condition between
> > > > > bind
> > > > > and listen
> > > > >
> > > > > Whenever the vsock backend on the host sends a packet through
> > > > > the
> > > > > RX
> > > > > queue, it expects an answer on the TX queue. Unfortunately,
> > > > > there
> > > > > is one
> > > > > case where the host side will hang waiting for the answer and
> > > > > will
> > > > > effectively never recover.
> > > >
> > > > Do you have a test case?
> > >
> > > Yes, I do. This is a bug we've been investigating in Kata Containers
> > > for quite some time now. It was happening when using Kata along with
> > > Cloud-Hypervisor (which relies on the hybrid vsock implementation
> > > from Firecracker). The thing is, this bug is very hard to reproduce,
> > > and it was happening for Kata because of its connection strategy:
> > > the kata-runtime tries to connect a million times after it has
> > > started the VM, hoping the kata-agent will start listening on the
> > > guest side at some point.
> >
> > Maybe it is related to something else. I tried the following, which
> > should be your simplified case (IIUC):
> >
> > guest$ python
> > import socket
> > s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
> > s.bind((socket.VMADDR_CID_ANY, 1234))
> >
> > host$ python
> > import socket
> > s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
> > s.connect((3, 1234))
> >
> > Traceback (most recent call last):
> > File "<stdin>", line 1, in <module>
> > TimeoutError: [Errno 110] Connection timed out
>
> Yes, this is exactly the simplified case. But that's the point: I don't
> think the timeout is the best way to go here, because it means that when
> we run into this case, the host side will wait quite some time before
> retrying, which can cause a very long delay before communication with
> the guest is established. By simply answering the host with an RST
> packet, we inform it that nobody is listening on the guest side yet, so
> the host side will close the connection and try again.
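For reference, the retry pattern described above looks roughly like the
sketch below (not the actual kata-runtime code; the CID, port, and helper
name are illustrative):

import socket
import time

GUEST_CID = 3   # assumption: the guest CID from the reproduction above
PORT = 1234     # assumption: the port from the reproduction above

def connect_with_retry(attempts=1000, delay=0.01):
    """Keep connecting until the guest agent starts listening."""
    last_error = None
    for _ in range(attempts):
        s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        try:
            s.connect((GUEST_CID, PORT))
            return s  # connected: the guest is listening now
        except (ConnectionResetError, ConnectionRefusedError) as e:
            # The guest answered right away (e.g. with a reset), so
            # retrying after a short delay is cheap.
            last_error = e
            s.close()
            time.sleep(delay)
        except TimeoutError as e:
            # No answer at all: each attempt blocks for the full connect
            # timeout, which is where the long stall comes from.
            last_error = e
            s.close()
    raise last_error or TimeoutError("guest never started listening")
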
My expectation is that TCP/IP will produce ECONNREFUSED in this case but
I haven't checked. Timing out is weird behavior.
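The TCP/IP side of this is easy to check with a quick sketch like the one
below (bind without listen over loopback; on Linux I would expect the
connect() to fail immediately with ECONNREFUSED rather than hang):

import socket

# "Server" side: bind a port but deliberately never call listen(),
# mirroring the vsock reproduction above.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

# Client side: with no listener on the port the kernel answers the SYN
# with an RST, so connect() should fail right away instead of timing out.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    cli.connect(("127.0.0.1", port))
except ConnectionRefusedError as e:
    print(e)  # expected: [Errno 111] Connection refused
finally:
    cli.close()
    srv.close()
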
In any case, the reference for virtio-vsock semantics is:
1. How does VMCI (VMware) vsock behave? We strive to be compatible with
the VMCI transport.
2. If there is no clear VMCI behavior, then we look at TCP/IP because
those semantics are expected by most applications.
This bug needs a test case in tools/testing/vsock/ and that test case
will run against VMCI, virtio-vsock, and Hyper-V. Doing that will
answer the question of how VMCI handles this case.
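The scenario such a test case needs to cover can be outlined as follows
(a rough Python outline only; the real test belongs in tools/testing/vsock/
with its existing C harness, and the CID/port values are illustrative):

import socket

PORT = 1234     # illustrative port
PEER_CID = 3    # illustrative guest CID; a real test gets this from its harness

def guest_side():
    # Bind a vsock socket but deliberately do not call listen().
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.bind((socket.VMADDR_CID_ANY, PORT))
    return s    # keep the socket open so the port stays bound

def host_side():
    # Expected result (to be confirmed against VMCI): the connection
    # attempt fails immediately instead of hanging until ETIMEDOUT.
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    try:
        s.connect((PEER_CID, PORT))
        return False  # unexpected: connected without a listener
    except (ConnectionResetError, ConnectionRefusedError):
        return True   # peer answered right away
    except TimeoutError:
        return False  # the hang this thread is about
    finally:
        s.close()
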
Stefan