Message-ID: <CALeUXe5ZCfJcHPK98xBcd=NHkZGfc_SMjg-unffbvn+yeKf5qw@mail.gmail.com>
Date:   Thu, 10 Mar 2022 21:57:28 +0900
From:   Jiyong Park <jiyong@...gle.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Stefan Hajnoczi <stefanha@...hat.com>,
        Stefano Garzarella <sgarzare@...hat.com>,
        Jason Wang <jasowang@...hat.com>,
        "David S. Miller" <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>, adelva@...gle.com,
        kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 1/2] vsock: each transport cycles only on its own sockets

My bad. I mistakenly omitted the To and Cc recipients for the cover letter. Fixed.

On Thu, Mar 10, 2022 at 9:55 PM Michael S. Tsirkin <mst@...hat.com> wrote:
>
> On Thu, Mar 10, 2022 at 07:53:25AM -0500, Michael S. Tsirkin wrote:
> > This message had
> > In-Reply-To: <20220310124936.4179591-1-jiyong@...gle.com>
> > in its header, but 20220310124936.4179591-1-jiyong@...gle.com was
> > not sent to the list.
> > Please don't do that. Instead, please write and send a proper
> > cover letter. Thanks!
> >
>
>
> Also, please put the version in the subject, e.g. PATCH v2, and include
> the full changelog in the cover letter. Thanks!
>
> > On Thu, Mar 10, 2022 at 09:49:35PM +0900, Jiyong Park wrote:
> > > When iterating over sockets using vsock_for_each_connected_socket, make
> > > sure that a transport filters out sockets that don't belong to the
> > > transport.
> > >
> > > There actually was an issue caused by this; in a nested VM
> > > configuration, destroying the nested VM (which often involves
> > > closing /dev/vhost-vsock if there were h2g connections to the nested
> > > VM) kills not only the h2g connections, but also all existing g2h
> > > connections to the (outermost) host, which are totally unrelated.
> > >
> > > Tested: Executed the following steps on Cuttlefish (Android running on a
> > > VM) [1]: (1) enter an `adb shell` session, to establish a g2h
> > > connection inside the VM; (2) open and then close /dev/vhost-vsock via
> > > `exec 3< /dev/vhost-vsock && exec 3<&-`; (3) observe that the adb
> > > session is not reset.
> > >
> > > [1] https://android.googlesource.com/device/google/cuttlefish/
> > >
> > > Fixes: c0cfa2d8a788 ("vsock: add multi-transports support")
> > > Signed-off-by: Jiyong Park <jiyong@...gle.com>
> > > ---
> > >  drivers/vhost/vsock.c            | 4 ++++
> > >  net/vmw_vsock/virtio_transport.c | 7 +++++++
> > >  net/vmw_vsock/vmci_transport.c   | 5 +++++
> > >  3 files changed, 16 insertions(+)
> > >
> > > diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> > > index 37f0b4274113..853ddac00d5b 100644
> > > --- a/drivers/vhost/vsock.c
> > > +++ b/drivers/vhost/vsock.c
> > > @@ -722,6 +722,10 @@ static void vhost_vsock_reset_orphans(struct sock *sk)
> > >      * executing.
> > >      */
> > >
> > > +   /* Only handle our own sockets */
> > > +   if (vsk->transport != &vhost_transport.transport)
> > > +           return;
> > > +
> > >     /* If the peer is still valid, no need to reset connection */
> > >     if (vhost_vsock_get(vsk->remote_addr.svm_cid))
> > >             return;
> > > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > > index fb3302fff627..61b24eb31d4b 100644
> > > --- a/net/vmw_vsock/virtio_transport.c
> > > +++ b/net/vmw_vsock/virtio_transport.c
> > > @@ -24,6 +24,7 @@
> > >  static struct workqueue_struct *virtio_vsock_workqueue;
> > >  static struct virtio_vsock __rcu *the_virtio_vsock;
> > >  static DEFINE_MUTEX(the_virtio_vsock_mutex); /* protects the_virtio_vsock */
> > > +static struct virtio_transport virtio_transport; /* forward declaration */
> > >
> > >  struct virtio_vsock {
> > >     struct virtio_device *vdev;
> > > @@ -357,11 +358,17 @@ static void virtio_vsock_event_fill(struct virtio_vsock *vsock)
> > >
> > >  static void virtio_vsock_reset_sock(struct sock *sk)
> > >  {
> > > +   struct vsock_sock *vsk = vsock_sk(sk);
> > > +
> > >     /* vmci_transport.c doesn't take sk_lock here either.  At least we're
> > >      * under vsock_table_lock so the sock cannot disappear while we're
> > >      * executing.
> > >      */
> > >
> > > +   /* Only handle our own sockets */
> > > +   if (vsk->transport != &virtio_transport.transport)
> > > +           return;
> > > +
> > >     sk->sk_state = TCP_CLOSE;
> > >     sk->sk_err = ECONNRESET;
> > >     sk_error_report(sk);
> > > diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
> > > index 7aef34e32bdf..cd2f01513fae 100644
> > > --- a/net/vmw_vsock/vmci_transport.c
> > > +++ b/net/vmw_vsock/vmci_transport.c
> > > @@ -803,6 +803,11 @@ static void vmci_transport_handle_detach(struct sock *sk)
> > >     struct vsock_sock *vsk;
> > >
> > >     vsk = vsock_sk(sk);
> > > +
> > > +   /* Only handle our own sockets */
> > > +   if (vsk->transport != &vmci_transport)
> > > +           return;
> > > +
> > >     if (!vmci_handle_is_invalid(vmci_trans(vsk)->qp_handle)) {
> > >             sock_set_flag(sk, SOCK_DONE);
> > >
> > > --
> > > 2.35.1.723.g4982287a31-goog
>
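The idea behind the patch, as a minimal user-space sketch: vsock keeps a
single table of connected sockets shared by every transport (the
vsock_connected_table in af_vsock.c), and vsock_for_each_connected_socket()
walks all of it, so each transport's callback has to skip sockets owned by
another transport. Everything below is illustrative (simplified types,
made-up helper names modeled on the patch), not the kernel's actual API:

    #include <stddef.h>
    #include <stdio.h>

    struct transport { const char *name; };

    struct vsock_sock {
            const struct transport *transport; /* owner, like vsk->transport */
            const char *desc;
    };

    static const struct transport vhost_transport  = { "vhost (h2g)"  };
    static const struct transport virtio_transport = { "virtio (g2h)" };

    /* One global table of connected sockets, shared by all transports. */
    static struct vsock_sock connected_table[] = {
            { &vhost_transport,  "h2g connection to the nested VM" },
            { &virtio_transport, "g2h connection to the outer host" },
    };

    /* Simplified stand-in for vsock_for_each_connected_socket():
     * the walk itself is transport-agnostic. */
    static void for_each_connected_socket(void (*fn)(struct vsock_sock *))
    {
            for (size_t i = 0;
                 i < sizeof(connected_table) / sizeof(connected_table[0]); i++)
                    fn(&connected_table[i]);
    }

    /* What the vhost callback does once the patch is applied. */
    static void vhost_vsock_reset_orphans(struct vsock_sock *vsk)
    {
            /* Only handle our own sockets -- the guard the patch adds. */
            if (vsk->transport != &vhost_transport)
                    return;

            printf("reset: %s\n", vsk->desc);
    }

    int main(void)
    {
            /* Closing /dev/vhost-vsock walks *every* connected socket;
             * the guard keeps the walk from touching the g2h socket. */
            for_each_connected_socket(vhost_vsock_reset_orphans);
            return 0;
    }

Running this prints only "reset: h2g connection to the nested VM". With the
guard removed, the same walk would reset the g2h entry as well, which is
exactly the breakage described in the commit message.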
