Message-ID: <31d2369255894515bd040d22452d9df8@SIXPR30MB031.064d.mgd.msft.net>
Date: Wed, 8 Jul 2015 03:56:52 +0000
From: Dexuan Cui <decui@...rosoft.com>
To: Stephen Hemminger <stephen@...workplumber.org>
CC: "gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"driverdev-devel@...uxdriverproject.org"
<driverdev-devel@...uxdriverproject.org>,
"olaf@...fle.de" <olaf@...fle.de>,
"apw@...onical.com" <apw@...onical.com>,
"jasowang@...hat.com" <jasowang@...hat.com>,
KY Srinivasan <kys@...rosoft.com>,
Haiyang Zhang <haiyangz@...rosoft.com>
Subject: RE: [PATCH 6/7] hvsock: introduce Hyper-V VM Sockets feature
> -----Original Message-----
> From: Stephen Hemminger
> Sent: Wednesday, July 8, 2015 2:31
> Subject: Re: [PATCH 6/7] hvsock: introduce Hyper-V VM Sockets feature
>
> On Mon, 6 Jul 2015 07:47:29 -0700
> Dexuan Cui <decui@...rosoft.com> wrote:
>
> > Hyper-V VM Sockets (hvsock) supplies a byte-stream-based communication
> > mechanism between the host and a guest. It's a kind of TCP over VMBus, but
> > the transport layer (VMBus) is much simpler than IP. With Hyper-V VM
> > Sockets, applications on the host and in a guest can talk to each
> > other directly via the traditional BSD-style socket APIs.
> >
> > Hyper-V VM Sockets is only available on Windows 10 hosts and later. This
> > patch implements the necessary support on the guest side by introducing
> > a new socket address family, AF_HYPERV.
> >
> > Signed-off-by: Dexuan Cui <decui@...rosoft.com>
>
> Is there any chance that AF_VSOCK could be used with different transports
> for VMware and Hyper-V? It would be better to make guest applications
> host-independent.
Hi Stephen,
Thanks for the question. I tried to do that (since AF_HYPERV and AF_VSOCK
are conceptually similar), but I found it would be impractical; I listed the
reasons in the cover letter of the patchset:
https://lkml.org/lkml/2015/7/6/431
IMO the biggest difference is the size of the endpoint (u128 vs. u32):
<u32 ContextID, u32 Port> in AF_VSOCK
vs.
<u128 GUID_VM_ID, u128 GUID_ServiceID> in AF_HYPERV.
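To illustrate the layouts: sockaddr_vm below mirrors the current AF_VSOCK
definition, while the sockaddr_hv field names are only illustrative (please
see the patch for the real ones):

#include <sys/socket.h>

struct sockaddr_vm {                       /* AF_VSOCK: 16 bytes in total */
        sa_family_t    svm_family;         /* AF_VSOCK */
        unsigned short svm_reserved1;
        unsigned int   svm_port;           /* u32 port */
        unsigned int   svm_cid;            /* u32 context ID */
        unsigned char  svm_zero[4];        /* pad to sizeof(struct sockaddr) */
};

struct sockaddr_hv {                       /* AF_HYPERV: 36 bytes in total */
        sa_family_t    shv_family;         /* AF_HYPERV */
        unsigned short shv_reserved;
        unsigned char  shv_vm_id[16];      /* u128 GUID of the VM */
        unsigned char  shv_service_id[16]; /* u128 GUID of the service */
};

Note that sockaddr_hv doesn't even fit in the 16 bytes of a generic
struct sockaddr, while sockaddr_vm was laid out to match it exactly.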
In the current code of AF_VSOCK and the related transport layer (the wrapper
ops around VMware's VMCI), that endpoint size is baked into "struct
sockaddr_vm" (this struct is also exported to user space).
So, in any case, a user-space application has to handle the different
endpoint sizes explicitly.
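Roughly, a guest application would have to do something like this with
AF_HYPERV (the numeric value of AF_HYPERV and the helper below are made up
for illustration; the structs are the sketches above):

#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define AF_HYPERV 42    /* invented value, for illustration only */

/* Connect to <vm_id, service_id>; plain read()/write() afterwards. */
int hv_connect(const unsigned char vm_id[16],
               const unsigned char service_id[16])
{
        struct sockaddr_hv sa;  /* 36 bytes: doesn't fit in a sockaddr */
        int fd = socket(AF_HYPERV, SOCK_STREAM, 0);

        if (fd < 0)
                return -1;
        memset(&sa, 0, sizeof(sa));
        sa.shv_family = AF_HYPERV;
        memcpy(sa.shv_vm_id, vm_id, 16);           /* u128, not a u32 CID */
        memcpy(sa.shv_service_id, service_id, 16); /* u128, not a u32 port */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                close(fd);
                return -1;
        }
        return fd;
}

With AF_VSOCK, the same code would instead fill a 16-byte sockaddr_vm with
a u32 CID and a u32 port, so the address-handling code can't be shared as-is.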
And on the driver side, I'm afraid there is no way to directly reuse the
AF_VSOCK code with only trivial changes :-( , because we would have to make
the AF_VSOCK code aware of the actual sockaddr type (sockaddr_vm or
sockaddr_hv? The two structs have different layouts and different field
names) at runtime and behave differently. This would make the code a mess, IMO.
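For example, every place the shared core parses an address would need
dispatch like the sketch below (common_endpoint and addr_cast are invented
names, not from the patch; the structs are the sketches above):

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>

struct common_endpoint {
        bool is_guid;                   /* u32 pair or u128 pair? */
        union {
                struct { unsigned int cid, port; } cid_port;      /* AF_VSOCK */
                struct { unsigned char vm_id[16];
                         unsigned char service_id[16]; } guid;    /* AF_HYPERV */
        } u;
};

static int addr_cast(const struct sockaddr *addr, size_t len,
                     struct common_endpoint *ep)
{
        switch (addr->sa_family) {
        case AF_VSOCK: {
                const struct sockaddr_vm *vm = (const void *)addr;

                if (len < sizeof(*vm))
                        return -EINVAL;
                ep->is_guid = false;
                ep->u.cid_port.cid  = vm->svm_cid;
                ep->u.cid_port.port = vm->svm_port;
                break;
        }
        case AF_HYPERV: {
                const struct sockaddr_hv *hv = (const void *)addr;

                if (len < sizeof(*hv))
                        return -EINVAL;
                ep->is_guid = true;
                memcpy(ep->u.guid.vm_id, hv->shv_vm_id, 16);
                memcpy(ep->u.guid.service_id, hv->shv_service_id, 16);
                break;
        }
        default:
                return -EAFNOSUPPORT;
        }
        return 0;       /* ...and every caller must branch on is_guid again */
}

And the same branching would spread into bind, connect, the bound/connected
socket lookup tables, etc.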
That's why I think it would be better to introduce a new address family.
Thanks,
-- Dexuan