Message-ID: <BLUPR03MB1410BB9D04D0758840F6685CBF4A0@BLUPR03MB1410.namprd03.prod.outlook.com>
Date:	Thu, 19 May 2016 00:59:09 +0000
From:	Dexuan Cui <decui@...rosoft.com>
To:	David Miller <davem@...emloft.net>,
	KY Srinivasan <kys@...rosoft.com>
CC:	"olaf@...fle.de" <olaf@...fle.de>,
	"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
	"jasowang@...hat.com" <jasowang@...hat.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"joe@...ches.com" <joe@...ches.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"apw@...onical.com" <apw@...onical.com>,
	"devel@...uxdriverproject.org" <devel@...uxdriverproject.org>,
	Haiyang Zhang <haiyangz@...rosoft.com>
Subject: RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)

> From: devel [mailto:driverdev-devel-bounces@...uxdriverproject.org] On Behalf
> Of Dexuan Cui
> Sent: Tuesday, May 17, 2016 10:46
> To: David Miller <davem@...emloft.net>
> Cc: olaf@...fle.de; gregkh@...uxfoundation.org; jasowang@...hat.com;
> linux-kernel@...r.kernel.org; joe@...ches.com; netdev@...r.kernel.org;
> apw@...onical.com; devel@...uxdriverproject.org; Haiyang Zhang
> <haiyangz@...rosoft.com>
> Subject: RE: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> 
> > From: David Miller [mailto:davem@...emloft.net]
> > Sent: Monday, May 16, 2016 1:16
> > To: Dexuan Cui <decui@...rosoft.com>
> > Cc: gregkh@...uxfoundation.org; netdev@...r.kernel.org; linux-
> > kernel@...r.kernel.org; devel@...uxdriverproject.org; olaf@...fle.de;
> > apw@...onical.com; jasowang@...hat.com; cavery@...hat.com; KY
> > Srinivasan <kys@...rosoft.com>; Haiyang Zhang <haiyangz@...rosoft.com>;
> > joe@...ches.com; vkuznets@...hat.com
> > Subject: Re: [PATCH v11 net-next 0/1] introduce Hyper-V VM Sockets(hv_sock)
> >
> > From: Dexuan Cui <decui@...rosoft.com>
> > Date: Sun, 15 May 2016 09:52:42 -0700
> >
> > > Changes since v10
> > >
> > > 1) add module params: send_ring_page, recv_ring_page. They can be used to
> > > enlarge the ring buffer size to get better performance, e.g.,
> > > # modprobe hv_sock  recv_ring_page=16 send_ring_page=16
> > > By default, recv_ring_page is 3 and send_ring_page is 2.
> > >
> > > 2) add module param max_socket_number (the default is 1024).
> > > A user can enlarge the number to create more than 1024 hv_sock sockets.
> > > By default, 1024 sockets take about 1024 * (3+2+1+1) * 4KB = 28M bytes.
> > > (Here 1+1 means 1 page for send/recv buffers per connection, respectively.)
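> > >
> > > (For illustration only, not the exact patch code: kernel module
> > > parameters like these would typically be declared roughly as below.)
> > >
> > >   #include <linux/module.h>
> > >
> > >   static unsigned int recv_ring_page = 3;   /* pages per receive ring */
> > >   static unsigned int send_ring_page = 2;   /* pages per send ring    */
> > >   static unsigned int max_socket_number = 1024;
> > >
> > >   /* 0444: readable in sysfs, settable only on the modprobe command line. */
> > >   module_param(recv_ring_page, uint, 0444);
> > >   MODULE_PARM_DESC(recv_ring_page, "Pages per receive ring buffer");
> > >   module_param(send_ring_page, uint, 0444);
> > >   MODULE_PARM_DESC(send_ring_page, "Pages per send ring buffer");
> > >   module_param(max_socket_number, uint, 0444);
> > >   MODULE_PARM_DESC(max_socket_number, "Max number of hv_sock sockets");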
> >
> > This is papering over my objections, and creates module parameters which
> > I am fundamentally against.
> >
> > You're making the facility unusable by default, just to work around my
> > memory consumption concerns.
> >
> > What will end up happening is that everyone will simply increase the
> > values.
> >
> > You're not really addressing the core issue, and I will be ignoring your
> > future submissions of this change until you do.
> 
> David,
> I am sorry I came across as ignoring your feedback; that was not my intention.
> The current host side design for this feature is such that each socket connection
> needs its own channel, which consists of
> 
> 1.    A ring buffer for host to guest communication
> 2.    A ring buffer for guest to host communication
> 
> The memory for the ring buffers has to be pinned down, as it will be accessed
> both at interrupt level in the Linux guest and by the host OS at any time.
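>
> (A rough sketch of the per-connection state described above; the struct and
> field names here are illustrative, not the actual hv_sock definitions:)
>
>   /* One Hyper-V channel per hv_sock connection: two pinned ring buffers. */
>   struct hvsock_channel_sketch {
>           void *recv_ring;   /* host -> guest, recv_ring_page pages */
>           void *send_ring;   /* guest -> host, send_ring_page pages */
>           /*
>            * Both rings stay pinned for the lifetime of the connection,
>            * because the guest ISR and the host may touch them at any time.
>            */
>   };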
> 
> To address your concerns, I am planning to re-implement both the receive path
> and the send path so that no additional pinned memory will be needed.
> 
> Receive Path:
> When the application does a read on the socket, we will dynamically allocate
> a buffer and perform the read from the incoming ring buffer. Since we will be
> in process context, we can sleep here and will use the
> "GFP_KERNEL | __GFP_NOFAIL" flags. This buffer will be freed once the
> application has consumed all the data.
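>
> (A minimal sketch of that idea; hvsock_copy_from_ring() is a hypothetical
> helper, while kmalloc/copy_to_iter/kfree are the real kernel APIs assumed:)
>
>   /* In recvmsg(), i.e. process context, so a sleeping allocation is fine. */
>   buf = kmalloc(payload_len, GFP_KERNEL | __GFP_NOFAIL);
>   hvsock_copy_from_ring(recv_ring, buf, payload_len);    /* hypothetical  */
>   copy_to_iter(buf, payload_len, &msg->msg_iter);        /* to user space */
>   kfree(buf);   /* freed once the application has consumed the data */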
> 
> Send Path:
> On the send side, we will construct the payload to be sent directly in the
> outgoing ring buffer.
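>
> (Again only a sketch, with hypothetical hvsock_ring_* helper names, to show
> that no extra staging buffer is involved on the send side:)
>
>   /* Reserve space in the outgoing ring and build the payload in place. */
>   desc = hvsock_ring_reserve(send_ring, payload_len);    /* hypothetical */
>   copy_from_iter(desc->data, payload_len, &msg->msg_iter);
>   hvsock_ring_commit(send_ring, desc);                   /* hypothetical */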
> 
> So, with these changes, the only memory that remains pinned will be the ring
> buffers themselves, on a per-connection basis, and that memory stays pinned
> only until the connection is torn down.
> 
> Please let me know if this addresses your concerns.
> 
> -- Dexuan

Hi David,
Ping. I would really appreciate your comments.

Thanks,
-- Dexuan
