Message-ID: <20131204163328.GE30874@nicira.com>
Date: Wed, 4 Dec 2013 08:33:28 -0800
From: Ben Pfaff <blp@...ira.com>
To: Thomas Graf <tgraf@...hat.com>
Cc: jesse@...ira.com, dev@...nvswitch.org, netdev@...r.kernel.org,
dborkman@...hat.com, ffusco@...hat.com, fleitner@...hat.com,
xiyou.wangcong@...il.com
Subject: Re: [PATCH openvswitch v3] netlink: Implement & enable memory mapped
netlink i/o
On Tue, Dec 03, 2013 at 12:19:02PM +0100, Thomas Graf wrote:
> Based on the initial patch by Cong Wang posted a couple of months
> ago.
>
> This is the user space counterpart needed for the kernel patch
> '[PATCH net-next 3/8] openvswitch: Enable memory mapped Netlink i/o'
>
> Allows the kernel to construct Netlink messages on memory mapped
> buffers and thus avoids copying. The functionality is enabled on
> sockets used for unicast traffic.
>
> Further optimizations are possible by avoiding the copy into the
> ofpbuf after reading.
>
> Signed-off-by: Thomas Graf <tgraf@...hat.com>
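
For readers following along, here is a minimal sketch of what setting up
such rings looks like from user space, per
Documentation/networking/netlink_mmap.txt.  The helper name and the nm_*
sizing are illustrative only (4 kB blocks, 4 MB per ring), not necessarily
what this patch configures:

    #include <sys/mman.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    static int setup_rings(int fd)
    {
        struct nl_mmap_req req = {
            .nm_block_size = 4096,  /* one 4 kB page per block (illustrative) */
            .nm_block_nr   = 1024,  /* 4 MB per ring */
            .nm_frame_size = 2048,  /* two frames per block */
            .nm_frame_nr   = 2048,  /* block_nr * (block_size / frame_size) */
        };
        size_t ring_size = (size_t) req.nm_block_size * req.nm_block_nr;
        void *rings;

        if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof req) < 0
            || setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof req) < 0)
            return -1;

        /* A single mapping covers both rings: RX frames first, then TX.
         * Each frame begins with a struct nl_mmap_hdr whose nm_status
         * field (NL_MMAP_STATUS_UNUSED, _VALID, ...) passes ownership
         * between kernel and user space without copying. */
        rings = mmap(NULL, 2 * ring_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
        return rings == MAP_FAILED ? -1 : 0;
    }

With numbers like those, each socket maps 8 MB, which is where my figure
below comes from.
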
If I'm doing the calculations correctly, this mmaps 8 MB per ring-based
Netlink socket on a system with 4 kB pages. OVS currently creates one
Netlink socket for each datapath port. With 1000 ports (a moderate
number; we sometimes test with more), that is 8 GB of address space. On
a 32-bit architecture that is impossible. On a 64-bit architecture it
is possible, but it may pin an actual 8 GB of RAM: OVS often runs with
mlockall() since it is something of a soft real-time system (users
don't want packet delivery delayed while data is paged back in).
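
Spelling out the arithmetic (assuming 4 MB RX + 4 MB TX rings per
socket, as in the sketch above):

    per socket:  4 MB (RX ring) + 4 MB (TX ring) =    8 MB mapped
    1000 ports:  1000 sockets * 8 MB             = 8000 MB, i.e. ~8 GB
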
Do you have any thoughts about this issue?