Message-ID: <529F6475.3090903@redhat.com>
Date: Wed, 04 Dec 2013 18:20:53 +0100
From: Thomas Graf <tgraf@...hat.com>
To: Ben Pfaff <blp@...ira.com>
CC: jesse@...ira.com, dev@...nvswitch.org, netdev@...r.kernel.org,
dborkman@...hat.com, ffusco@...hat.com, fleitner@...hat.com,
xiyou.wangcong@...il.com
Subject: Re: [PATCH openvswitch v3] netlink: Implement & enable memory mapped netlink i/o

On 12/04/2013 05:33 PM, Ben Pfaff wrote:
> If I'm doing the calculations correctly, this mmaps 8 MB per ring-based
> Netlink socket on a system with 4 kB pages. OVS currently creates one
> Netlink socket for each datapath port. With 1000 ports (a moderate
> number; we sometimes test with more), that is 8 GB of address space. On
> a 32-bit architecture that is impossible. On a 64-bit architecture it
> is possible but it may reserve an actual 8 GB of RAM: OVS often runs
> with mlockall() since it is something of a soft real-time system (users
> don't want their packet delivery delayed to page data back in).
>
> Do you have any thoughts about this issue?
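For reference, one ring geometry that reproduces the 8 MB per-socket figure
quoted above (illustrative numbers only; the patch's actual defaults may
differ):

    per ring:    block_size * block_nr = 16 kB * 256 = 4 MB
    per socket:  RX ring + TX ring     = 2 * 4 MB    = 8 MB
    1000 ports:  1000 * 8 MB           = 8 GB of mapped address space
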
That's certainly a problem. I had the impression that the changes allowing
multiple bridges to be consolidated onto a single DP would minimize the
number of DPs in use.
How about we limit the number of mmapped sockets to a configurable
maximum that defaults to 16 or 32?
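
Roughly what I have in mind, as an untested sketch: request the RX/TX rings
only while under a configurable cap and silently fall back to ordinary
copy-mode sendmsg()/recvmsg() once it is reached. The "max_mmap_sockets"
knob and the ring geometry are placeholders; only the
NETLINK_RX_RING/NETLINK_TX_RING setsockopts and the mmap() of the rings are
the actual kernel interface.

#include <stddef.h>
#include <linux/netlink.h>
#include <sys/mman.h>
#include <sys/socket.h>

#ifndef SOL_NETLINK
#define SOL_NETLINK 270
#endif

/* Illustrative ring geometry: 16 kB blocks, 256 blocks -> 4 MB per ring,
 * i.e. 8 MB per socket for RX + TX.  Not necessarily the patch's defaults. */
#define NM_FRAME_SIZE  2048
#define NM_BLOCK_SIZE  (8 * NM_FRAME_SIZE)   /* 16 kB, multiple of page size */
#define NM_BLOCK_NR    256

static int max_mmap_sockets = 16;   /* the proposed configurable maximum */
static int n_mmap_sockets;

/* Try to switch 'fd' to mmaped ring i/o.  Returns 0 on success; -1 means
 * the caller should keep using plain sendmsg()/recvmsg() on this socket. */
static int netlink_try_mmap(int fd, void **rings, size_t *rings_len)
{
    struct nl_mmap_req req = {
        .nm_block_size = NM_BLOCK_SIZE,
        .nm_block_nr   = NM_BLOCK_NR,
        .nm_frame_size = NM_FRAME_SIZE,
        .nm_frame_nr   = NM_BLOCK_NR * (NM_BLOCK_SIZE / NM_FRAME_SIZE),
    };
    size_t ring_len = (size_t)req.nm_block_size * req.nm_block_nr;
    void *p;

    if (n_mmap_sockets >= max_mmap_sockets)
        return -1;                          /* over the cap: stay in copy mode */

    if (setsockopt(fd, SOL_NETLINK, NETLINK_RX_RING, &req, sizeof req) < 0 ||
        setsockopt(fd, SOL_NETLINK, NETLINK_TX_RING, &req, sizeof req) < 0)
        return -1;                          /* kernel lacks mmap support */

    p = mmap(NULL, 2 * ring_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return -1;

    n_mmap_sockets++;
    *rings = p;
    *rings_len = 2 * ring_len;
    return 0;
}

With a cap of 16 and the geometry above, the worst case drops from 8 GB to
128 MB of rings, with the remaining ports served by copy-mode sockets.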