Message-ID: <529FA334.4050202@redhat.com>
Date:	Wed, 04 Dec 2013 22:48:36 +0100
From:	Thomas Graf <tgraf@...hat.com>
To:	Ben Pfaff <blp@...ira.com>
CC:	jesse@...ira.com, dev@...nvswitch.org, netdev@...r.kernel.org,
	dborkman@...hat.com, ffusco@...hat.com, fleitner@...hat.com,
	xiyou.wangcong@...il.com
Subject: Re: [PATCH openvswitch v3] netlink: Implement & enable memory mapped
 netlink i/o

On 12/04/2013 07:08 PM, Ben Pfaff wrote:
> On Wed, Dec 04, 2013 at 06:20:53PM +0100, Thomas Graf wrote:
>> How about we limit the number of mmaped sockets to a configurable
>> maximum that defaults to 16 or 32?
>
> Maybe you mean that we should only mmap some of the sockets that we
> create.  If so, this approach is reasonable,

Yes, that's what I meant.
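A minimal sketch of the cap idea (all names and the counter are hypothetical, not actual OVS code): a global counter gates whether a newly created Netlink socket gets an mmaped ring, using the suggested default of 16.

```c
#include <stdbool.h>

/* Hypothetical cap on mmaped sockets; the thread suggests a
 * configurable default of 16 or 32. */
#define MAX_MMAPED_SOCKS 16

static int n_mmaped_socks;

/* Returns true if the next socket to be created should get an
 * mmaped rx/tx ring; once the cap is reached, further sockets
 * fall back to copy-based i/o. */
static bool should_mmap_next_sock(void)
{
    if (n_mmaped_socks >= MAX_MMAPED_SOCKS)
        return false;
    n_mmaped_socks++;
    return true;
}
```

Making the cap configurable would just mean replacing the #define with a knob read at datapath setup.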

> if one can come up with a
> good heuristic to decide which sockets should be mmaped.  One place
> one could start would be to mmap the sockets that correspond to
> physical ports.

That sounds reasonable; e.g. I would assume that ports connected to tap
devices produce only a limited number of upcalls anyway.

We can also consider enabling/disabling mmaped rings on demand based
on upcall statistics.
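One way the on-demand idea could look (purely illustrative, not real OVS code): count upcalls per port over a sampling interval and flip the ring on once a port crosses a high-water mark, off once it goes quiet, with hysteresis to avoid flapping.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative thresholds, in upcalls per sampling interval.
 * The gap between RING_ON and RING_OFF is hysteresis, so a port
 * hovering near one threshold does not toggle every interval. */
#define RING_ON  1000
#define RING_OFF 100

struct port_stats {
    uint64_t upcalls_this_interval;
    bool ring_enabled;
};

/* Called once per sampling interval; resets the counter and
 * returns the new ring state for the port. */
static bool update_ring_state(struct port_stats *ps)
{
    if (!ps->ring_enabled && ps->upcalls_this_interval > RING_ON)
        ps->ring_enabled = true;
    else if (ps->ring_enabled && ps->upcalls_this_interval < RING_OFF)
        ps->ring_enabled = false;
    ps->upcalls_this_interval = 0;
    return ps->ring_enabled;
}
```

The actual enable/disable would of course involve setting up or tearing down the ring mapping, which is the expensive part this policy would have to amortize.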

> Maybe you mean that we should only create 16 or 32 Netlink sockets,
> and divide the datapath ports among those sockets.  OVS once used this
> approach.  We stopped using it because it has problems with fairness:
> if two ports are assigned to one socket, and one of those ports has a
> huge volume of new flows (or otherwise sends a lot of packets to
> userspace), then it can drown out the occasional packet from the other
> port.  We keep talking about new, more flexible approaches to
> achieving fairness, though, and maybe some of those approaches would
> allow us to reduce the number of sockets we need, which would make
> mmaping all of them feasible.
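For reference, the old divide-ports-among-sockets scheme Ben describes amounts to a simple many-to-one mapping, e.g. (hypothetical sketch, not the actual OVS code):

```c
#include <stdint.h>

#define N_SOCKETS 32  /* size of the shared socket pool */

/* Map a datapath port to one of a small pool of Netlink sockets.
 * Any two ports that land on the same slot share a socket, so a
 * flood of upcalls from one of them can drown out the occasional
 * packet from the other -- the fairness problem described above. */
static uint32_t sock_for_port(uint32_t port_no)
{
    return port_no % N_SOCKETS;
}
```

With 32 sockets, ports 1 and 33 collide, which is exactly the sharing that caused the fairness problem.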

I can see the fairness issue. A socket per port will result in a large
number of open file descriptors, though. I doubt this will scale much
beyond 16K ports, correct?
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
