Message-ID: <4b22a3bc-9dae-3f49-6748-ec45deb09a01@gmail.com>
Date: Wed, 20 May 2020 10:54:21 -0600
From: David Ahern <dsahern@...il.com>
To: Christian Brauner <christian.brauner@...ntu.com>,
"David S. Miller" <davem@...emloft.net>
Cc: Alexey Kuznetsov <kuznet@....inr.ac.ru>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next] ipv6/route: inherit max_sizes from current netns
On 5/20/20 8:58 AM, Christian Brauner wrote:
> During NorthSec (cf. [1]) a very large number of unprivileged
> containers and nested containers are run to provide a safe environment
> for the various teams during the competition. Every year a range of
> feature requests and bug reports comes out of this, and this year is no
> different.
>
> One of the containers was running a simple VPN server. About 1.5k users
> were connected to this VPN over ipv6, and the container was set up with
> about 100 custom routing tables when it hit the max_sizes routing
> limit. After that, no new connections could be established and pinging
> no longer worked; you get the idea.
>
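
For context, the limit being hit here is the per-netns ipv6 route
accounting (net.ipv6.route.max_size, with garbage collection starting at
net.ipv6.route.gc_thresh). A quick way to see the values that apply
inside the container is to read them from /proc; the snippet below is
only an illustrative sketch using the standard sysctl paths, run from
within the container's network namespace:

/* Illustrative sketch only: print the per-netns ipv6 route limits.
 * The paths are the standard sysctl locations for these knobs.
 */
#include <stdio.h>

static void print_limit(const char *path)
{
    char buf[64];
    FILE *f = fopen(path, "r");

    if (!f) {
        perror(path);
        return;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", path, buf);
    fclose(f);
}

int main(void)
{
    print_limit("/proc/sys/net/ipv6/route/max_size");
    print_limit("/proc/sys/net/ipv6/route/gc_thresh");
    return 0;
}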
This should have been addressed by:
commit d8882935fcae28bceb5f6f56f09cded8d36d85e6
Author: Eric Dumazet <edumazet@...gle.com>
Date:   Fri May 8 07:34:14 2020 -0700

    ipv6: use DST_NOCOUNT in ip6_rt_pcpu_alloc()

    We currently have to adjust ipv6 route gc_thresh/max_size depending
    on number of cpus on a server, this makes very little sense.
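
The effect of DST_NOCOUNT, as I understand it, is that the per-cpu route
copies are no longer added to the dst entry counter, so they cannot push
a namespace toward gc_thresh/max_size. Roughly, as a userspace model
(this is not the kernel code; my_dst_ops, my_dst_alloc and
MY_DST_NOCOUNT are made-up names for illustration):

/* Minimal userspace model of the counting behaviour, NOT the kernel code:
 * entries allocated with a NOCOUNT-style flag are never added to the
 * per-ops entry counter, so they cannot exhaust max_size.
 */
#include <stdio.h>
#include <stdlib.h>

#define MY_DST_NOCOUNT 0x1

struct my_dst_ops {
    long entries;    /* what the gc path compares against max_size */
    long gc_thresh;  /* kernel would start garbage collection here (omitted) */
    long max_size;
};

static void *my_dst_alloc(struct my_dst_ops *ops, unsigned int flags)
{
    if (!(flags & MY_DST_NOCOUNT) && ops->entries >= ops->max_size)
        return NULL;    /* the kind of failure the VPN container hit */

    if (!(flags & MY_DST_NOCOUNT))
        ops->entries++;

    return malloc(64);  /* stand-in for the real dst entry */
}

int main(void)
{
    struct my_dst_ops ops = { .entries = 0, .gc_thresh = 1024, .max_size = 4096 };

    /* A normal route counts ... */
    my_dst_alloc(&ops, 0);
    /* ... a per-cpu copy allocated with the NOCOUNT flag does not. */
    my_dst_alloc(&ops, MY_DST_NOCOUNT);

    printf("counted entries: %ld\n", ops.entries);  /* prints 1 */
    return 0;
}

With that change only the routes themselves are counted rather than one
entry per cpu per route, which matches the commit message's point that
the limit should not have to scale with the number of cpus.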
Did your tests include this patch?