Message-ID: <20120623205546.GA15964@ms2.inr.ac.ru>
Date: Sun, 24 Jun 2012 00:55:46 +0400
From: Alexey Kuznetsov <kuznet@....inr.ac.ru>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: David Miller <davem@...emloft.net>, johunt@...mai.com,
kaber@...sh.net, dbavatar@...il.com, netdev@...r.kernel.org,
yoshfuji@...ux-ipv6.org, jmorris@...ei.org, pekkas@...core.fi,
linux-kernel@...r.kernel.org, Ben Greear <greearb@...delatech.com>
Subject: Re: Bug in net/ipv6/ip6_fib.c:fib6_dump_table()
On Sat, Jun 23, 2012 at 07:37:31AM +0200, Eric Dumazet wrote:
> All other /proc/net files don't have a such sophisticated walkers aware
> mechanism
I can explain why.
The IPv6 routing table has a fundamental management drawback: core policy routes are mixed
with the dynamic cache and addrconf routes in a single structure.
(BTW, this is one of the reasons why I did not want to integrate the routing cache into the FIB for IPv4.)
Do you see the problem? F.e. when you do iptables-save, you do not expect
that it can occasionally miss some rules (unless you are modifying the table in parallel, of course).
The same applies here. When you dump the routing table, you are allowed to miss some cached routes,
but if there is any chance of missing even one important route just because
the unimportant dynamic part is always under change, it is fatal.
There are a lot of ways to solve the problem, all of them with some flaws.
F.e. I can remember:
* an atomic dump, like the BSD sysctl;
* keeping administrative routes in a separate list, which can be walked using skip/count;
etc.
I chose this walker approach because it looked close to optimal and because
it was an exciting little task for the brain. :-)
> (easily DOSable by the way, if some guy opens 10.000 handles
> and suspend in the middle the dumps).
This is true. The easiest way to fix this is simply to limit the number of
concurrent readers, putting the rest on hold.
Alexey
--