Message-ID: <20190903125547.GH18865@dhcp-12-139.nay.redhat.com>
Date: Tue, 3 Sep 2019 20:55:47 +0800
From: Hangbin Liu <liuhangbin@...il.com>
To: Nikolay Aleksandrov <nikolay@...ulusnetworks.com>
Cc: netdev@...r.kernel.org, Phil Karn <karn@...q.net>,
Sukumar Gopalakrishnan <sukumarg1973@...il.com>,
"David S . Miller" <davem@...emloft.net>,
Alexey Kuznetsov <kuznet@....inr.ac.ru>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>
Subject: Re: [PATCH net] ipmr: remove cache_resolve_queue_len
Hi Nikolay,
Thanks for the feedback, see comments below.
On Tue, Sep 03, 2019 at 12:15:34PM +0300, Nikolay Aleksandrov wrote:
> On 03/09/2019 11:43, Hangbin Liu wrote:
> > This is a re-post of a previous patch written by David Miller[1].
> >
> > Phil Karn reported[2] that on busy networks with lots of unresolved
> > multicast routing entries, the creation of new multicast group routes
> > can be extremely slow and unreliable.
> >
> > The reason is that we hard-code the limit on multicast route entries
> > with unresolved source addresses (cache_resolve_queue_len) to 10. If
> > some multicast routes never resolve and the number of unresolved
> > source addresses keeps growing, no new multicast route cache entries
> > can be created.
> >
> > To resolve this issue, we can either add a sysctl entry to make
> > cache_resolve_queue_len configurable, or just remove
> > cache_resolve_queue_len directly, as we already have the socket
> > receive queue limits of the mrouted socket, as pointed out by David.
> >
> > From my side, I'd prefer to remove cache_resolve_queue_len instead
> > of creating two more sysctl entries (IPv4 and IPv6 versions).
> >
> > [1] https://lkml.org/lkml/2018/7/22/11
> > [2] https://lkml.org/lkml/2018/7/21/343
> >
> > Reported-by: Phil Karn <karn@...q.net>
> > Signed-off-by: Hangbin Liu <liuhangbin@...il.com>
> > ---
> > include/linux/mroute_base.h | 2 --
> > net/ipv4/ipmr.c | 27 ++++++++++++++++++---------
> > net/ipv6/ip6mr.c | 10 +++-------
> > 3 files changed, 21 insertions(+), 18 deletions(-)
> >
>
> Hi,
> IMO this is definitely net-next material. A few more comments below.
I thought this was a bug fix, but it looks more suitable for net-next, as you said.
>
> Note that hosts will automatically have this limit lifted to about 270
> entries with current defaults; some might be surprised if they have
> higher limits set and could be left with queues allowing for thousands
> of entries in a linked list.
I think this is just a cache list and the entries should expire soon.
Cache creation would also fail if there is no buffer left.
But if you like, I can write a patch with a sysctl parameter.
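
A minimal sketch of what such a knob could look like, assuming we keep
today's default of 10 and add a per-netns field. The names
sysctl_mr_max_unres_queue, mr_max_unres_queue and ipmr_unres_table are
made up for illustration and are not part of the posted patch:

	/* Hypothetical per-netns sysctl for the unresolved-queue limit;
	 * the field name and procname are illustrative only.
	 */
	static struct ctl_table ipmr_unres_table[] = {
		{
			.procname	= "mr_max_unres_queue",
			.data		= &init_net.ipv4.sysctl_mr_max_unres_queue,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec,
		},
		{ }
	};

	static int __net_init ipmr_unres_sysctl_init(struct net *net)
	{
		net->ipv4.sysctl_mr_max_unres_queue = 10; /* current hard-coded value */

		/* A non-init netns would need its own copy of the table with
		 * .data repointed; omitted here for brevity.
		 */
		if (!register_net_sysctl(net, "net/ipv4", ipmr_unres_table))
			return -ENOMEM;
		return 0;
	}
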
>
> > +static int queue_count(struct mr_table *mrt)
> > +{
> > +	struct list_head *pos;
> > +	int count = 0;
> > +
> > +	list_for_each(pos, &mrt->mfc_unres_queue)
> > +		count++;
> > +	return count;
> > +}
>
> I don't think we hold the mfc_unres_lock here while walking
> the unresolved list below in ipmr_fill_table().
Ah, yes, I will fix this.
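
For reference, one way the fix could look: an untested sketch that
simply takes mfc_unres_lock (the existing global lock in
net/ipv4/ipmr.c) around the walk, keeping the queue_count() helper from
the hunk above. Whether the lock is better taken here or at the
ipmr_fill_table() call site is something for the actual respin:

	static int queue_count(struct mr_table *mrt)
	{
		struct list_head *pos;
		int count = 0;

		/* Hold mfc_unres_lock so the unresolved list cannot change
		 * while it is being walked for ipmr_fill_table().
		 */
		spin_lock_bh(&mfc_unres_lock);
		list_for_each(pos, &mrt->mfc_unres_queue)
			count++;
		spin_unlock_bh(&mfc_unres_lock);

		return count;
	}
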
Thanks
Hangbin