Message-ID: <CAHapkUhEVGFRH=-R+obSUMh6rSVxZmEC9GQSnvUkvuj6dwVxjA@mail.gmail.com>
Date: Sat, 27 Jul 2019 13:03:31 -0400
From: Stephen Suryaputra <ssuryaextr@...il.com>
To: Nikolay Aleksandrov <nikolay@...ulusnetworks.com>
Cc: Brodie Greenfield <brodie.greenfield@...iedtelesis.co.nz>,
David Miller <davem@...emloft.net>,
Stephen Hemminger <stephen@...workplumber.org>,
kuznet@....inr.ac.ru, yoshfuji@...ux-ipv6.org,
netdev <netdev@...r.kernel.org>, linux-kernel@...r.kernel.org,
chris.packham@...iedtelesis.co.nz,
luuk.paulussen@...iedtelesis.co.nz
Subject: Re: [PATCH 1/2] ipmr: Make cache queue length configurable
On Fri, Jul 26, 2019 at 7:18 AM Nikolay Aleksandrov
<nikolay@...ulusnetworks.com> wrote:
> > You've said it yourself - it has linear traversal time, but doesn't this patch allow any netns on the
> > system to increase its limit to any value, thus possibly affecting others?
> > Though the socket limit will kick in at some point. I think that's where David
> > was going with his suggestion back in 2018:
> > https://www.spinics.net/lists/netdev/msg514543.html
> >
> > If we add this sysctl now, we'll be stuck with it. I'd prefer David's suggestion
> > so we can rely only on the receive queue limit, which is already configurable.
> > We still need to be careful with the defaults though: the NOCACHE entry is 128 bytes,
> > and with the skb overhead, my setup currently ends up at a default limit of about 277 entries.
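
(For reference, if that ~277 comes from charging the queued entries against the default
socket receive buffer, and assuming the common x86_64 default net.core.rmem_default of
212992 bytes, the math works out to 212992 / 277, roughly 769 bytes charged per queued
entry, i.e. the 128-byte NOCACHE entry plus about 640 bytes of skb truesize.)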
>
> I mean that people might be surprised if that limit increased by default; that's the
> only problem I'm not sure how to handle. Maybe we need some hard limit anyway.
> Have you done any tests on what value works for your setup?
FYI: for ours, it is 2048.
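
If the receive-buffer approach goes in, then by the same back-of-the-envelope math 2048
entries would need the mroute socket's receive buffer to be around 2048 * 769 bytes,
roughly 1.5 MB, which is already tunable today via net.core.rmem_max plus SO_RCVBUF.

A very rough sketch of the direction David suggested might look something like the
following (cache_resolve_queue_bytes is a hypothetical per-table counter of bytes
already charged, not a field that exists today):

/*
 * Hedged sketch only, not the actual patch: charge queued unresolved
 * entries against the mroute socket's receive buffer instead of a
 * fixed cache_resolve_queue_len limit.
 */
static bool ipmr_unres_queue_full(struct mr_table *mrt, struct sk_buff *skb)
{
	struct sock *sk = rcu_dereference(mrt->mroute_sk);

	if (!sk)
		return true;	/* no mroute socket, nowhere to report to */

	/* entry size + skb truesize vs. the admin-tunable receive buffer */
	return atomic_read(&mrt->cache_resolve_queue_bytes) +
	       sizeof(struct mfc_cache) + skb->truesize >
	       READ_ONCE(sk->sk_rcvbuf);
}

That would keep the limit on the already-configurable socket knobs instead of adding a
new sysctl we would have to carry forever.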