Message-ID: <1494800422.2803.4.camel@sipsolutions.net>
Date: Mon, 15 May 2017 00:20:22 +0200
From: Johannes Berg <johannes@...solutions.net>
To: David Ahern <dsahern@...il.com>, Jan Moskyto Matejka <mq@....cz>
Cc: David Miller <davem@...emloft.net>, mq@....cz,
netdev@...r.kernel.org, roopa <roopa@...ulusnetworks.com>
Subject: Re: [PATCH] net: ipv6: Truncate single route when it doesn't fit
into dump buffer.
On Sun, 2017-05-14 at 16:14 -0600, David Ahern wrote:
> On 5/14/17 3:00 PM, Johannes Berg wrote:
> > On Sat, 2017-05-13 at 19:29 +0200, Jan Moskyto Matejka wrote:
> > >
> > > > When adding a route to the skb, track whether it contains at
> > > > least 1 route. If not, it means the next route in the dump is
> > > > larger than the given buffer. Detect this condition and error
> > > > out of the dump - returning an error to the user (-ENOSPC? or
> > > > EMSGSIZE?)
> > >
> > > EMSGSIZE seems OK to me.
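In code, the idea quoted above boils down to something like this in the
dump callback (only a sketch with made-up names, not the actual patch;
the real fib6 dump path is more involved):

static int my_dump_routes(struct sk_buff *skb, struct netlink_callback *cb)
{
	int idx = 0, s_idx = cb->args[0];
	struct my_route *rt;

	list_for_each_entry(rt, &my_routes, list) {
		if (idx >= s_idx && my_fill_route(skb, rt) < 0) {
			/* Not even one route fit into this skb: the
			 * next route is bigger than the whole dump
			 * buffer, so abort with an error instead of
			 * returning an empty message forever. */
			if (skb->len == 0)
				return -EMSGSIZE;
			break;	/* resume from this route next time */
		}
		idx++;
	}
	cb->args[0] = idx;
	return skb->len;
}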
> >
> > If we return an error here, and consequently allow for userspace
> > changes to pick this up, perhaps we could also consider allowing to
> > split the dump between nexthops, so that arbitrary such things
> > can be returned.
>
> Returning an error should not impact existing userspace; it should
> already be checking for an error response in the message.
Right. I was sort of thinking that you'd catch the error and try with a
bigger buffer or so.
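
On the userspace side that retry could look roughly like this (a
sketch, not from this thread; it assumes the aborted dump shows up as
an NLMSG_ERROR carrying -EMSGSIZE, and that a bigger receive buffer
lets the kernel build bigger dump messages):

#include <errno.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

static int dump_routes(int fd, size_t bufsz)
{
	struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
	struct {
		struct nlmsghdr nlh;
		struct rtmsg rtm;
	} req = {
		.nlh.nlmsg_len   = NLMSG_LENGTH(sizeof(struct rtmsg)),
		.nlh.nlmsg_type  = RTM_GETROUTE,
		.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
		.nlh.nlmsg_seq   = 1,
		.rtm.rtm_family  = AF_INET6,
	};
	char *buf = malloc(bufsz);
	int err = 0;

	if (!buf)
		return -ENOMEM;
	if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
		   (struct sockaddr *)&kernel, sizeof(kernel)) < 0) {
		err = -errno;
		goto out;
	}

	for (;;) {
		ssize_t n = recv(fd, buf, bufsz, 0);
		struct nlmsghdr *nlh;
		int rem;

		if (n <= 0) {
			err = n < 0 ? -errno : -EIO;
			goto out;
		}
		rem = n;
		for (nlh = (struct nlmsghdr *)buf; NLMSG_OK(nlh, rem);
		     nlh = NLMSG_NEXT(nlh, rem)) {
			if (nlh->nlmsg_type == NLMSG_DONE)
				goto out;
			if (nlh->nlmsg_type == NLMSG_ERROR) {
				struct nlmsgerr *e = NLMSG_DATA(nlh);

				err = e->error;	/* e.g. -EMSGSIZE */
				goto out;
			}
			/* RTM_NEWROUTE: parse the route here */
		}
	}
out:
	free(buf);
	return err;
}

int main(void)
{
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	size_t bufsz = 8192;
	int err;

	if (fd < 0)
		return 1;
	/* grow the buffer and restart the whole dump while it fails */
	while ((err = dump_routes(fd, bufsz)) == -EMSGSIZE &&
	       bufsz < (1u << 20))
		bufsz *= 2;
	close(fd);
	return err ? 1 : 0;
}

Restarting the whole dump on EMSGSIZE keeps userspace simple; resuming
in the middle of a route would need the split-by-nexthop support
mentioned above.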
> Splitting the dump between nexthops across multiple messages will
> have repercussions on userspace. I think (at least for rtnetlink and
> links, addresses, routes) userspace needs to provide a buffer large
> enough for a complete object. If we limit the number of nexthops to
> something reasonable (e.g., 256), then ipv4 for example will be ~3kB
> + lwt encap size, so we are talking on the order of an 8kB, maybe
> 16kB, buffer. That is a reasonable request for the API.
Yeah, that seems reasonable.
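
As a quick back-of-the-envelope check on those numbers (purely an
assumption of one struct rtnexthop plus a single RTA_GATEWAY attribute
per IPv4 nexthop, before any lwt encap):

#include <stdio.h>
#include <linux/rtnetlink.h>

int main(void)
{
	/* assumption: rtnexthop + RTA_GATEWAY per IPv4 nexthop */
	size_t per_nh = sizeof(struct rtnexthop) + RTA_LENGTH(4);
	size_t n = 256;

	printf("%zu nexthops -> ~%zu bytes of RTA_MULTIPATH payload\n",
	       n, n * per_nh);
	return 0;
}

That comes out around 4kB for 256 nexthops before encap and the other
per-route attributes, so an 8-16kB buffer leaves decent headroom.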
We had a bigger problem here, in that the # of channels supported, and
the amount of data per channel, were increasing all the time.
johannes