Message-ID: <CACP96tQMEQcTPZaCT7mehis-H5Sw_-74F6fSvZxB7Lr3Hqo0kQ@mail.gmail.com>
Date: Fri, 23 May 2014 08:53:18 -0400
From: sowmini varadhan <sowmini05@...il.com>
To: Niels Möller <nisse@...thpole.se>
Cc: netdev <netdev@...r.kernel.org>, Jonas Bonn <jonas@...thpole.se>
Subject: Re: What's the right way to use a *large* number of source addresses?
On Fri, May 23, 2014 at 5:38 AM, Niels Möller <nisse@...thpole.se> wrote:
> 1. Simply assign all addresses to be used to the interface, fixing any
> remaining performance problems.
>
> I've done a simple benchmark with a script assigning n addresses
> using "ip address add", and this seems to have O(n^2) complexity.
> E.g., assigning n=25500 addresses took 26 s, and doubling n, assigning
> 51000 addresses, took 122 s, 4.6 times longer. That isn't
> necessarily a problem once all the addresses are assigned, but it
> sounds a bit like there's a linear data structure in there, not
> intended for a large number of addresses.
I think the issue here is the netlink API: if you try to do the same
thing with ifconfig (instead of /sbin/ip), you'll find it is much
faster, which seems paradoxical because ip(8) and netlink sockets are
the recommended configuration paths. When I ran into this, the
difference turned out to be in the implementation: the ioctl path was
able to use ifr_name efficiently to identify the target interface,
unlike the netlink config path.
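
(For reference, the ioctl path that ifconfig takes looks roughly like
the sketch below: one SIOCSIFADDR per address, with the device
identified by ifr_name and extra addresses attached as alias labels.
The "eth0" name, the alias scheme and the 10.0.0.x range are just
placeholders, error handling is minimal, and it needs CAP_NET_ADMIN.)

/* Sketch: assign several IPv4 addresses the way ifconfig does,
 * via SIOCSIFADDR on alias labels "eth0:0", "eth0:1", ... */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	for (int i = 0; i < 4; i++) {	/* small count, for illustration */
		struct ifreq ifr;
		struct sockaddr_in *sin = (struct sockaddr_in *)&ifr.ifr_addr;

		memset(&ifr, 0, sizeof(ifr));
		/* The kernel locates the device by ifr_name;
		 * additional addresses get alias labels ":<n>". */
		snprintf(ifr.ifr_name, IFNAMSIZ, "eth0:%d", i);

		sin->sin_family = AF_INET;
		sin->sin_addr.s_addr = htonl(0x0a000001u + i); /* 10.0.0.1 + i */

		if (ioctl(fd, SIOCSIFADDR, &ifr) < 0)
			perror("SIOCSIFADDR");
	}

	close(fd);
	return 0;
}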
But to solve your specific problem: since you are implementing
a packet generator, wouldn't it be easier to craft the packet and feed
it over RAW or PF_PACKET sockets? That gives you more flexibility
(if you need it down the road) to set all the fields, not just the
IP source address. It might mean a bit more work in user space,
to compute checksums etc., though.
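
Something roughly along these lines (just a sketch: the addresses,
ports and payload are placeholders, it needs CAP_NET_RAW, and per
raw(7) the kernel will fill in the IP total length and checksum on an
IP_HDRINCL socket even if you don't):

/* Sketch: send a UDP-in-IPv4 packet over a raw socket with a
 * caller-chosen source address.  UDP checksum is left at 0, which
 * is legal for UDP over IPv4. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>
#include <sys/socket.h>

static unsigned short ip_csum(const void *buf, int len)
{
	const unsigned short *p = buf;
	unsigned long sum = 0;

	while (len > 1) {
		sum += *p++;
		len -= 2;
	}
	if (len)
		sum += *(const unsigned char *)p;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (unsigned short)~sum;
}

int main(void)
{
	unsigned char pkt[sizeof(struct iphdr) + sizeof(struct udphdr) + 8] = { 0 };
	struct iphdr *ip = (struct iphdr *)pkt;
	struct udphdr *udp = (struct udphdr *)(pkt + sizeof(*ip));
	struct sockaddr_in dst = { .sin_family = AF_INET };
	int fd, on = 1;

	fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
	if (fd < 0) {
		perror("socket (needs CAP_NET_RAW)");
		return 1;
	}
	/* IPPROTO_RAW implies IP_HDRINCL, but set it explicitly anyway. */
	setsockopt(fd, IPPROTO_IP, IP_HDRINCL, &on, sizeof(on));

	ip->version  = 4;
	ip->ihl      = 5;
	ip->ttl      = 64;
	ip->protocol = IPPROTO_UDP;
	ip->tot_len  = htons(sizeof(pkt));
	ip->saddr    = inet_addr("192.0.2.1");		/* per-packet source */
	ip->daddr    = inet_addr("198.51.100.1");
	ip->check    = ip_csum(ip, sizeof(*ip));

	udp->source = htons(12345);
	udp->dest   = htons(5001);
	udp->len    = htons(sizeof(*udp) + 8);
	memcpy(pkt + sizeof(*ip) + sizeof(*udp), "testdata", 8);

	dst.sin_addr.s_addr = ip->daddr;
	if (sendto(fd, pkt, sizeof(pkt), 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0)
		perror("sendto");

	close(fd);
	return 0;
}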
--Sowmini