Message-ID: <CAEP_g=_Kb6_iVwxwBvGzg+RkcE7JRuO2hGxc4pcSELVfn7qWFw@mail.gmail.com>
Date: Sat, 1 Jun 2013 15:39:12 +0900
From: Jesse Gross <jesse@...ira.com>
To: David Stevens <dlstevens@...ibm.com>
Cc: netdev <netdev@...r.kernel.org>,
Stephen Hemminger <stephen@...workplumber.org>
Subject: Re: RFC - VXLAN port range facility

On Fri, May 31, 2013 at 9:26 PM, David Stevens <dlstevens@...ibm.com> wrote:
> Jesse Gross <jesse@...ira.com> wrote on 05/31/2013 02:09:34 AM:
>
>> On Fri, May 31, 2013 at 3:00 AM, David Stevens <dlstevens@...ibm.com> wrote:
>> > But I don't think there's particular advantage in splitting it up 30,000
>> > ways when 10 ways would be both practical, for binding, and spread
>> > traffic to 10 flows potentially.
>>
>> Most people that run large data centers think that 16 bits of entropy
>> is barely sufficient. The issue is not CPUs or link aggregation but
>> Clos fabrics built using ECMP.
>>
>
> And most people running embedded systems wouldn't want to bind to
> 30,000 sockets by default, which is the proper way for VXLAN to
> interact with UDP.
>
> A casual user of VXLAN between a couple of small machines on ordinary
> Ethernet generally won't require multiple ports at all.
>
> I think the default case should lean towards the low end, and the
> mechanisms are there to tune the high end.

This line of argument doesn't make a lot of sense, because scalability
and ECMP are the two reasons VXLAN was introduced in the first place.
Without the entropy in the source port, it's basically the same as GRE.
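
(To make the mechanics concrete for anyone skimming the thread: the
spreading in question is just a flow hash folded into whatever UDP
source-port range is configured. Below is a rough standalone sketch of
that mapping, not the actual vxlan driver code; the function name and
the plain uint32_t hash argument are only for illustration.

#include <stdint.h>

/*
 * Illustrative sketch: fold a 32-bit flow hash into a configured UDP
 * source-port range [port_min, port_max].  Packets of the same flow
 * always get the same port; different flows spread across the range,
 * and therefore across ECMP paths in the fabric.
 */
static uint16_t pick_src_port(uint32_t flow_hash,
                              uint16_t port_min, uint16_t port_max)
{
        uint32_t range = (uint32_t)(port_max - port_min) + 1;

        /* Take the high 32 bits of hash * range to avoid modulo bias. */
        return (uint16_t)(port_min + (((uint64_t)flow_hash * range) >> 32));
}

With a 10-port range you get at most 10 distinct 5-tuples per pair of
tunnel endpoints, so the size of the range directly bounds how many
ECMP paths the fabric can actually use between those endpoints.)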

Your solution needs to work reasonably well across the entire range of
use cases, and based on your arguments it clearly doesn't. That doesn't
mean those use cases don't exist or aren't important; it means you need
to find another solution.