Message-ID: <d0b3f358-4e0a-42f3-84f0-cbcf19066d49@linux.dev>
Date: Sun, 14 Dec 2025 06:22:18 +0900
From: Vadim Fedorenko <vadim.fedorenko@...ux.dev>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>,
"David S. Miller" <davem@...emloft.net>, David Ahern <dsahern@...nel.org>,
Eric Dumazet <edumazet@...gle.com>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Willem de Bruijn <willemb@...gle.com>,
Jakub Kicinski <kuba@...nel.org>
Cc: Shuah Khan <shuah@...nel.org>, Ido Schimmel <idosch@...dia.com>,
netdev@...r.kernel.org
Subject: Re: [PATCH net 1/2] net: fib: restore ECMP balance from loopback
On 13/12/2025 20:54, Willem de Bruijn wrote:
> Vadim Fedorenko wrote:
>> Preference of nexthop with source address broke ECMP for packets with
>> source address from loopback interface. Original behaviour was to
>> balance over nexthops while now it uses the latest nexthop from the
>> group.
>
> How does the loopback device specifically come into this?
It may be a dummy device as well. The use case is 2 physical
interfaces and 1 service IP address, advertised by a routing
protocol. The socket is bound to the service address, so that
address is used in route selection.
>
>>
>> For the case with 198.51.100.1/32 assigned to lo:
>>
>> before:
>> done | grep veth | awk ' {print $(NF-2)}' | sort | uniq -c:
>> 255 veth3
>>
>> after:
>> done | grep veth | awk ' {print $(NF-2)}' | sort | uniq -c:
>> 122 veth1
>> 133 veth3
>>
>> Fixes: 32607a332cfe ("ipv4: prefer multipath nexthop that matches source address")
>> Signed-off-by: Vadim Fedorenko <vadim.fedorenko@...ux.dev>
>> ---
>> net/ipv4/fib_semantics.c | 21 +++++++++++----------
>> 1 file changed, 11 insertions(+), 10 deletions(-)
>>
>> diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
>> index a5f3c8459758..c54b4ad9c280 100644
>> --- a/net/ipv4/fib_semantics.c
>> +++ b/net/ipv4/fib_semantics.c
>> @@ -2165,9 +2165,9 @@ static bool fib_good_nh(const struct fib_nh *nh)
>> void fib_select_multipath(struct fib_result *res, int hash,
>> const struct flowi4 *fl4)
>> {
>> + bool first = false, found = false;
>> struct fib_info *fi = res->fi;
>> struct net *net = fi->fib_net;
>> - bool found = false;
>> bool use_neigh;
>> __be32 saddr;
>>
>> @@ -2190,23 +2190,24 @@ void fib_select_multipath(struct fib_result *res, int hash,
>> (use_neigh && !fib_good_nh(nexthop_nh)))
>> continue;
>>
>> - if (!found) {
>> + if (saddr && nexthop_nh->nh_saddr == saddr) {
>> res->nh_sel = nhsel;
>> res->nhc = &nexthop_nh->nh_common;
>> - found = !saddr || nexthop_nh->nh_saddr == saddr;
>> + return;
>
> This can return a match that exceeds the upper bound, while better
> matches may exist.
>
> Perhaps what we want is the following:
>
> 1. if there are matches that match saddr, prefer those above others
> - take the first match, as with hash input that results in load
> balancing across flows
>
> 2. else, take any match
> - again, first fit
>
> If no match below fib_nh_upper_bound is found, fall back to the first
> fit above that exceeds nh_upper_bound. Again, prefer first fit of 1 if
> it exists, else first fit of 2.
Oh, I see... in the case where there are 2 different nexthops with the
same saddr, we have to balance across them as well, but with this code
it will stick to only the first such nexthop.
>
> If so then we need up to two concurrent stored options,
> first_match_saddr and first.
That will require a few more assignments per iteration.
> Or alternatively use a score similar to inet listener lookup.
I'll check this option.
> Since a new variable is added, I would rename found with
> first_match_saddr or similar to document the intent.
Ok.