Message-ID: <20250408161756.422830-1-hanhuihui5@huawei.com>
Date: Wed, 9 Apr 2025 00:17:56 +0800
From: hanhuihui <hanhuihui5@...wei.com>
To: <idosch@...sch.org>
CC: <dsahern@...nel.org>, <kuba@...nel.org>, <netdev@...r.kernel.org>
Subject: Re: VRF Routing Rule Matching Issue: oif Rules Not Working After Commit 40867d74c374
On Mon, 7 Apr 2025 11:29:02 +0300, Ido Schimmel wrote:
>On Thu, Apr 03, 2025 at 01:58:46AM +0000, hanhuihui wrote:
>> Dear Kernel Community and Network Maintainers,
>> I am analyzing this issue and would greatly appreciate any replies.
>> After applying commit 40867d74c374 ("net: Add l3mdev index to flow struct and avoid oif reset for port devices"), we noticed an unexpected change in VRF routing rule matching behavior. We are reporting it to confirm whether this is the expected behavior.
>>
>> Problem Description:
>> When interfaces bound to different VRFs share the same IP address, an oif (output interface) routing rule is no longer matched once this commit is applied. As a result, traffic incorrectly matches the lower-priority rule.
>> Here are our configuration steps:
>> ip address add 11.47.3.130/16 dev enp4s0
>> ip address add 11.47.3.130/16 dev enp5s0
>>
>> ip link add name vrf-srv-1 type vrf table 10
>> ip link set dev vrf-srv-1 up
>> ip link set dev enp4s0 master vrf-srv-1
>>
>> ip link add name vrf-srv type vrf table 20
>> ip link set dev vrf-srv up
>> ip link set dev enp5s0 master vrf-srv
>>
>> ip rule add from 11.47.3.130 oif vrf-srv-1 table 10 prio 0
>> ip rule add from 11.47.3.130 iif vrf-srv-1 table 10 prio 0
>> ip rule add from 11.47.3.130 table 20 prio 997
>>
>>
>> In this configuration, when the following command is executed:
>> ip vrf exec vrf-srv-1 ping 11.47.9.250 -I 11.47.3.130
>> Expected behavior: the traffic matches the 'oif vrf-srv-1' rule at prio 0, so table 10 is used.
>> Actual behavior: the traffic skips the oif rule and falls through to the prio 997 rule (table 20), causing the ping to fail.
>>
>> Is this the expected behavior?
>> The commit message mentions "avoid oif reset for port devices". Does this change the oif matching logic in VRF scenarios?
>> If this change is intentional, how should the VRF configuration be adjusted to ensure that oif rules are matched first? Is it necessary to introduce a new mechanism?
>
>Can you try replacing the first two rules with:
>
>ip rule add from 11.47.3.130 l3mdev prio 0
>
>And see if it helps?
>
This does not work in scenarios where the routing table specified in the
oif/iif rule is not the l3mdev's own table.
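For example, a rule along the following lines (table 30 is only a
hypothetical table number, different from vrf-srv-1's own table 10)
cannot be expressed with the l3mdev keyword, since l3mdev always directs
the lookup to the VRF's associated table:

ip rule add from 11.47.3.130 oif vrf-srv-1 table 30 prio 0
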
>It's not exactly equivalent to your two rules, but it says "if source
>address is 11.47.3.130 and flow is associated with a L3 master device,
>then direct the FIB lookup to the table associated with the L3 master
>device"
>
>The commit you referenced added the index of the L3 master device to the
>flow structure, but I don't believe we have an explicit way to match on
>it using FIB rules. It would be useful to add a new keyword (e.g.,
>l3mdevif) and then your rules can become:
>
>ip rule add from 11.47.3.130 l3mdevif vrf-srv-1 table 10 prio 0
>ip rule add from 11.47.3.130 table 20 prio 997
>
>But it requires kernel changes.
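If I understand correctly, the kernel side of such an l3mdevif keyword
would be a new selector matched against the new flowi_l3mdev field,
roughly like the following (purely illustrative sketch; neither the rule
field nor the attribute exists today):

/* hypothetical l3mdevif selector in fib_rule_match() */
if (rule->l3mdev_ifindex && rule->l3mdev_ifindex != fl->flowi_l3mdev)
        goto out;
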
Before the patch, oif/iif rules could be configured for traffic from the
VRF and traffic was forwarded normally. With the patch, traffic from the
VRF has fl->flowi_oif reset in l3mdev_update_flow(). As a result, the
'rule->oifindex != fl->flowi_oif' check in fib_rule_match() always fires
and the oif rule can never match. The patch also carries the comment
"oif set to L3mdev directs lookup to its table; reset to avoid oif match
in fib_lookup", so this looks intentional, which is what confuses me.
Does the change overlook the scenario where an oif/iif rule refers to
the VRF device, is matching oif/iif rules against a VRF device no longer
supported after this patch, or are we simply using oif/iif rules
incorrectly?
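For reference, the relevant code looks roughly like this in current
sources (paraphrased from my reading; details may differ between kernel
versions):

/* net/l3mdev/l3mdev.c, l3mdev_update_flow() */
if (fl->flowi_oif) {
        dev = dev_get_by_index_rcu(net, fl->flowi_oif);
        if (dev) {
                if (!fl->flowi_l3mdev)
                        fl->flowi_l3mdev = l3mdev_master_ifindex_rcu(dev);

                /* oif set to L3mdev directs lookup to its table;
                 * reset to avoid oif match in fib_lookup
                 */
                if (netif_is_l3_master(dev))
                        fl->flowi_oif = 0;
                goto out;
        }
}

/* net/core/fib_rules.c, fib_rule_match() */
if (rule->oifindex && (rule->oifindex != fl->flowi_oif))
        goto out;   /* flowi_oif is now 0 while rule->oifindex is the
                     * VRF ifindex, so this oif rule never matches */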
Any reply would be greatly appreciated.
Thanks!