Message-ID: <CAMDZJNVN8SuumcwOZZsgGDP-_-BX9K4sGC7-sbC3jypstrMXpQ@mail.gmail.com>
Date: Thu, 2 Jan 2020 17:31:56 +0800
From: Tonghao Zhang <xiangxia.m.yue@...il.com>
To: Or Gerlitz <gerlitz.or@...il.com>
Cc: Saeed Mahameed <saeedm@....mellanox.co.il>,
Roi Dayan <roid@...lanox.com>,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: mlx5e question about PF fwd packets to PF
On Thu, Jan 2, 2020 at 3:50 PM Or Gerlitz <gerlitz.or@...il.com> wrote:
>
> On Thu, Jan 2, 2020 at 5:04 AM Tonghao Zhang <xiangxia.m.yue@...il.com> wrote:
>>
>> On Wed, Jan 1, 2020 at 4:40 AM Or Gerlitz <gerlitz.or@...il.com> wrote:
>> > On Tue, Dec 31, 2019 at 10:39 AM Tonghao Zhang <xiangxia.m.yue@...il.com> wrote:
>
>
>>
>> >> In one case, we want to forward the packets from one PF to the other PF in switchdev mode.
>
>
>>
>> > Did you want to say from one uplink to the other uplink? -- this is not supported.
>
>
>>
>> Yes, I tried to install one rule, hoping that one uplink could forward
>> the packets to the other PF's uplink.
>
>
>
> this is not supported
>
>
>>
>> But the rule can be installed successfully, and the rule's counters
>> increase, as shown below:
>
>
>>
>> # tc filter add dev $PF0 protocol all parent ffff: prio 1 handle 1
>> flower action mirred egress redirect dev $PF1
>
>
> you didn't ask for skip_sw; if you install a rule without it and adding it to hw
> fails, the rule still works in the SW data-path
>
>
>>
>> # tc -d -s filter show dev $PF0 ingress
>> filter protocol all pref 1 flower chain 0
>> filter protocol all pref 1 flower chain 0 handle 0x1
>> in_hw
>
>
> this (in_hw) seems to be a bug, we don't support it AFAIK
>
>> action order 1: mirred (Egress Redirect to device enp130s0f1) stolen
>> index 1 ref 1 bind 1 installed 19 sec used 0 sec
>> Action statistics:
>> Sent 3206840 bytes 32723 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>
>
> I think newish (for about a year now or maybe more) kernels and iproute have
> per-data-path (SW/HW) rule traffic counters - these would help you see what is
> going on down there
Hi Or,
Thanks for answering my question.
I added the "skip_sw" option to the tc command and updated tc to the
upstream version; it runs successfully:
# tc filter add dev $PF0 protocol all parent ffff: prio 1 handle 1
flower skip_sw action mirred egress redirect dev $PF1
# tc -d -s filter show dev $PF0 ingress
filter protocol all pref 1 flower chain 0
filter protocol all pref 1 flower chain 0 handle 0x1
skip_sw
in_hw in_hw_count 1
action order 1: mirred (Egress Redirect to device enp130s0f1) stolen
index 1 ref 1 bind 1 installed 42 sec used 0 sec
Action statistics:
Sent 408954 bytes 4173 pkt (dropped 0, overlimits 0 requeues 0)
Sent software 0 bytes 0 pkt
Sent hardware 408954 bytes 4173 pkt
backlog 0b 0p requeues 0
>>
>> The PF1 uplink doesn't send the packets out (as you say, this isn't
>> supported now). If it isn't supported, should we return -EOPNOTSUPP
>> when we install the hairpin rule between
>> the PF uplinks? The current behavior confused me.
>
>
> indeed, but only if you use skip_sw
>
> still the in_hw indication suggests there's a driver bug
>
>
>>
>> > What we do support is the following (I think you do it by now):
>> > PF0.uplink --> esw --> PF0.VFx --> hairpin --> PF1.VFy --> esw --> PF1.uplink
>
>
>>
>> Yes, I have tested it, and it works fine for us.
>
>
> cool, so production can keep using these rules..
>
>
>>
>> > Hence the claim here is that if PF0.uplink --> hairpin --> PF1.uplink
>> > would have been supported
>
>
>>
>> Do we have plans to support that functionality?
>
>
> I don't think so; what is the need? Is something wrong with the N+2 rules I suggested?
N+2 works fine. I did some research on OVS offload with the Mellanox NIC.
I added the uplinks of PF0 and PF1 to OVS, and it can offload the
rule (PF0 to PF1; I reproduced it with the tc commands above) to
hardware, but the NIC can't send the packets out.
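For reference, my OVS setup can be reproduced roughly as follows (the bridge
name br0 is arbitrary, and hw-offload must be enabled before the ports are
added; this is a sketch, not the exact production config):

```shell
# Enable TC-based hardware offload in OVS (requires OVS >= 2.8)
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch   # restart so the setting takes effect

# Add both PF uplinks to one bridge; OVS will then install a PF0 -> PF1
# forwarding rule that mlx5 accepts but cannot actually execute
ovs-vsctl add-br br0
ovs-vsctl add-port br0 $PF0
ovs-vsctl add-port br0 $PF1

# Inspect which datapath flows were offloaded to hardware
ovs-appctl dpctl/dump-flows type=offloaded
```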
>
>>
>> > and the system had N steering rules; with what is currently supported you
>> > need N+2 rules -- N rules + one T2 rule and one T3 rule
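A sketch of the two extra rules in the supported layout
(PF0.uplink --> esw --> PF0.VFx --> hairpin --> PF1.VFy --> esw --> PF1.uplink),
assuming $VF0_REP and $VF1_REP are the representors of the hairpinned VF pair
(names are hypothetical; the N original steering rules are omitted):

```shell
# Extra rule 1 (what Or calls the T2 rule):
# steer PF0 uplink traffic to the first VF's representor
tc filter add dev $PF0 protocol all parent ffff: prio 1 flower skip_sw \
    action mirred egress redirect dev $VF0_REP

# Extra rule 2 (the T3 rule):
# steer traffic arriving from the second VF out of the PF1 uplink
tc filter add dev $VF1_REP protocol all parent ffff: prio 1 flower skip_sw \
    action mirred egress redirect dev $PF1
```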