Message-ID: <da3f4f4e-47a7-25be-fa61-aebeba1d8d0c@alu.unizg.hr>
Date: Sun, 30 Jul 2023 18:48:04 +0200
From: Mirsad Todorovac <mirsad.todorovac@....unizg.hr>
To: Ido Schimmel <idosch@...sch.org>, petrm@...dia.com, razor@...ckwall.org
Cc: Ido Schimmel <idosch@...dia.com>, netdev@...r.kernel.org,
 linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org,
 "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
 Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 Shuah Khan <shuah@...nel.org>
Subject: Re: [PATCH v1 01/11] selftests: forwarding: custom_multipath_hash.sh:
 add cleanup for SIGTERM sent by timeout

On 7/30/23 09:53, Ido Schimmel wrote:
> On Thu, Jul 27, 2023 at 09:26:03PM +0200, Mirsad Todorovac wrote:
>> marvin@...iant:~/linux/kernel/linux_torvalds$ grep "not ok" ../kselftest-6.5-rc3-net-forwarding-16.log
>> not ok 3 selftests: net/forwarding: bridge_mdb.sh # exit=1
> 
> Other than one test case (see below), I believe this should be fixed by
> the patches I just pushed to the existing branch. My earlier fix was
> incomplete which is why it didn't solve the problem.
> 
>> not ok 5 selftests: net/forwarding: bridge_mdb_max.sh # exit=1
> 
> Should be fixed with the patches.

Congratulations! Indeed, it looks a lot better:

marvin@...iant:~/linux/kernel/linux_torvalds$ grep "not ok" ../kselftest-6.5-rc3-net-forwarding-18.log
not ok 3 selftests: net/forwarding: bridge_mdb.sh # exit=1
not ok 11 selftests: net/forwarding: bridge_vlan_mcast.sh # exit=1
not ok 26 selftests: net/forwarding: ip6_forward_instats_vrf.sh # exit=1
not ok 49 selftests: net/forwarding: mirror_gre_changes.sh # exit=1
marvin@...iant:~/linux/kernel/linux_torvalds$ grep -v '^# +' ../kselftest-6.5-rc3-net-forwarding-18.log | grep -A1 -e '\[FAIL\]' | grep -v -e -- | grep -v OK
# TEST: IPv4 (S, G) port group entries configuration tests            [FAIL]
# 	Entry has an unpending group timer after replace
# TEST: IPv6 (S, G) port group entries configuration tests            [FAIL]
# 	Entry has an unpending group timer after replace
# TEST: Vlan mcast_startup_query_interval global option default value   [FAIL]
# 	Wrong default mcast_startup_query_interval global vlan option value
# TEST: Ip6InHdrErrors                                                [FAIL]
# TEST: mirror to gretap: TTL change (skip_hw)                        [FAIL]
# 	Expected to capture 10 packets, got 15.
# TEST: mirror to ip6gretap: TTL change (skip_hw)                     [FAIL]
# 	Expected to capture 10 packets, got 13.
marvin@...iant:~/linux/kernel/linux_torvalds$

>> not ok 11 selftests: net/forwarding: bridge_vlan_mcast.sh # exit=1
> 
> Nik, the relevant failure is this one:
> 
> # TEST: Vlan mcast_startup_query_interval global option default value   [FAIL]
> # 	Wrong default mcast_startup_query_interval global vlan option value
> 
> Any idea why the kernel will report "mcast_startup_query_interval" as
> 3124 instead of 3125?
> 
> # + jq -e '.[].vlans[] | select(.vlan == 10 and                                             .mcast_startup_query_interval == 3125) '
> # + echo -n '[{"ifname":"br0","vlans":[{"vlan":1,"mcast_snooping":1,"mcast_querier":0,"mcast_igmp_version":2,"mcast_mld_version":1,"mcast_last_member_count":2,"mcast_last_member_interval":100,"mcast_startup_query_count":2,"mcast_startup_query_interval":3124,"mcast_membership_interval":26000,"mcast_querier_interval":25500,"mcast_query_interval":12500,"mcast_query_response_interval":1000},{"vlan":10,"vlanEnd":11,"mcast_snooping":1,"mcast_querier":0,"mcast_igmp_version":2,"mcast_mld_version":1,"mcast_last_member_count":2,"mcast_last_member_interval":100,"mcast_startup_query_count":2,"mcast_startup_query_interval":3124,"mcast_membership_interval":26000,"mcast_querier_interval":25500,"mcast_query_interval":12500,"mcast_query_response_interval":1000}]}]'
> # + check_err 4 'Wrong default mcast_startup_query_interval global vlan option value'
> 
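If it helps, my (unverified) guess is a round trip through jiffies: the
option is configured in clock_t units (1/100 s) but, as far as I can tell,
stored internally in jiffies, and on a CONFIG_HZ=250 kernel, for example,
neither integer conversion is exact:

	$ echo $(( 3125 * 250 / 100 ))	# clock_t -> jiffies: 7812.5 truncated to 7812
	7812
	$ echo $(( 7812 * 100 / 250 ))	# jiffies -> clock_t: 3124.8 truncated to 3124
	3124

The HZ value and the exact rounding behaviour are only my assumptions,
though; I have not read the conversion code.
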
>> not ok 26 selftests: net/forwarding: ip6_forward_instats_vrf.sh # exit=1
> 
> Please run this one with +x so that we will get more info.

In fact, I have turned the shell trace on for all of the remaining failing tests.
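(For anyone who wants to reproduce the traces: one simple way to get the
same "+ command" output is to run a script under bash -x from the test
directory and capture everything, e.g.:

	cd tools/testing/selftests/net/forwarding
	bash -x ./bridge_vlan_mcast.sh > bridge_vlan_mcast.sh.out 2>&1

The output file name above is just my choice here.)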

In case you want to investigate further, please find the debug output logs
at the usual place:

https://domac.alu.unizg.hr/~mtodorov/linux/selftests/net-forwarding/kselftest-6.5-rc3-net-forwarding-18.log.xz

https://domac.alu.unizg.hr/~mtodorov/linux/selftests/net-forwarding/bridge_mdb.sh.out.xz
https://domac.alu.unizg.hr/~mtodorov/linux/selftests/net-forwarding/bridge_vlan_mcast.sh.out.xz
https://domac.alu.unizg.hr/~mtodorov/linux/selftests/net-forwarding/ip6_forward_instats_vrf.sh.out.xz
https://domac.alu.unizg.hr/~mtodorov/linux/selftests/net-forwarding/mirror_gre_changes.sh.out.xz

I hope this helps; you have already drastically reduced the number of [FAIL] results.

For what it's worth, I think it's a great job!

Kind regards,
Mirsad

>> not ok 49 selftests: net/forwarding: mirror_gre_changes.sh # exit=1
> 
> Petr, please take a look. Probably need to make the filters more
> specific. The failure is:
> 
> # TEST: mirror to gretap: TTL change (skip_hw)                        [FAIL]
> # 	Expected to capture 10 packets, got 14.
> 
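(Just thinking aloud: if stray traffic is what inflates the count, a
tighter capture match might help, e.g. something along these lines:

	tc filter add dev $h3 ingress protocol ip pref 1000 flower \
		ip_proto icmp dst_ip 192.0.2.18 ip_ttl 50 action pass

The device, address and TTL above are only placeholders, not what the
script actually uses, so please take this just as an illustration.)
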
>> not ok 84 selftests: net/forwarding: tc_flower_l2_miss.sh # exit=1
> 
> Should be fixed with the patches.
> 
>> marvin@...iant:~/linux/kernel/linux_torvalds$ grep -v "^# +" ../kselftest-6.5-rc3-net-forwarding-16.log | grep -A1 FAIL | grep -v -e -- | grep -v OK
>> # TEST: IPv6 (S, G) port group entries configuration tests            [FAIL]
>> # 	"temp" entry has an unpending group timer
> 
> Not sure about this one. What is the output with the following diff?
> 
> diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb.sh b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
> index 8493c3dfc01e..2b2a3b150861 100755
> --- a/tools/testing/selftests/net/forwarding/bridge_mdb.sh
> +++ b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
> @@ -628,6 +628,7 @@ __cfg_test_port_ip_sg()
>          bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
>                  grep -q "0.00"
>          check_fail $? "\"temp\" entry has an unpending group timer"
> +       bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key"
>          bridge mdb del dev br0 port $swp1 $grp_key vid 10
>   
>          # Check error cases.
> 
>> # TEST: IPv4 host entries forwarding tests                            [FAIL]
>> # 	Packet not locally received after adding a host entry
>> # TEST: IPv4 port group "exclude" entries forwarding tests            [FAIL]
>> # 	Packet from valid source not received on H2 after adding entry
>> # TEST: IPv4 port group "include" entries forwarding tests            [FAIL]
>> # 	Packet from valid source not received on H2 after adding entry
>> # TEST: IGMPv3 MODE_IS_INCLUDE tests                                  [FAIL]
>> # 	Source not add to source list
>> # TEST: ctl4: port: ngroups reporting                                 [FAIL]
>> # 	Couldn't add MDB entries
>> # TEST: ctl4: port maxgroups: reporting and treatment of 0            [FAIL]
>> # 	Adding 5 MDB entries failed but should have passed
>> # TEST: ctl4: port maxgroups: configure below ngroups                 [FAIL]
>> # 	dev veth1: Couldn't add MDB entries
>> # TEST: ctl4: port: ngroups reporting                                 [FAIL]
>> # 	Couldn't add MDB entries
>> # TEST: ctl4: port maxgroups: reporting and treatment of 0            [FAIL]
>> # 	Adding 5 MDB entries failed but should have passed
>> # TEST: ctl4: port maxgroups: configure below ngroups                 [FAIL]
>> # 	dev veth1 vid 10: Couldn't add MDB entries
>> # TEST: ctl4: port_vlan: ngroups reporting                            [FAIL]
>> # 	Couldn't add MDB entries
>> # TEST: ctl4: port_vlan: isolation of port and per-VLAN ngroups       [FAIL]
>> # 	Couldn't add MDB entries to VLAN 10
>> # TEST: ctl4: port_vlan maxgroups: reporting and treatment of 0       [FAIL]
>> # 	Adding 5 MDB entries failed but should have passed
>> # TEST: ctl4: port_vlan maxgroups: configure below ngroups            [FAIL]
>> # 	dev veth1 vid 10: Couldn't add MDB entries
>> # TEST: ctl4: port_vlan maxgroups: isolation of port and per-VLAN ngroups   [FAIL]
>> # 	Couldn't add 5 entries
>> # TEST: Vlan mcast_startup_query_interval global option default value   [FAIL]
>> # 	Wrong default mcast_startup_query_interval global vlan option value
>> # TEST: Ip6InHdrErrors                                                [FAIL]
>> # TEST: mirror to gretap: TTL change (skip_hw)                        [FAIL]
>> # 	Expected to capture 10 packets, got 14.
>> # TEST: L2 miss - Multicast (IPv4)                                    [FAIL]
>> # 	Unregistered multicast filter was not hit before adding MDB entry
>> marvin@...iant:~/linux/kernel/linux_torvalds$
>>
>> In case you want to pursue these failures, there is the complete test output log
>> here:
>>
>> https://domac.alu.unizg.hr/~mtodorov/linux/selftests/net-forwarding/kselftest-6.5-rc3-net-forwarding-16.log.xz
>>
>> Thanks again, great work!
>>
>> Kind regards,
>> Mirsad
