Message-ID: <0dc8c829-23f0-4904-8017-fc98c079f0ab@redhat.com>
Date: Thu, 31 Oct 2024 11:13:22 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Omid Ehtemam-Haghighi <omid.ehtemamhaghighi@...losecurity.com>,
netdev@...r.kernel.org
Cc: adrian.oliver@...losecurity.com, Adrian Oliver <kernel@...iver.ca>,
"David S . Miller" <davem@...emloft.net>, David Ahern <dsahern@...il.com>,
Eric Dumazet <edumazet@...gle.com>, Jakub Kicinski <kuba@...nel.org>,
Shuah Khan <shuah@...nel.org>, Ido Schimmel <idosch@...sch.org>,
Kuniyuki Iwashima <kuniyu@...zon.com>, Simon Horman <horms@...nel.org>,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH net v6] ipv6: Fix soft lockups in fib6_select_path under
high next hop churn
On 10/25/24 09:30, Omid Ehtemam-Haghighi wrote:
> Soft lockups have been observed on a cluster of Linux-based edge routers
> located in a highly dynamic environment. Using the `bird` service, these
> routers continuously update BGP-advertised routes due to frequently
> changing nexthop destinations, while also managing significant IPv6
> traffic. The lockups occur in the `fib6_select_path` function while
> traversing the circular linked list of multipath siblings. The issue
> arises when nodes of the list are unexpectedly deleted concurrently
> on a different core: the removed node's 'next' and 'previous'
> pointers are left pointing back at the node itself and its reference
> count drops to zero, so a traversal that has already reached that
> node can never return to its starting point. The resulting infinite
> loop causes a soft lockup, which triggers a system panic via the
> watchdog timer.
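>
> The failure mode can be pictured with a minimal sketch (an
> illustration only: the structure and function names below are
> invented, and the self-pointing unlink mimics list_del_init()):
>
> 	/* Simplified model of the circular siblings list. */
> 	struct node {
> 		struct node *next;
> 	};
>
> 	/* Reader: walk the ring until we return to the start node. */
> 	static void walk_siblings(struct node *start)
> 	{
> 		struct node *n = start->next;
>
> 		while (n != start)	/* spins forever once n->next == n */
> 			n = n->next;
> 	}
>
> 	/* Concurrent writer: after unlinking, the node points at
> 	 * itself, so a reader currently standing on it can never
> 	 * reach 'start' again. */
> 	static void unlink_node(struct node *n)
> 	{
> 		n->next = n;
> 	}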
>
> Apply RCU primitives in the problematic code sections to resolve the
> issue. Where necessary, update the references to fib6_siblings to
> annotate or use the RCU APIs.
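>
> Roughly, the reader side moves to the RCU list walker and the writer
> side to RCU-aware unlinking with deferred freeing (a sketch of the
> pattern, not the literal diff; variable names are modeled on the
> existing fib6 code, and in the actual lookup path the RCU read-side
> critical section is typically entered higher up the call chain):
>
> 	/* Reader: iterate the siblings with the RCU-safe walker. */
> 	rcu_read_lock();
> 	list_for_each_entry_rcu(sibling, &match->fib6_siblings,
> 				fib6_siblings) {
> 		/* ... weight-based sibling selection ... */
> 	}
> 	rcu_read_unlock();
>
> 	/* Writer: unlink with the RCU list primitive so that readers
> 	 * keep seeing a consistent ring, and defer the actual free
> 	 * until after a grace period. */
> 	list_del_rcu(&rt->fib6_siblings);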
>
> Include a test script that reproduces the issue. The script
> periodically updates the routing table while generating a heavy load
> of outgoing IPv6 traffic through multiple iperf3 clients. It
> reliably triggers the soft lockup within a couple of minutes.
>
> Kernel log:
>
> 0 [ffffbd13003e8d30] machine_kexec at ffffffff8ceaf3eb
> 1 [ffffbd13003e8d90] __crash_kexec at ffffffff8d0120e3
> 2 [ffffbd13003e8e58] panic at ffffffff8cef65d4
> 3 [ffffbd13003e8ed8] watchdog_timer_fn at ffffffff8d05cb03
> 4 [ffffbd13003e8f08] __hrtimer_run_queues at ffffffff8cfec62f
> 5 [ffffbd13003e8f70] hrtimer_interrupt at ffffffff8cfed756
> 6 [ffffbd13003e8fd0] __sysvec_apic_timer_interrupt at ffffffff8cea01af
> 7 [ffffbd13003e8ff0] sysvec_apic_timer_interrupt at ffffffff8df1b83d
> -- <IRQ stack> --
> 8 [ffffbd13003d3708] asm_sysvec_apic_timer_interrupt at ffffffff8e000ecb
> [exception RIP: fib6_select_path+299]
> RIP: ffffffff8ddafe7b RSP: ffffbd13003d37b8 RFLAGS: 00000287
> RAX: ffff975850b43600 RBX: ffff975850b40200 RCX: 0000000000000000
> RDX: 000000003fffffff RSI: 0000000051d383e4 RDI: ffff975850b43618
> RBP: ffffbd13003d3800 R8: 0000000000000000 R9: ffff975850b40200
> R10: 0000000000000000 R11: 0000000000000000 R12: ffffbd13003d3830
> R13: ffff975850b436a8 R14: ffff975850b43600 R15: 0000000000000007
> ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
> 9 [ffffbd13003d3808] ip6_pol_route at ffffffff8ddb030c
> 10 [ffffbd13003d3888] ip6_pol_route_input at ffffffff8ddb068c
> 11 [ffffbd13003d3898] fib6_rule_lookup at ffffffff8ddf02b5
> 12 [ffffbd13003d3928] ip6_route_input at ffffffff8ddb0f47
> 13 [ffffbd13003d3a18] ip6_rcv_finish_core.constprop.0 at ffffffff8dd950d0
> 14 [ffffbd13003d3a30] ip6_list_rcv_finish.constprop.0 at ffffffff8dd96274
> 15 [ffffbd13003d3a98] ip6_sublist_rcv at ffffffff8dd96474
> 16 [ffffbd13003d3af8] ipv6_list_rcv at ffffffff8dd96615
> 17 [ffffbd13003d3b60] __netif_receive_skb_list_core at ffffffff8dc16fec
> 18 [ffffbd13003d3be0] netif_receive_skb_list_internal at ffffffff8dc176b3
> 19 [ffffbd13003d3c50] napi_gro_receive at ffffffff8dc565b9
> 20 [ffffbd13003d3c80] ice_receive_skb at ffffffffc087e4f5 [ice]
> 21 [ffffbd13003d3c90] ice_clean_rx_irq at ffffffffc0881b80 [ice]
> 22 [ffffbd13003d3d20] ice_napi_poll at ffffffffc088232f [ice]
> 23 [ffffbd13003d3d80] __napi_poll at ffffffff8dc18000
> 24 [ffffbd13003d3db8] net_rx_action at ffffffff8dc18581
> 25 [ffffbd13003d3e40] __do_softirq at ffffffff8df352e9
> 26 [ffffbd13003d3eb0] run_ksoftirqd at ffffffff8ceffe47
> 27 [ffffbd13003d3ec0] smpboot_thread_fn at ffffffff8cf36a30
> 28 [ffffbd13003d3ee8] kthread at ffffffff8cf2b39f
> 29 [ffffbd13003d3f28] ret_from_fork at ffffffff8ce5fa64
> 30 [ffffbd13003d3f50] ret_from_fork_asm at ffffffff8ce03cbb
>
> Fixes: 66f5d6ce53e6 ("ipv6: replace rwlock with rcu and spinlock in fib6_table")
> Reported-by: Adrian Oliver <kernel@...iver.ca>
> Signed-off-by: Omid Ehtemam-Haghighi <omid.ehtemamhaghighi@...losecurity.com>
> Cc: David S. Miller <davem@...emloft.net>
> Cc: David Ahern <dsahern@...il.com>
> Cc: Eric Dumazet <edumazet@...gle.com>
> Cc: Jakub Kicinski <kuba@...nel.org>
> Cc: Paolo Abeni <pabeni@...hat.com>
> Cc: Shuah Khan <shuah@...nel.org>
> Cc: Ido Schimmel <idosch@...sch.org>
> Cc: Kuniyuki Iwashima <kuniyu@...zon.com>
> Cc: Simon Horman <horms@...nel.org>
> Cc: netdev@...r.kernel.org
> Cc: linux-kselftest@...r.kernel.org
> Cc: linux-kernel@...r.kernel.org
Given that the issue is long-standing and the fix is somewhat invasive,
I suggest steering this patch towards net-next.
Would that be OK for you?
Thanks,
Paolo