Message-ID: <e8039b96-1765-4464-b534-d6d1385b46eb@kernel.org>
Date: Tue, 25 Feb 2025 18:52:45 +0100
From: Matthieu Baerts <matttbe@...nel.org>
To: Krister Johansen <kjlx@...pleofstupid.com>,
Mat Martineau <martineau@...nel.org>
Cc: Geliang Tang <geliang@...nel.org>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, netdev@...r.kernel.org,
mptcp@...ts.linux.dev
Subject: Re: [PATCH v2 mptcp] mptcp: fix 'scheduling while atomic' in
 mptcp_pm_nl_append_new_local_addr

Hi Krister,

On 25/02/2025 00:20, Krister Johansen wrote:
> If multiple connection requests attempt to create an implicit mptcp
> endpoint in parallel, more than one caller may end up in
> mptcp_pm_nl_append_new_local_addr because none found the address in
> local_addr_list during their call to mptcp_pm_nl_get_local_id. In this
> case, the concurrent new_local_addr calls may delete the address entry
> created by the previous caller. These deletes use synchronize_rcu, but
> this is not permitted in some of the contexts where this function may be
> called. During packet recv, the caller may be in an RCU read-side
> critical section and have preemption disabled.

Thank you for this patch, and for having taken the time to analyse the
issue!
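
To summarise the problem for the archives, the pattern looks roughly
like this -- a simplified sketch, not the actual code from
net/mptcp/pm_netlink.c (the list lookup and ID assignment are elided):

  static int append_new_local_addr(struct pm_nl_pernet *pernet,
                                   struct mptcp_pm_addr_entry *entry)
  {
          struct mptcp_pm_addr_entry *del_entry = NULL;

          spin_lock_bh(&pernet->lock);
          /* ... if an implicit entry with the same address already
           * exists, point del_entry at it so the new entry replaces
           * it (lookup elided in this sketch) ...
           */
          if (del_entry)
                  list_del_rcu(&del_entry->list);
          list_add_tail_rcu(&entry->list, &pernet->local_addr_list);
          spin_unlock_bh(&pernet->lock);

          if (del_entry) {
                  /* BUG: synchronize_rcu() blocks until a grace
                   * period has elapsed, so it must not run in atomic
                   * context. Here the caller can be in softirq, in an
                   * RCU read-side critical section with preemption
                   * disabled -- hence "scheduling while atomic".
                   */
                  synchronize_rcu();
                  kfree(del_entry);
          }

          return entry->addr.id;
  }
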
> An example stack:
>
> BUG: scheduling while atomic: swapper/2/0/0x00000302
>
> Call Trace:
> <IRQ>
> dump_stack_lvl+0x76/0xa0
> dump_stack+0x10/0x20
> __schedule_bug+0x64/0x80
> schedule_debug.constprop.0+0xdb/0x130
> __schedule+0x69/0x6a0
> schedule+0x33/0x110
> schedule_timeout+0x157/0x170
> wait_for_completion+0x88/0x150
> __wait_rcu_gp+0x150/0x160
> synchronize_rcu+0x12d/0x140
> mptcp_pm_nl_append_new_local_addr+0x1bd/0x280
> mptcp_pm_nl_get_local_id+0x121/0x160
> mptcp_pm_get_local_id+0x9d/0xe0
> subflow_check_req+0x1a8/0x460
> subflow_v4_route_req+0xb5/0x110
> tcp_conn_request+0x3a4/0xd00
> subflow_v4_conn_request+0x42/0xa0
> tcp_rcv_state_process+0x1e3/0x7e0
> tcp_v4_do_rcv+0xd3/0x2a0
> tcp_v4_rcv+0xbb8/0xbf0
> ip_protocol_deliver_rcu+0x3c/0x210
> ip_local_deliver_finish+0x77/0xa0
> ip_local_deliver+0x6e/0x120
> ip_sublist_rcv_finish+0x6f/0x80
> ip_sublist_rcv+0x178/0x230
> ip_list_rcv+0x102/0x140
> __netif_receive_skb_list_core+0x22d/0x250
> netif_receive_skb_list_internal+0x1a3/0x2d0
> napi_complete_done+0x74/0x1c0
> igb_poll+0x6c/0xe0 [igb]
> __napi_poll+0x30/0x200
> net_rx_action+0x181/0x2e0
> handle_softirqs+0xd8/0x340
> __irq_exit_rcu+0xd9/0x100
> irq_exit_rcu+0xe/0x20
> common_interrupt+0xa4/0xb0
> </IRQ>

Detail: if possible, next time, do not hesitate to resolve the
addresses in the stack trace, e.g. using: ./scripts/decode_stacktrace.sh
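
For example, something like this should work (here "trace.txt" stands
for a file containing the raw trace, and "vmlinux" must be the
matching build with debug info):

  ./scripts/decode_stacktrace.sh vmlinux < trace.txt
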
> This problem seems particularly prevalent if the user advertises an
> endpoint that has a different external vs internal address. In the case
> where the external address is advertised and multiple connections
> already exist, multiple subflow SYNs arrive in parallel which tends to
> trigger the race during creation of the first local_addr_list entries
> which have the internal address instead.
>
> Fix by skipping the replacement of an existing implicit local address if
> called via mptcp_pm_nl_get_local_id.
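
In other words, on that path the helper keeps the existing implicit
entry and reuses its ID instead of replacing it. Roughly -- with
illustrative names; lookup_conflicting_entry() and the 'replace' flag
below are mine, not the patch's:

  static int append_new_local_addr(struct pm_nl_pernet *pernet,
                                   struct mptcp_pm_addr_entry *entry,
                                   bool replace)
  {
          struct mptcp_pm_addr_entry *cur;
          int ret;

          spin_lock_bh(&pernet->lock);
          cur = lookup_conflicting_entry(pernet, entry); /* elided */
          if (cur && !replace) {
                  /* Reached via mptcp_pm_nl_get_local_id() from the
                   * packet-recv path: keep the existing implicit
                   * entry and reuse its id, so no list_del_rcu() and
                   * no synchronize_rcu() in atomic context.
                   */
                  ret = cur->addr.id;
          } else {
                  /* ... replace/append as before; the
                   * synchronize_rcu() + free of a removed implicit
                   * entry only happens on paths where the caller may
                   * sleep ...
                   */
                  ret = entry->addr.id;
          }
          spin_unlock_bh(&pernet->lock);

          return ret;
  }
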
The v2 looks good to me:

Reviewed-by: Matthieu Baerts (NGI0) <matttbe@...nel.org>

I'm going to apply it in our MPTCP tree, but this patch can also be
applied directly in the net tree, not to delay it by one week, if
preferred. If not, I can re-send it later on.

Cheers,
Matt
--
Sponsored by the NGI0 Core fund.