Message-ID: <6fdf2b6c-2c92-4b74-b746-6c68ed7cdf59@redhat.com>
Date: Thu, 22 Jan 2026 15:49:07 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller"
<davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>
Cc: Matthieu Baerts <matttbe@...nel.org>, Mat Martineau
<martineau@...nel.org>, Geliang Tang <geliang.tang@...ux.dev>,
Florian Westphal <fw@...len.de>, netdev@...r.kernel.org,
eric.dumazet@...il.com,
syzbot+5498a510ff9de39d37da@...kaller.appspotmail.com,
Eulgyu Kim <eulgyukim@....ac.kr>, Geliang Tang <geliang@...nel.org>
Subject: Re: [PATCH net] mptcp: fix race in mptcp_pm_nl_flush_addrs_doit()
On 1/22/26 2:54 PM, Eric Dumazet wrote:
> On Thu, Jan 22, 2026 at 2:13 PM Eric Dumazet <edumazet@...gle.com> wrote:
>>
>> syzbot and Eulgyu Kim reported crashes in mptcp_pm_nl_get_local_id()
>> and/or mptcp_pm_nl_is_backup().
>>
>> The root cause is the list_splice_init() in mptcp_pm_nl_flush_addrs_doit(),
>> which is not RCU ready: it rewrites the list pointers while lockless
>> readers may still be traversing the entries.
>>
>> list_splice_init_rcu() cannot be called here while holding the
>> pernet->lock spinlock, because it must invoke a sleeping
>> synchronization callback (typically synchronize_rcu()).
>>
>> Many thanks to Eulgyu Kim for providing a repro and testing our patches.
>>
>> Fixes: 141694df6573 ("mptcp: remove address when netlink flushes addrs")
>> Signed-off-by: Eric Dumazet <edumazet@...gle.com>
>> Reported-by: syzbot+5498a510ff9de39d37da@...kaller.appspotmail.com
>> Closes: https://lore.kernel.org/all/6970a46d.a00a0220.3ad28e.5cf0.GAE@google.com/T/
>> Reported-by: Eulgyu Kim <eulgyukim@....ac.kr>
>> Cc: Geliang Tang <geliang@...nel.org>
>> ---
>> net/mptcp/pm_kernel.c | 14 +++++++++++---
>> 1 file changed, 11 insertions(+), 3 deletions(-)
>>
>> diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
>> index 57570a44e4185370f531047fe97ce9f9fbd1480b..1a97d0eafa2b0c9e4275b90d4a576f837dc286a9 100644
>> --- a/net/mptcp/pm_kernel.c
>> +++ b/net/mptcp/pm_kernel.c
>> @@ -1294,16 +1294,24 @@ static void __reset_counters(struct pm_nl_pernet *pernet)
>> int mptcp_pm_nl_flush_addrs_doit(struct sk_buff *skb, struct genl_info *info)
>> {
>> struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
>> - LIST_HEAD(free_list);
>> + struct list_head free_list;
>>
>> spin_lock_bh(&pernet->lock);
>> - list_splice_init(&pernet->endp_list, &free_list);
>> +
>> + free_list = pernet->endp_list;
>> + INIT_LIST_HEAD_RCU(&pernet->endp_list);
>> +
>> __reset_counters(pernet);
>> pernet->next_id = 1;
>> bitmap_zero(pernet->id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
>> spin_unlock_bh(&pernet->lock);
>> - mptcp_nl_flush_addrs_list(sock_net(skb->sk), &free_list);
>> synchronize_rcu();
>
>
>> +
>> + /* Adjust the pointers to free_list instead of pernet->endp_list */
>> + free_list.prev->next = &free_list;
>> + free_list.next->prev = &free_list;
>
>
> We have to test if the list was empty, and avoid the synchronize_rcu
> in this case.
>
> I will squash in V2, unless someone complains.
>
> diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
> index 1a97d0eafa2b0c9e4275b90d4a576f837dc286a9..af23be6658ded4860133bb9495c7738014815d28 100644
> --- a/net/mptcp/pm_kernel.c
> +++ b/net/mptcp/pm_kernel.c
> @@ -1305,6 +1305,10 @@ int mptcp_pm_nl_flush_addrs_doit(struct sk_buff *skb, struct genl_info *info)
> pernet->next_id = 1;
> bitmap_zero(pernet->id_bitmap, MPTCP_PM_MAX_ADDR_ID + 1);
> spin_unlock_bh(&pernet->lock);
> +
> + if (free_list.next == &pernet->endp_list)
> + return 0;
> +
> synchronize_rcu();
>
> /* Adjust the pointers to free_list instead of pernet->endp_list */
>
LGTM, thanks Eric!
Side note: I think the MPTCP maintainer is very busy elsewhere, so this
could go directly via the net tree.
/P