Message-ID: <9b0312c8-dc96-494e-86f9-69ee45369029@blackwall.org>
Date: Fri, 7 Mar 2025 10:33:57 +0200
From: Nikolay Aleksandrov <razor@...ckwall.org>
To: Hangbin Liu <liuhangbin@...il.com>
Cc: netdev@...r.kernel.org, Jay Vosburgh <jv@...sburgh.net>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
Simon Horman <horms@...nel.org>, Shuah Khan <shuah@...nel.org>,
Tariq Toukan <tariqt@...dia.com>, Jianbo Liu <jianbol@...dia.com>,
Jarod Wilson <jarod@...hat.com>,
Steffen Klassert <steffen.klassert@...unet.com>,
Cosmin Ratiu <cratiu@...dia.com>, Petr Machata <petrm@...dia.com>,
linux-kselftest@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCHv5 net 1/3] bonding: fix calling sleeping function in spin
lock and some race conditions
On 3/7/25 10:11, Hangbin Liu wrote:
> Hi Nikolay,
> On Fri, Mar 07, 2025 at 09:42:49AM +0200, Nikolay Aleksandrov wrote:
>> On 3/7/25 05:19, Hangbin Liu wrote:
>>> The fixed commit placed mutex_lock() inside spin_lock_bh(), which triggers
>>> a warning:
>>>
>>> BUG: sleeping function called from invalid context at...
>>>
>>> Fix this by moving the IPsec deletion operation to bond_ipsec_free_sa(),
>>> which is not called under spin_lock_bh().
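>>>
>>> For illustration only (simplified, not the exact bond_main.c code), the
>>> invalid pattern looks roughly like this:
>>>
>>>     spin_lock_bh(&x->lock);          /* atomic context, BHs disabled */
>>>     mutex_lock(&bond->ipsec_lock);   /* may sleep -> "BUG: sleeping function..." */
>>>     list_del(&ipsec->list);
>>>     mutex_unlock(&bond->ipsec_lock);
>>>     spin_unlock_bh(&x->lock);
>>>
>>> Moving the mutex-protected list removal into bond_ipsec_free_sa(), which the
>>> xfrm core calls without x->lock held, keeps it out of the atomic section.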
>>>
>>> Additionally, there are also some race conditions, as bond_ipsec_del_sa_all()
>>> and __xfrm_state_delete() could run in parallel without any lock.
>>> e.g.
>>>
>>> bond_ipsec_del_sa_all()            __xfrm_state_delete()
>>> - .xdo_dev_state_delete            - bond_ipsec_del_sa()
>>> - .xdo_dev_state_free                - .xdo_dev_state_delete()
>>>                                      - bond_ipsec_free_sa()
>>> bond active_slave changes             - .xdo_dev_state_free()
>>>
>>> bond_ipsec_add_sa_all()
>>> - ipsec->xs->xso.real_dev = real_dev;
>>> - xdo_dev_state_add
>>>
>>> To fix this, let's take xs->lock during bond_ipsec_del_sa_all(), and remove
>>> the entry from the IPsec list when the XFRM state is DEAD, which prevents
>>> xdo_dev_state_free() from being triggered again in bond_ipsec_free_sa().
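>>>
>>> The locking part alone would look roughly like the sketch below (illustration
>>> only; the xso.real_dev/xfrmdev_ops checks and the DEAD-state list cleanup are
>>> omitted, and the actual hunk is snipped further down):
>>>
>>>     list_for_each_entry(ipsec, &bond->ipsec_list, list) {
>>>             /* checks on xso.real_dev and xfrmdev_ops omitted here */
>>>             spin_lock_bh(&ipsec->xs->lock);
>>>             /* x->lock is also held across __xfrm_state_delete(), so this
>>>              * delete can no longer run concurrently with bond_ipsec_del_sa()
>>>              */
>>>             real_dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs);
>>>             spin_unlock_bh(&ipsec->xs->lock);
>>>     }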
>>>
>>> In bond_ipsec_add_sa(), if .xdo_dev_state_add() fails, xso.real_dev is left
>>> set without being cleared, which will cause trouble if __xfrm_state_delete()
>>> is called at the same time. Reset xso.real_dev to NULL if the state add fails.
>>>
>>> Despite the above fixes, there are still races in bond_ipsec_add_sa()
>>> and bond_ipsec_add_sa_all(): __xfrm_state_delete() may be called immediately
>>> after we set xso.real_dev and before .xdo_dev_state_add() has finished,
>>> e.g.:
>>>
>>> ipsec->xs->xso.real_dev = real_dev;
>>>                                      __xfrm_state_delete
>>>                                        - bond_ipsec_del_sa()
>>>                                          - .xdo_dev_state_delete()
>>>                                        - bond_ipsec_free_sa()
>>>                                          - .xdo_dev_state_free()
>>> .xdo_dev_state_add()
>>>
>>> There is no good solution for this yet, so I just added a FIXME note here
>>> and hope we can fix it in the future.
>>>
>>> Fixes: 2aeeef906d5a ("bonding: change ipsec_lock from spin lock to mutex")
>>> Reported-by: Jakub Kicinski <kuba@...nel.org>
>>> Closes: https://lore.kernel.org/netdev/20241212062734.182a0164@kernel.org
>>> Suggested-by: Cosmin Ratiu <cratiu@...dia.com>
>>> Signed-off-by: Hangbin Liu <liuhangbin@...il.com>
>>> ---
>>> drivers/net/bonding/bond_main.c | 69 ++++++++++++++++++++++++---------
>>> 1 file changed, 51 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>>> index e45bba240cbc..dd3d0d41d98f 100644
>>> --- a/drivers/net/bonding/bond_main.c
>>> +++ b/drivers/net/bonding/bond_main.c
>>> @@ -506,6 +506,7 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs,
>>>  		list_add(&ipsec->list, &bond->ipsec_list);
>>>  		mutex_unlock(&bond->ipsec_lock);
>>>  	} else {
>>> +		xs->xso.real_dev = NULL;
>>>  		kfree(ipsec);
>>>  	}
>>>  out:
>>> @@ -541,7 +542,15 @@ static void bond_ipsec_add_sa_all(struct bonding *bond)
>>>  		if (ipsec->xs->xso.real_dev == real_dev)
>>>  			continue;
>>>
>>> +		/* Skip dead xfrm states, they'll be freed later. */
>>> +		if (ipsec->xs->km.state == XFRM_STATE_DEAD)
>>> +			continue;
>>
>> As we commented earlier, reading this state without x->lock is wrong.
>
> But even if we add the lock, like
>
> 	spin_lock_bh(&ipsec->xs->lock);
> 	if (ipsec->xs->km.state == XFRM_STATE_DEAD) {
> 		spin_unlock_bh(&ipsec->xs->lock);
> 		continue;
> 	}
>
> We may still hit the race condition, as the following note says.
> So I just left it as is, but I can add the spin lock if you insist.
>
I don't insist at all; I just pointed out that this is buggy and the value doesn't
make sense when used like that. Adding more bugs to the existing code wouldn't make it better.
>>> +
>>>  		ipsec->xs->xso.real_dev = real_dev;
>>> +		/* FIXME: there is a race that before .xdo_dev_state_add()
>>> +		 * is called, the __xfrm_state_delete() is called in parallel,
>>> +		 * which will call .xdo_dev_state_delete() and xdo_dev_state_free()
>>> +		 */
>>>  		if (real_dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) {
>>>  			slave_warn(bond_dev, real_dev, "%s: failed to add SA\n", __func__);
>>>  			ipsec->xs->xso.real_dev = NULL;
>> [snip]
>>
>> TBH, keeping buggy code with a comment doesn't sound good to me. I'd rather remove this
>> support than tell people "good luck, it might crash". It's better to be safe until a
>> correct design that takes care of these issues is in place.
>
> I agree it's not a good experience to let users use an unstable feature.
> But this is a race condition, and we don't have a good fix for it yet.
>
> On the other hand, I don't think we can remove a feature people are using, can we?
> What I can do is try to fix the issues as best I can.
>
I do appreciate the hard work you've been doing on this, don't get me wrong, but this is
not really uapi, it's an optimization. The path will become slower as it won't be offloaded,
but it will still work and will be stable until a proper fix or new design comes in.
Are you suggesting we knowingly leave in place, with just a comment, a race condition
that might lead to a number of problems?
IMO that is not ok, but ultimately it's up to the maintainers to decide if they can live
with it. :)
> By the way, I started this patch because my patch 2/3 is blocked by the
> selftest results from patch 3/3...
>
> Thanks
> Hangbin