Message-ID: <Z73CBzgTVucuOMMb@fedora>
Date: Tue, 25 Feb 2025 13:13:43 +0000
From: Hangbin Liu <liuhangbin@...il.com>
To: Nikolay Aleksandrov <razor@...ckwall.org>
Cc: netdev@...r.kernel.org, Jay Vosburgh <jv@...sburgh.net>,
	Andrew Lunn <andrew+netdev@...n.ch>,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Simon Horman <horms@...nel.org>, Shuah Khan <shuah@...nel.org>,
	Tariq Toukan <tariqt@...dia.com>, Jianbo Liu <jianbol@...dia.com>,
	Jarod Wilson <jarod@...hat.com>,
	Steffen Klassert <steffen.klassert@...unet.com>,
	Cosmin Ratiu <cratiu@...dia.com>, linux-kselftest@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCHv2 net 1/3] bonding: move mutex lock to a work queue for
 XFRM GC tasks

On Tue, Feb 25, 2025 at 01:05:24PM +0200, Nikolay Aleksandrov wrote:
> > @@ -592,15 +611,17 @@ static void bond_ipsec_del_sa(struct xfrm_state *xs)
> >  	real_dev->xfrmdev_ops->xdo_dev_state_delete(xs);
> >  out:
> >  	netdev_put(real_dev, &tracker);
> > -	mutex_lock(&bond->ipsec_lock);
> > -	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
> > -		if (ipsec->xs == xs) {
> > -			list_del(&ipsec->list);
> > -			kfree(ipsec);
> > -			break;
> > -		}
> > -	}
> > -	mutex_unlock(&bond->ipsec_lock);
> > +
> > +	xfrm_work = kmalloc(sizeof(*xfrm_work), GFP_ATOMIC);
> > +	if (!xfrm_work)
> > +		return;
> > +
> 
> What happens if this allocation fails? I think you'll leak memory and
> potentially call the xdo_dev callbacks for this xs again because it's
> still in the list. Also this xfrm_work memory doesn't get freed anywhere, so
> you're leaking it as well.

Yes, I thought about this too simplistically and forgot to free the memory.
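Something like this for the GC work function should plug that leak:
drop the state reference and free the work item at the end (untested
sketch; I'm assuming the container struct from my patch looks like
struct bond_xfrm_work { struct work_struct work; struct bonding *bond;
struct xfrm_state *xs; }):

static void bond_xfrm_state_gc_work(struct work_struct *work)
{
	struct bond_xfrm_work *xfrm_work =
		container_of(work, struct bond_xfrm_work, work);
	struct bonding *bond = xfrm_work->bond;
	struct xfrm_state *xs = xfrm_work->xs;
	struct bond_ipsec *ipsec;

	mutex_lock(&bond->ipsec_lock);
	list_for_each_entry(ipsec, &bond->ipsec_list, list) {
		if (ipsec->xs == xs) {
			list_del(&ipsec->list);
			kfree(ipsec);
			break;
		}
	}
	mutex_unlock(&bond->ipsec_lock);

	xfrm_state_put(xs);	/* paired with xfrm_state_hold() in del_sa */
	kfree(xfrm_work);	/* the work item was never freed in v2 */
}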
> 
> Perhaps you can do this allocation in add_sa, it seems you can sleep
> there and potentially return an error if it fails, so this can never
> fail later. You'll have to be careful with the freeing dance though.

Hmm, if we do the allocation in add_sa, how do we get the xfrm_work
back in del_sa? Adding the xfrm_work to another list would mean we
need to sleep again to find it in del_sa.
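I.e. even if the work were embedded in the per-SA entry so that
nothing has to be allocated in del_sa (hypothetical sketch, gc_work
is not a real member today):

struct bond_ipsec {
	struct list_head list;
	struct xfrm_state *xs;
	struct work_struct gc_work;	/* hypothetical: set up in add_sa */
};

del_sa would still have to walk some list under a lock to map the xs
back to its entry before it could queue that work.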

> Alternatively, make the work a part of struct bond so it doesn't need
> memory management, but then you need a mechanism to queue these items (e.g.
> a separate list with a spinlock) and would have more complexity with freeing
> in parallel.

I used a delayed work queue in bond for my draft patch. As you said,
it needs another list to queue the xs. And during the GC work, we need
to take a spinlock again to get the xs out...
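For reference, the draft's GC work looked roughly like this (untested
sketch; struct bond_xs_entry and the xfrm_gc_* members of struct
bonding are hypothetical, and the wrapper entries still have to be
allocated somewhere, e.g. in add_sa):

struct bond_xs_entry {			/* hypothetical wrapper */
	struct list_head list;
	struct xfrm_state *xs;
};

static void bond_xfrm_gc_work(struct work_struct *work)
{
	struct bonding *bond =
		container_of(work, struct bonding, xfrm_gc_work);
	struct bond_xs_entry *e, *tmp;
	struct bond_ipsec *ipsec;
	LIST_HEAD(gc_list);

	/* the extra spinlock, just to pull the queued xs out */
	spin_lock_bh(&bond->xfrm_gc_lock);
	list_splice_init(&bond->xfrm_gc_list, &gc_list);
	spin_unlock_bh(&bond->xfrm_gc_lock);

	mutex_lock(&bond->ipsec_lock);
	list_for_each_entry_safe(e, tmp, &gc_list, list) {
		list_for_each_entry(ipsec, &bond->ipsec_list, list) {
			if (ipsec->xs == e->xs) {
				list_del(&ipsec->list);
				kfree(ipsec);
				break;
			}
		}
		xfrm_state_put(e->xs);
		kfree(e);
	}
	mutex_unlock(&bond->ipsec_lock);
}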

> 
> > +	INIT_WORK(&xfrm_work->work, bond_xfrm_state_gc_work);
> > +	xfrm_work->bond = bond;
> > +	xfrm_work->xs = xs;
> > +	xfrm_state_hold(xs);
> > +
> > +	queue_work(bond->wq, &xfrm_work->work);
> 
> Note that nothing waits for this work anywhere and .ndo_uninit runs before
> bond's .priv_destructor which means ipsec_lock will be destroyed and will be
> used afterwards when destroying bond->wq from the destructor if there were
> any queued works.

Do you mean we need to register the work queue in bond_init and cancel
it in bond_work_cancel_all()?
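Something like draining it in bond_uninit() before the mutex is
destroyed (untested):

	flush_workqueue(bond->wq);	/* wait for any queued GC work */
	mutex_destroy(&bond->ipsec_lock);

so nothing can touch ipsec_lock by the time .priv_destructor destroys
bond->wq?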

Thanks
Hangbin
