Message-ID: <Zoaq358lbnpyHPUq@katalix.com>
Date: Thu, 4 Jul 2024 14:59:59 +0100
From: Tom Parkin <tparkin@...alix.com>
To: Hillf Danton <hdanton@...a.com>
Cc: syzbot <syzbot+c041b4ce3a6dfd1e63e2@...kaller.appspotmail.com>,
	linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
	James Chapman <jchapman@...alix.com>,
	syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [net?] KASAN: slab-use-after-free Write in
 l2tp_session_delete

On Thu, Jul 04, 2024 at 19:23:03 +0800, Hillf Danton wrote:
> On Wed, 3 Jul 2024 13:39:25 +0100 Tom Parkin <tparkin@...alix.com>
> > 
> > The specific UAF that syzbot hits is due to the list node that the
> > list_for_each_safe temporary variable points to being modified while
> > the list_for_each_safe walk is in progress.
> > 
> > This is possible because l2tp_tunnel_closeall drops the spin lock
> > that protects the list midway through the list_for_each_safe loop.
> > This opens the door for another thread to call list_del_init on the
> > node that list_for_each_safe is planning to process next, causing
> > l2tp_tunnel_closeall to iterate on that node forever.
> > 
> Yeah, the next node could race with another thread because of that door.

Exactly, yes.
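
To spell out the shape of the problem, here is a simplified sketch of
the pattern being described (not the exact upstream code; field names
follow the diagram quoted below):

	struct l2tp_session *session;
	struct list_head *pos, *tmp;

	spin_lock_bh(&tunnel->list_lock);
	list_for_each_safe(pos, tmp, &tunnel->session_list) {
		session = list_entry(pos, struct l2tp_session, list);

		/* The lock is dropped here, but list_for_each_safe()
		 * has already cached the next node in 'tmp'.  Another
		 * thread can list_del_init() that cached node, which
		 * re-points it at itself, so the walk can spin on it
		 * and eventually touch freed memory.
		 */
		spin_unlock_bh(&tunnel->list_lock);
		l2tp_session_delete(session);
		spin_lock_bh(&tunnel->list_lock);
	}
	spin_unlock_bh(&tunnel->list_lock);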

> > In the context of l2tp_ppp, this eventually leads to UAF because the
> > session structure itself is freed when the pppol2tp socket is
> > destroyed and the pppol2tp sk_destruct handler unrefs the session
> > structure to zero.
> > 
> > So to avoid the UAF, the list can safely be processed using a loop
> > which takes the first entry in the tunnel session list under spin
> > lock protection, removes that entry, and then drops the lock to
> > call l2tp_session_delete.
> 
> A race still exists after your patch.
> 
> 	cpu1				cpu2
> 	---				---
> 					pppol2tp_release()
> 
> 	spin_lock_bh(&tunnel->list_lock);
> 	while (!list_empty(&tunnel->session_list)) {
> 		session = list_first_entry(&tunnel->session_list,
> 					   struct l2tp_session, list);
> 		list_del_init(&session->list);
> 		spin_unlock_bh(&tunnel->list_lock);
> 
> 					l2tp_session_delete(session);
> 
> 		l2tp_session_delete(session);
> 		spin_lock_bh(&tunnel->list_lock);
> 	}
> 	spin_unlock_bh(&tunnel->list_lock);

I take your point.  Calling l2tp_session_delete() on the same session
twice isn't a problem per se, but if cpu2 manages to destruct the
socket and unref the session to zero before cpu1 progresses, then we
have the same sort of problem.

To be honest, the cleanup code generally could use some TLC (the
dancing around the list lock in _closeall is not ideal), but a minimal
patch to address the UAF makes sense in the short term IMO -- so to
that end, holding a session reference around l2tp_session_delete in
_closeall is probably the least invasive thing to do.
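
Something like this rough, untested sketch (reusing the loop from your
diagram, and assuming the existing l2tp_session_inc_refcount()/
l2tp_session_dec_refcount() helpers):

	struct l2tp_session *session;

	spin_lock_bh(&tunnel->list_lock);
	while (!list_empty(&tunnel->session_list)) {
		session = list_first_entry(&tunnel->session_list,
					   struct l2tp_session, list);

		/* Hold a ref so a concurrent pppol2tp_release() cannot
		 * drop the session refcount to zero and free it while
		 * we are outside the lock.
		 */
		l2tp_session_inc_refcount(session);
		list_del_init(&session->list);
		spin_unlock_bh(&tunnel->list_lock);

		l2tp_session_delete(session);
		l2tp_session_dec_refcount(session);

		spin_lock_bh(&tunnel->list_lock);
	}
	spin_unlock_bh(&tunnel->list_lock);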

