Message-Id: <20240704112303.3092-1-hdanton@sina.com>
Date: Thu, 4 Jul 2024 19:23:03 +0800
From: Hillf Danton <hdanton@...a.com>
To: Tom Parkin <tparkin@...alix.com>
Cc: syzbot <syzbot+c041b4ce3a6dfd1e63e2@...kaller.appspotmail.com>,
linux-kernel@...r.kernel.org,
netdev@...r.kernel.org,
James Chapman <jchapman@...alix.com>,
syzkaller-bugs@...glegroups.com
Subject: Re: [syzbot] [net?] KASAN: slab-use-after-free Write in l2tp_session_delete
On Wed, 3 Jul 2024 13:39:25 +0100 Tom Parkin <tparkin@...alix.com>
>
> The specific UAF that syzbot hits is due to the list node that the
> list_for_each_safe temporary variable points to being modified while
> the list_for_each_safe walk is in progress.
>
> This is possible due to l2tp_tunnel_closeall dropping the spin lock
> that protects the list midway through the list_for_each_safe loop.
> This opens the door for another thread to call list_del_init on the
> node that list_for_each_safe is planning to process next, causing
> l2tp_tunnel_closeall to repeatedly iterate on that node forever.
>
Yeah, the next node can be raced on by another thread through that window.
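
For reference, a minimal sketch of the pattern being described (simplified,
using list_for_each_entry_safe and the names from the code quoted below;
the function name is illustrative, not the exact upstream
l2tp_tunnel_closeall()):

#include <linux/list.h>
#include <linux/spinlock.h>
#include "l2tp_core.h"	/* struct l2tp_tunnel, struct l2tp_session */

static void closeall_sketch(struct l2tp_tunnel *tunnel)
{
	struct l2tp_session *session, *tmp;

	spin_lock_bh(&tunnel->list_lock);
	list_for_each_entry_safe(session, tmp, &tunnel->session_list, list) {
		list_del_init(&session->list);
		spin_unlock_bh(&tunnel->list_lock);
		/*
		 * Window: with the lock dropped, another thread can
		 * list_del_init() the node cached in 'tmp'.  That leaves
		 * tmp->list.next pointing at itself, so the walk never
		 * advances and keeps touching a node that may already
		 * have been freed.
		 */
		l2tp_session_delete(session);
		spin_lock_bh(&tunnel->list_lock);
	}
	spin_unlock_bh(&tunnel->list_lock);
}
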
> In the context of l2tp_ppp, this eventually leads to UAF because the
> session structure itself is freed when the pppol2tp socket is
> destroyed and the pppol2tp sk_destruct handler drops the session
> structure's refcount to zero.
>
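To illustrate the refcount drop just described, here is a rough sketch of
a pppol2tp sk_destruct handler; the function name is illustrative, this is
not verbatim upstream code, and l2tp_session_dec_refcount() is assumed as
the refcount helper:

#include <net/sock.h>
#include "l2tp_core.h"	/* struct l2tp_session, l2tp_session_dec_refcount() */

/* Illustrative sketch only, not the exact pppol2tp code. */
static void pppol2tp_destruct_sketch(struct sock *sk)
{
	struct l2tp_session *session = sk->sk_user_data;

	if (session) {
		sk->sk_user_data = NULL;
		/*
		 * May drop the final reference, freeing the session while
		 * another CPU still holds a stale pointer to it.
		 */
		l2tp_session_dec_refcount(session);
	}
}
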
> So to avoid the UAF, the list can safely be processed using a loop
> which takes the first entry in the tunnel session list under spin
> lock protection, removes that entry, then drops the lock to call
> l2tp_session_delete.
A race still exists after your patch, with l2tp_session_delete() ending up
invoked twice for the same session:
cpu1                                    cpu2
---                                     ---
                                        pppol2tp_release()
spin_lock_bh(&tunnel->list_lock);
while (!list_empty(&tunnel->session_list)) {
        session = list_first_entry(&tunnel->session_list,
                                   struct l2tp_session, list);
        list_del_init(&session->list);
        spin_unlock_bh(&tunnel->list_lock);
        l2tp_session_delete(session);
                                        l2tp_session_delete(session);
        spin_lock_bh(&tunnel->list_lock);
}
spin_unlock_bh(&tunnel->list_lock);