Message-ID: <aUGP_kggEz_5-RAc@horms.kernel.org>
Date: Tue, 16 Dec 2025 16:59:42 +0000
From: Simon Horman <horms@...nel.org>
To: Minseong Kim <ii4gsp@...il.com>
Cc: netdev@...r.kernel.org, "David S . Miller" <davem@...emloft.net>,
Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
stable@...r.kernel.org
Subject: Re: [PATCH net v4] atm: mpoa: Fix UAF on qos_head list in procfs
On Tue, Dec 16, 2025 at 09:09:10PM +0900, Minseong Kim wrote:
> /proc/net/atm/mpc read-side iterates qos_head without synchronization,
> while write-side can delete and free entries concurrently, leading to
> use-after-free.
>
> Protect qos_head with a mutex and ensure procfs search+delete operations
> are serialized under the same lock.
>
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Cc: stable@...r.kernel.org
> Signed-off-by: Minseong Kim <ii4gsp@...il.com>
...
> @@ -1521,8 +1525,11 @@ static void __exit atm_mpoa_cleanup(void)
> mpc = tmp;
> }
>
> + mutex_lock(&qos_mutex);
> qos = qos_head;
> qos_head = NULL;
> + mutex_unlock(&qos_mutex);
I don't think this is necessary.

mpc_proc_clean() is called earlier in atm_mpoa_cleanup(), so no accesses
via the procfs callbacks can be occurring at this point, and there is no
need to guard against them here.
Conversely, the following call chain accesses qos_head and uses an entry
if one is found there. But there doesn't seem to be any protection
against concurrent access (or removal) from procfs.

MPOA_res_reply_rcvd()->check_qos_and_open_shortcut()->atm_mpoa_search_qos()
In this case I'm concerned that extending the current locking approach may
result in poor behaviour if procfs holds qos_mutex for an extended period.
I think there is also a concurrency issue with access to qos_head in
the following call chain:

mpc_show()->atm_mpoa_disp_qos()
...
--
pw-bot: changes-requested