Message-ID: <cbf4d966-172e-4cc3-b7be-d2b59ad31675@linux.dev>
Date: Wed, 19 Feb 2025 17:26:21 +0800
From: Chengming Zhou <chengming.zhou@...ux.dev>
To: Josh Don <joshdon@...gle.com>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Daniel Bristot de Oliveira
<bristot@...hat.com>, Valentin Schneider <vschneid@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched: fix potential use-after-free with cfs bandwidth
On 2025/2/11 03:51, Josh Don wrote:
> We remove the cfs_rq throttled_csd_list entry *before* doing the
> unthrottle. The problem with that is that destroy_bandwidth() does a
> lockless scan of the system for any non-empty CSD lists. As a result,
> it is possible that destroy_bandwidth() returns while we still have a
> cfs_rq from the task group about to be unthrottled.
>
> For full correctness, we should avoid removal from the list until after
> we're done unthrottling in __cfsb_csd_unthrottle().
>
> For consistency, we make the same change to distribute_cfs_runtime(),
> even though this should already be safe due to destroy_bandwidth()
> cancelling the bandwidth hrtimers.
>
> Signed-off-by: Josh Don <joshdon@...gle.com>
Good catch!

Reviewed-by: Chengming Zhou <chengming.zhou@...ux.dev>

BTW, I sketched the cfs_rq UAF scenario below:
CPU0                                    CPU1

__cfsb_csd_unthrottle()
  rq lock
  for each cfs_rq on list
    list_del_init from list
                                        unregister_fair_sched_group()
                                          destroy_cfs_bandwidth()
                                            if (list_empty(&rq->cfsb_csd_list))
                                              continue; // skip rq0
    if (cfs_rq->on_list) // maybe false
      unthrottle_cfs_rq()
        add cfs_rq to list
  rq unlock
                                        cfs_rq freed after RCU grace period

cfs_rq UAF!
Thanks!
> ---
> kernel/sched/fair.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 34fe6e9490c2..78f542ab03cf 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5917,10 +5917,10 @@ static void __cfsb_csd_unthrottle(void *arg)
>
> list_for_each_entry_safe(cursor, tmp, &rq->cfsb_csd_list,
> throttled_csd_list) {
> - list_del_init(&cursor->throttled_csd_list);
> -
> if (cfs_rq_throttled(cursor))
> unthrottle_cfs_rq(cursor);
> +
> + list_del_init(&cursor->throttled_csd_list);
> }
>
> rcu_read_unlock();
> @@ -6034,11 +6034,11 @@ static bool distribute_cfs_runtime(struct cfs_bandwidth *cfs_b)
>
> rq_lock_irqsave(rq, &rf);
>
> - list_del_init(&cfs_rq->throttled_csd_list);
> -
> if (cfs_rq_throttled(cfs_rq))
> unthrottle_cfs_rq(cfs_rq);
>
> + list_del_init(&cfs_rq->throttled_csd_list);
> +
> rq_unlock_irqrestore(rq, &rf);
> }
> SCHED_WARN_ON(!list_empty(&local_unthrottle));