Message-ID: <20251022090038.GA88368@j38d01266.eu95sqa>
Date: Wed, 22 Oct 2025 17:00:38 +0800
From: Peng Wang <peng_wang@...ux.alibaba.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Peter Zijlstra <peterz@...radead.org>, mingo@...hat.com,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
vschneid@...hat.com, vdavydov.dev@...il.com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/fair: Clear ->h_load_next after hierarchical load

On Wed, Oct 15, 2025 at 03:14:37PM +0200, Vincent Guittot wrote:
> On Wed, 15 Oct 2025 at 14:44, Peter Zijlstra <peterz@...radead.org> wrote:
> >
> > On Wed, Oct 15, 2025 at 08:19:50PM +0800, Peng Wang wrote:
> >
> > > We found that the task_group corresponding to the problematic se
> > > is not in the parent task_group’s children list, indicating that
> > > h_load_next points to an invalid address. Consider the following
> > > cgroup and task hierarchy:
> > >
> > > A
> > > / \
> > > / \
> > > B E
> > > / \ |
> > > / \ t2
> > > C D
> > > | |
> > > t0 t1
> > >
> > > The following timing sequence may trigger the problem:
> > >
> > > CPU X CPU Y CPU Z
> > > wakeup t0
> > > set list A->B->C
> > > traverse A->B->C
> > > t0 exits
> > > destroy C
> > > wakeup t2
> > > set list A->E wakeup t1
> > > set list A->B->D
> > > traverse A->B->C
> > > panic
> > >
> > > CPU Z sets the ->h_load_next list to A->B->D, but due to arm64's weaker
> > > memory ordering, CPU Y may observe the A->B link before it sees B->D; in
> > > that window it can traverse the stale A->B->C path and dereference an
> > > invalid se.
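
[ For reference, an abridged sketch of the walk in question, simplified
  from update_cfs_rq_h_load() in kernel/sched/fair.c; the !se fallback
  and some details are elided: ]

	static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
	{
		struct sched_entity *se = cfs_rq->tg->se[cpu_of(rq_of(cfs_rq))];
		unsigned long now = jiffies;
		unsigned long load;

		if (cfs_rq->last_h_load_update == now)
			return;

		/* Phase 1: publish the path bottom-up.  WRITE_ONCE() only
		 * prevents store tearing; it provides no release ordering
		 * between the B->D and A->B stores. */
		WRITE_ONCE(cfs_rq->h_load_next, NULL);
		for_each_sched_entity(se) {
			cfs_rq = cfs_rq_of(se);
			WRITE_ONCE(cfs_rq->h_load_next, se);
			if (cfs_rq->last_h_load_update == now)
				break;
		}

		/* Phase 2: walk top-down.  READ_ONCE() provides no acquire
		 * ordering either, so a walker on another CPU may observe
		 * the new A->B link while B->h_load_next still holds the
		 * stale pointer to the freed C. */
		while ((se = READ_ONCE(cfs_rq->h_load_next))) {
			load = div64_ul(cfs_rq->h_load * se->avg.load_avg,
					cfs_rq_load_avg(cfs_rq) + 1);
			cfs_rq = group_cfs_rq(se);
			cfs_rq->h_load = load;
			cfs_rq->last_h_load_update = now;
		}
	}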
> >
> > Hmm, I rather think we should ensure update_cfs_rq_h_load() is
> > serialized against unregister_fair_sched_group().
>
> The bug has been reported for v5.10, which probably doesn't have the
> fix done "recently" in
> commit b027789e5e50 ("sched/fair: Prevent dead task groups from
> regaining cfs_rq's")
Hi Vincent and Peter,

We have already integrated that commit, but the bug persists.
Do you think we should explicitly clear the ->h_load_next list?
Even though update_cfs_rq_h_load() runs under rcu_read_lock(),
arm64's weaker memory ordering could still allow a walker to
observe stale pointers in the list.
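
What I have in mind is roughly the following (a sketch only, not the
exact diff; clearing inside update_cfs_rq_h_load()'s walk is the idea
of the patch in $SUBJECT):

	while ((se = READ_ONCE(cfs_rq->h_load_next))) {
		/* Clear each link once consumed, so a later walk can
		 * only follow a freshly published path or stop here,
		 * never a stale pointer into a freed group. */
		WRITE_ONCE(cfs_rq->h_load_next, NULL);
		load = div64_ul(cfs_rq->h_load * se->avg.load_avg,
				cfs_rq_load_avg(cfs_rq) + 1);
		cfs_rq = group_cfs_rq(se);
		cfs_rq->h_load = load;
		cfs_rq->last_h_load_update = now;
	}

Alternatively, publishing with smp_store_release() and walking with
smp_load_acquire() would guarantee that a walker which sees the new
A->B link also sees B->D, though that alone does not help if the
group can still be freed while a walker holds a pointer into it.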
>
> >
> > And I'm thinking that really shouldn't be hard; note how
> > sched_unregister_group() already has an RCU grace period. So all we need
> > to ensure is that task_h_load() is called in a context that stops RCU
> > grace periods (rcu_read_lock(), preempt_disable(), local_irq_disable(),
> > local_bh_disable()).
> >
> > A very quick scan makes me think at the very least the usage in
> >
> > task_numa_migrate()
> > task_numa_find_cpu()
> > task_h_load()
> >
> > fails here; probably more.
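
If we go the serialization route instead, the unprotected call sites
would need an explicit read-side section around the task_h_load()
usage, along these lines (sketch; the exact placement in the NUMA
balancing path would need auditing):

	rcu_read_lock();
	load = task_h_load(p);
	rcu_read_unlock();

That pins sched_unregister_group()'s grace period, so the group's
cfs_rq's cannot be freed while a walker still holds pointers obtained
via ->h_load_next.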