Message-ID: <aSO4rMm59Z68n6EI@fedora>
Date: Mon, 24 Nov 2025 09:45:16 +0800
From: Pingfan Liu <piliu@...hat.com>
To: Juri Lelli <juri.lelli@...hat.com>
Cc: linux-kernel@...r.kernel.org, Waiman Long <longman@...hat.com>,
Chen Ridong <chenridong@...weicloud.com>,
Peter Zijlstra <peterz@...radead.org>,
Pierre Gondois <pierre.gondois@....com>,
Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>, Tejun Heo <tj@...nel.org>,
Johannes Weiner <hannes@...xchg.org>, mkoutny@...e.com
Subject: Re: [PATCHv7 2/2] sched/deadline: Walk up cpuset hierarchy to decide
root domain when hot-unplug
On Fri, Nov 21, 2025 at 02:05:31PM +0100, Juri Lelli wrote:
> Hi!
>
> On 19/11/25 17:55, Pingfan Liu wrote:
>
> ...
>
> > +/* Access rule: must be called on local CPU with preemption disabled */
> > static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>
> ...
>
> > +/* The caller should hold cpuset_mutex */
>
> Maybe we can add a lockdep explicit check?
>
Currently, all cpuset locks are encapsulated in
kernel/cgroup/cpuset-internal.h, and I'm not sure it is appropriate to
expose them. If exposing them is acceptable,
cpuset_callback_lock_irq()/cpuset_callback_unlock_irq() would be
preferable to a cpuset_mutex assertion.
@Waiman, @Ridong, could you kindly share your opinion?
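
Just to illustrate what I have in mind, assuming cpuset_mutex (or a
small assertion helper wrapping it) were made visible outside
kernel/cgroup/, the check could be as simple as:

	/*
	 * Hypothetical sketch: cpuset_mutex is currently private to
	 * kernel/cgroup/cpuset*.c, so this only builds if the lock
	 * (or a wrapper around this assertion) gets exposed.
	 */
	lockdep_assert_held(&cpuset_mutex);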
> > void dl_add_task_root_domain(struct task_struct *p)
> > {
> > struct rq_flags rf;
> > struct rq *rq;
> > struct dl_bw *dl_b;
> > + unsigned int cpu;
> > + struct cpumask *msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
>
> Can this corrupt local_cpu_mask_dl?
>
> Without preemption being disabled, the following race can occur:
>
> 1. Thread calls dl_add_task_root_domain() on CPU 0
> 2. Gets pointer to CPU 0's local_cpu_mask_dl
> 3. Thread is preempted and migrated to CPU 1
> 4. Thread continues using CPU 0's local_cpu_mask_dl
> 5. Meanwhile, the scheduler on CPU 0 calls find_later_rq() which also
> uses local_cpu_mask_dl (with preemption properly disabled)
> 6. Both contexts now corrupt the same per-CPU buffer concurrently
>
Oh, that is definitely an issue. Thanks for pointing it out.
> >
> > raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
>
> It's safe to get the pointer after this point.
>
Yes.
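To spell it out, the per-CPU pointer can be fetched after the lock is
taken, something like this (untested sketch against v7):

	struct cpumask *msk;

	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
	/*
	 * Interrupts are off now, so we can no longer migrate and the
	 * local CPU's mask stays ours until the unlock.
	 */
	msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);

I will rework it along these lines in the next version.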
> > if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
> > @@ -2919,16 +2952,25 @@ void dl_add_task_root_domain(struct task_struct *p)
> > return;
> > }
> >
> > - rq = __task_rq_lock(p, &rf);
> > -
> > + /*
> > + * Get an active rq whose rq->rd tracks the correct root
> > + * domain.
> > + * Ideally this would be under the cpuset reader lock until rq->rd is
> > + * fetched. However, sleepable locks cannot nest inside pi_lock, so we
> > + * rely on the caller of dl_add_task_root_domain() holding 'cpuset_mutex'
> > + * to guarantee the CPU stays in the cpuset.
> > + */
> > + dl_get_task_effective_cpus(p, msk);
> > + cpu = cpumask_first_and(cpu_active_mask, msk);
> > + BUG_ON(cpu >= nr_cpu_ids);
> > + rq = cpu_rq(cpu);
> > dl_b = &rq->rd->dl_bw;
> > - raw_spin_lock(&dl_b->lock);
> > + /* End of fetching rd */
>
> Not sure we need this comment above. :)
>
OK, I will remove it to keep the code neat.
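
For reference, that part of the hunk would then simply read (sketch of
the next spin):

	dl_get_task_effective_cpus(p, msk);
	cpu = cpumask_first_and(cpu_active_mask, msk);
	BUG_ON(cpu >= nr_cpu_ids);
	rq = cpu_rq(cpu);
	dl_b = &rq->rd->dl_bw;
	raw_spin_lock(&dl_b->lock);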
Thanks,
Pingfan