Message-ID: <0f57e7df-c45b-4c25-856b-4dd240f8d717@redhat.com>
Date: Sun, 23 Nov 2025 21:24:18 -0500
From: Waiman Long <llong@...hat.com>
To: Pingfan Liu <piliu@...hat.com>, Juri Lelli <juri.lelli@...hat.com>
Cc: linux-kernel@...r.kernel.org, Chen Ridong <chenridong@...weicloud.com>,
Peter Zijlstra <peterz@...radead.org>,
Pierre Gondois <pierre.gondois@....com>, Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
Mel Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
Tejun Heo <tj@...nel.org>, Johannes Weiner <hannes@...xchg.org>,
mkoutny@...e.com
Subject: Re: [PATCHv7 2/2] sched/deadline: Walk up cpuset hierarchy to decide
root domain when hot-unplug
On 11/23/25 8:45 PM, Pingfan Liu wrote:
> On Fri, Nov 21, 2025 at 02:05:31PM +0100, Juri Lelli wrote:
>> Hi!
>>
>> On 19/11/25 17:55, Pingfan Liu wrote:
>>
>> ...
>>
>>> +/* Access rule: must be called on local CPU with preemption disabled */
>>> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>> ...
>>
>>> +/* The caller should hold cpuset_mutex */
>> Maybe we could add an explicit lockdep check?
>>
> Currently, all cpuset locks are encapsulated in
> kernel/cgroup/cpuset-internal.h. I'm not sure it's appropriate to
> expose them. If exposing them is acceptable,
> cpuset_callback_lock_irq()/cpuset_callback_unlock_irq() would be
> preferable to a cpuset_mutex assertion.
>
> @Waiman, @Ridong, could you kindly share your opinion?
cpuset_cpus_allowed_locked() already has a
lockdep_assert_held(&cpuset_mutex) call to make sure that cpuset_mutex
is held; a debug kernel will print a warning if it isn't. So the check
is there, it is just not in the deadline.c code. dl_add_task_root_domain()
is called indirectly from dl_rebuild_rd_accounting() in cpuset.c, which
does have an assertion on cpuset_mutex.
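For reference, the existing check looks roughly like this (a sketch,
not the exact code; the real function lives in kernel/cgroup/cpuset.c):

    void cpuset_cpus_allowed_locked(struct task_struct *tsk,
                                    struct cpumask *pmask)
    {
            lockdep_assert_held(&cpuset_mutex);
            /* ... walk the cpuset hierarchy and fill @pmask ... */
    }

Any path that reaches it without holding cpuset_mutex will splat on a
lockdep-enabled kernel.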
There is an externally visible cpuset_lock()/cpuset_unlock() pair for
acquiring and releasing cpuset_mutex, but there is no public API to
assert that cpuset_mutex is held. Another patch series is going to add
one in the near future, so I don't think we need such an API here yet.
I would suggest adding a comment to cpuset_cpus_allowed_locked() noting
that it will warn if cpuset_mutex isn't held.
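Something along these lines (exact wording is flexible) should do:

    /*
     * The caller must hold cpuset_mutex. A debug kernel will warn
     * via lockdep_assert_held() if it doesn't.
     */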
Providing cpuset_callback_{lock|unlock}_irq() helpers may not be
helpful either, because we would be back to the problem that
callback_lock isn't a raw_spinlock_t.
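To spell it out: on PREEMPT_RT a spinlock_t becomes a sleeping lock, so
a helper like the proposed cpuset_callback_lock_irq() could not be used
in a context like this (illustrative only):

    raw_spin_lock_irqsave(&p->pi_lock, flags); /* raw, never sleeps */
    cpuset_callback_lock_irq();  /* spinlock_t, sleeps on RT -> splat */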
Cheers,
Longman
>
>>> void dl_add_task_root_domain(struct task_struct *p)
>>> {
>>> struct rq_flags rf;
>>> struct rq *rq;
>>> struct dl_bw *dl_b;
>>> + unsigned int cpu;
>>> + struct cpumask *msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
>> Can this corrupt local_cpu_mask_dl?
>>
>> Without preemption being disabled, the following race can occur:
>>
>> 1. Thread calls dl_add_task_root_domain() on CPU 0
>> 2. Gets pointer to CPU 0's local_cpu_mask_dl
>> 3. Thread is preempted and migrated to CPU 1
>> 4. Thread continues using CPU 0's local_cpu_mask_dl
>> 5. Meanwhile, the scheduler on CPU 0 calls find_later_rq() which also
>> uses local_cpu_mask_dl (with preemption properly disabled)
>> 6. Both contexts now corrupt the same per-CPU buffer concurrently
>>
> Oh, that is definitely an issue. Thanks for pointing it out.
>
>>>
>>> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
>> It's safe to get the pointer after this point.
>>
> Yes.
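For the record, with pi_lock held IRQs are off, so the task cannot
migrate and find_later_rq() cannot run on this CPU either. The safe
ordering would look something like this (untested sketch):

    raw_spin_lock_irqsave(&p->pi_lock, rf.flags); /* IRQs off from here */
    msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl); /* pointer now stable */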
>>> if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
>>> @@ -2919,16 +2952,25 @@ void dl_add_task_root_domain(struct task_struct *p)
>>> return;
>>> }
>>>
>>> - rq = __task_rq_lock(p, &rf);
>>> -
>>> + /*
>>> + * Get an active rq whose rq->rd tracks the correct root
>>> + * domain.
>>> + * Ideally this would be under the cpuset reader lock until rq->rd is
>>> + * fetched. However, sleepable locks cannot nest inside pi_lock, so we
>>> + * rely on the caller of dl_add_task_root_domain() holding 'cpuset_mutex'
>>> + * to guarantee the CPU stays in the cpuset.
>>> + */
>>> + dl_get_task_effective_cpus(p, msk);
>>> + cpu = cpumask_first_and(cpu_active_mask, msk);
>>> + BUG_ON(cpu >= nr_cpu_ids);
>>> + rq = cpu_rq(cpu);
>>> dl_b = &rq->rd->dl_bw;
>>> - raw_spin_lock(&dl_b->lock);
>>> + /* End of fetching rd */
>> Not sure we need this comment above. :)
>>
> OK, I can remove them to keep the code neat.
>
>
> Thanks,
>
> Pingfan
>