Message-ID: <aSBjm3mN_uIy64nz@jlelli-thinkpadt14gen4.remote.csb>
Date: Fri, 21 Nov 2025 14:05:31 +0100
From: Juri Lelli <juri.lelli@...hat.com>
To: Pingfan Liu <piliu@...hat.com>
Cc: linux-kernel@...r.kernel.org, Waiman Long <longman@...hat.com>,
	Chen Ridong <chenridong@...weicloud.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Pierre Gondois <pierre.gondois@....com>,
	Ingo Molnar <mingo@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Valentin Schneider <vschneid@...hat.com>, Tejun Heo <tj@...nel.org>,
	Johannes Weiner <hannes@...xchg.org>, mkoutny@...e.com
Subject: Re: [PATCHv7 2/2] sched/deadline: Walk up cpuset hierarchy to decide
 root domain when hot-unplug

Hi!

On 19/11/25 17:55, Pingfan Liu wrote:

...

> +/* Access rule: must only be accessed on the local CPU with preemption disabled */
>  static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);

...

> +/* The caller should hold cpuset_mutex */

Maybe we could add an explicit lockdep check for it?
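Something like the below, perhaps? Since cpuset_mutex is local to
kernel/cgroup/cpuset.c we would need a tiny helper to assert on it; the
helper name here is made up just for the sake of the example, untested:

	/* kernel/cgroup/cpuset.c - hypothetical helper, name made up here */
	void cpuset_assert_held(void)
	{
		lockdep_assert_held(&cpuset_mutex);
	}

	/* kernel/sched/deadline.c */
	void dl_add_task_root_domain(struct task_struct *p)
	{
		cpuset_assert_held();
		...
	}

That would turn the comment into something we can actually enforce.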

>  void dl_add_task_root_domain(struct task_struct *p)
>  {
>  	struct rq_flags rf;
>  	struct rq *rq;
>  	struct dl_bw *dl_b;
> +	unsigned int cpu;
> +	struct cpumask *msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);

Can this corrupt local_cpu_mask_dl?

Without preemption being disabled, the following race can occur:

1. Thread calls dl_add_task_root_domain() on CPU 0
2. Gets pointer to CPU 0's local_cpu_mask_dl
3. Thread is preempted and migrated to CPU 1
4. Thread continues using CPU 0's local_cpu_mask_dl
5. Meanwhile, the scheduler on CPU 0 calls find_later_rq() which also
   uses local_cpu_mask_dl (with preemption properly disabled)
6. Both contexts now corrupt the same per-CPU buffer concurrently
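For reference, the existing user mentioned in step 5, find_later_rq(),
only ever touches the mask with the rq lock held (i.e. with preemption
disabled), roughly (quoting from memory, trimmed):

	/* kernel/sched/deadline.c */
	static int find_later_rq(struct task_struct *task)
	{
		struct sched_domain *sd;
		struct cpumask *later_mask =
			this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
		...
	}

so it always operates on the mask of the CPU it is running on, which is
the invariant the hunk above seems to break.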

>  
>  	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);

It would only be safe to get the pointer after this point, since pi_lock
is taken with irqs (and thus preemption) disabled.
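I.e., maybe simply move the per-CPU pointer lookup down here, e.g.
(untested sketch):

	struct cpumask *msk;
	...
	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
	/*
	 * irqs (and thus preemption) are now disabled, so the per-CPU
	 * pointer cannot go stale for as long as we hold pi_lock.
	 */
	msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);

That way every use of msk happens on the CPU the mask actually belongs to.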

>  	if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
> @@ -2919,16 +2952,25 @@ void dl_add_task_root_domain(struct task_struct *p)
>  		return;
>  	}
>  
> -	rq = __task_rq_lock(p, &rf);
> -
> +	/*
> +	 * Get an active rq, whose rq->rd tracks the correct root
> +	 * domain.
> +	 * Ideally this would be done under the cpuset reader lock until
> +	 * rq->rd is fetched.  However, sleepable locks cannot nest inside
> +	 * pi_lock, so we rely on the caller of dl_add_task_root_domain()
> +	 * holding 'cpuset_mutex' to guarantee the CPU stays in the cpuset.
> +	 */
> +	dl_get_task_effective_cpus(p, msk);
> +	cpu = cpumask_first_and(cpu_active_mask, msk);
> +	BUG_ON(cpu >= nr_cpu_ids);
> +	rq = cpu_rq(cpu);
>  	dl_b = &rq->rd->dl_bw;
> -	raw_spin_lock(&dl_b->lock);
> +	/* End of fetching rd */

Not sure we need this comment above. :)

> +	raw_spin_lock(&dl_b->lock);
>  	__dl_add(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
> -
>  	raw_spin_unlock(&dl_b->lock);
> -
> -	task_rq_unlock(rq, p, &rf);
> +	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
>  }

Thanks,
Juri

