Message-ID: <CAF+s44QRGy77LYLGO+DK0x6ytCqaXD8XvE==ZHLJ-5XHteibKg@mail.gmail.com>
Date: Mon, 24 Nov 2025 11:56:12 +0800
From: Pingfan Liu <piliu@...hat.com>
To: Waiman Long <llong@...hat.com>
Cc: Juri Lelli <juri.lelli@...hat.com>, linux-kernel@...r.kernel.org, 
	Chen Ridong <chenridong@...weicloud.com>, Peter Zijlstra <peterz@...radead.org>, 
	Pierre Gondois <pierre.gondois@....com>, Ingo Molnar <mingo@...hat.com>, 
	Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>, 
	Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>, 
	Valentin Schneider <vschneid@...hat.com>, Tejun Heo <tj@...nel.org>, 
	Johannes Weiner <hannes@...xchg.org>, mkoutny@...e.com
Subject: Re: [PATCHv7 2/2] sched/deadline: Walk up cpuset hierarchy to decide
 root domain when hot-unplug

On Mon, Nov 24, 2025 at 10:24 AM Waiman Long <llong@...hat.com> wrote:
>
[...]
> > Currently, all cpuset locks are encapsulated in
> > kernel/cgroup/cpuset-internal.h. I'm not sure if it's appropriate to
> > expose them. If exposing them is acceptable,
> > cpuset_callback_lock_irq()/cpuset_callback_unlock_irq() would be
> > preferable to a cpuset_mutex assertion.
> >
> > @Waiman, @Ridong, could you kindly share your opinion?
>
> The cpuset_cpus_allowed_locked() already has a
> "lockdep_assert_held(&cpuset_mutex)" call to make sure that
> cpuset_mutex is held; otherwise a warning is printed on a debug
> kernel. So a check is there, it is just not in the deadline.c code.
> dl_add_task_root_domain() is called indirectly from
> dl_rebuild_rd_accounting() in cpuset.c, which does have an assertion
> on cpuset_mutex.
>
> There is an externally visible cpuset_lock()/cpuset_unlock() pair to
> acquire and release the cpuset_mutex. However, there is no public API
> to assert that cpuset_mutex is held. Another patch series is going to
> add that in the near future. At this point, I don't think we need
> such an API yet. I would suggest adding a comment
> to cpuset_cpus_allowed_locked() saying that it will warn if cpuset_mutex
> isn't held.
>
> Providing cpuset_callback_{lock|unlock}_irq() helpers may not be
> helpful because we are back to the problem that callback_lock isn't a
> raw_spinlock_t.
>

I meant to put them outside the pi_lock, so that it reflects the
original purpose of this section -- cpuset read access rather than
write. But yes, I agree that at this point there is no need to
introduce a public API.
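
For reference, the comment Waiman suggests could look roughly like the
sketch below (hypothetical wording only -- the real function lives in
kernel/cgroup/cpuset.c and its body is elided here):

/**
 * cpuset_cpus_allowed_locked - cpuset_cpus_allowed() for callers that
 * already hold cpuset_mutex (e.g. via cpuset_lock()).
 *
 * Note: on a lockdep-enabled (debug) kernel, the assertion below prints
 * a warning if the caller does not actually hold cpuset_mutex.
 */
void cpuset_cpus_allowed_locked(struct task_struct *tsk, struct cpumask *pmask)
{
	lockdep_assert_held(&cpuset_mutex);
	/* ... walk up the cpuset hierarchy and fill *pmask ... */
}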

> >
> >>>   void dl_add_task_root_domain(struct task_struct *p)
> >>>   {
[...]

> >>> +   /*
> >>> +    * Get an active rq, whose rq->rd traces the correct root
> >>> +    * domain.
> >>> +    * Ideally this would be under cpuset reader lock until rq->rd is
> >>> +    * fetched.  However, sleepable locks cannot nest inside pi_lock, so we
> >>> +    * rely on the caller of dl_add_task_root_domain() holding 'cpuset_mutex'
> >>> +    * to guarantee the CPU stays in the cpuset.
> >>> +    */
> >>> +   dl_get_task_effective_cpus(p, msk);
> >>> +   cpu = cpumask_first_and(cpu_active_mask, msk);
> >>> +   BUG_ON(cpu >= nr_cpu_ids);
> >>> +   rq = cpu_rq(cpu);
> >>>     dl_b = &rq->rd->dl_bw;
> >>> -   raw_spin_lock(&dl_b->lock);
> >>> +   /* End of fetching rd */
> >> Not sure we need this comment above. :)
> >>
> > OK, I can remove them to keep the code neat.

@Juri, sorry - I need to send out a fix that is simple and focused:
just the fix itself, without removing the comments, so I have not
removed them. Anyway, they can remind us that this is an atomic cpuset
read context.
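
To make the nesting constraint explicit, the sequence under discussion
is roughly the following (a sketch only, not the actual code;
dl_get_task_effective_cpus() is the helper added by this series):

	/* cpuset_mutex is already held by the rebuild path in cpuset.c */
	raw_spin_lock_irqsave(&p->pi_lock, flags);
	/* cpuset read access; no sleeping lock may be taken under pi_lock */
	dl_get_task_effective_cpus(p, msk);
	cpu = cpumask_first_and(cpu_active_mask, msk);
	rq = cpu_rq(cpu);
	/* rq->rd is the root domain to charge p's bandwidth against */
	raw_spin_lock(&rq->rd->dl_bw.lock);
	...
	raw_spin_unlock(&rq->rd->dl_bw.lock);
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);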


Best Regards,

Pingfan

