Message-ID: <aNxseDZ9bJbKQ36-@slm.duckdns.org>
Date: Tue, 30 Sep 2025 13:49:12 -1000
From: Tejun Heo <tj@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...hat.com, juri.lelli@...hat.com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
	vschneid@...hat.com, longman@...hat.com, hannes@...xchg.org,
	mkoutny@...e.com, void@...ifault.com, arighi@...dia.com,
	changwoo@...lia.com, cgroups@...r.kernel.org,
	sched-ext@...ts.linux.dev, liuwenfang@...or.com, tglx@...utronix.de
Subject: Re: [PATCH 12/14] sched: Add shared runqueue locking to
 __task_rq_lock()

Hello, Peter.

On Mon, Sep 29, 2025 at 12:06:58PM +0200, Peter Zijlstra wrote:
...
> > However, thinking more about it. I'm unsure how e.g. the actual migration
> > would work. The actual migration is done by: deactivate_task() ->
> > set_task_cpu() -> switch rq locks -> activate_task(). Enqueueing/dequeueing
> > steps have operations that depend on rq lock - psi updates, uclamp updates
> > and so on. How would they work?
> 
> Suppose __task_rq_lock() will take rq->lock and p->sub_lock, in that
> order, such that task_rq_lock() will take p->pi_lock, rq->lock and
> p->sub_lock.
> 
...
> While at the same time, above you argued p->sub_lock should be inside
> dsq->lock. Because:
> 
> __schedule()
>   rq = this_rq();
>   LOCK rq->lock
>   next = pick_next() := pick_next_scx()
>     LOCK dsq->lock
>     p = find_task(dsq);
>     LOCK p->sub_lock
>     dequeue(dsq, p);
>     UNLOCK dsq->lock

I was going back and forth on the locking order. Note that even if
sub_lock nests outside the DSQ locks, the hot path for pick_task()
wouldn't be that different - it just needs an RCU-protected first_task
pointer, and the DSQ association needs to be verified after grabbing the
sub_lock (much like how task_rq_lock() needs to retry).
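
To make this concrete, the pick path could look roughly like the sketch
below. All of the names - scx_dsq, first_task, p->dsq, p->sub_lock - are
illustrative, and it assumes p->dsq only ever changes while p->sub_lock
is held:

        static struct task_struct *pick_first_locked(struct scx_dsq *dsq)
        {
                struct task_struct *p;
        retry:
                rcu_read_lock();
                p = rcu_dereference(dsq->first_task);
                if (!p) {
                        rcu_read_unlock();
                        return NULL;
                }
                raw_spin_lock(&p->sub_lock);
                rcu_read_unlock();

                /* did @p move off this DSQ before we got sub_lock? */
                if (p->dsq != dsq) {
                        raw_spin_unlock(&p->sub_lock);
                        goto retry;
                }
                return p;       /* returns with p->sub_lock held */
        }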

> Because if you did something like:
> 
> __schedule()
>   rq = this_rq();
>   LOCK rq->lock
>   next = pick_next() := pick_next_scx()
>     LOCK dsq->lock (or RCU, doesn't matter)
>     p = find_task(dsq);
>     UNLOCK dsq->lock
> 				migrate:
> 				LOCK p->pi_lock
> 				rq = task_rq(p)
> 				LOCK rq->lock
> 				(verify bla bla)
> 				LOCK p->sub_lock
> 				LOCK dsq->lock
> 				dequeue(dsq, p)
> 				UNLOCK dsq->lock
> 				set_task_cpu(n);
> 				UNLOCK rq->lock
> 				rq = cpu_rq(n);
> 				LOCK rq->lock (inversion vs p->sub_lock)
> 				LOCK dsq2->lock
> 				enqueue(dsq2, p)
> 				UNLOCK dsq2->lock
> 
>     LOCK p->sub_lock
>     LOCK dsq->lock   (whoopsie, p is on dsq2)
>     dequeue(dsq, p)
>     set_task_cpu(here);
>     UNLOCK dsq->lock

I suppose the above shows that p->sub_lock nests outside dsq->lock,
right? If so, the right sequence would be:

        LOCK p->sub_lock
        LOCK src_dsq->lock
        verify p is still on src_dsq, if not retry
        remove from src_dsq
        UNLOCK src_dsq->lock
        LOCK dst_dsq->lock
        insert into dst_dsq
        UNLOCK dst_dsq->lock
        UNLOCK p->sub_lock
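
In code, roughly (again, the names are illustrative and p->dsq is
assumed to only change under p->sub_lock; the caller re-looks up src_dsq
and retries on failure):

        static bool move_task_dsq(struct task_struct *p,
                                  struct scx_dsq *src_dsq,
                                  struct scx_dsq *dst_dsq)
        {
                raw_spin_lock(&p->sub_lock);

                /* verify @p is still on @src_dsq */
                if (p->dsq != src_dsq) {
                        raw_spin_unlock(&p->sub_lock);
                        return false;   /* caller retries */
                }

                raw_spin_lock(&src_dsq->lock);
                list_del_init(&p->dsq_node);
                raw_spin_unlock(&src_dsq->lock);

                raw_spin_lock(&dst_dsq->lock);
                list_add_tail(&p->dsq_node, &dst_dsq->list);
                p->dsq = dst_dsq;
                raw_spin_unlock(&dst_dsq->lock);

                raw_spin_unlock(&p->sub_lock);
                return true;
        }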

> That is, either way around: dsq->lock outside, p->sub_lock inside, or
> the other way around, I end up with inversions and race conditions that
> are not fun.

It's not straightforward for sure, but none of these approaches are.
They are all complicated in some areas. From the sched_ext POV, I think
what matters most is providing as much latitude as possible to BPF
scheduler implementations while keeping the likelihood of really subtle
issues low.

> Also, if you do put p->sub_lock inside dsq->lock, this means
> __task_rq_lock() cannot take it and it needs to be pushed deep into scx
> (possibly into bpf ?) and that means I'm not sure how to do the change
> pattern sanely.

I'm not quite following why task_rq_lock() wouldn't be able to take it.
Whether p->sub_lock nests under or over DSQ locks should only matter to
sched_ext proper. From core's POV, the only thing that matters is that as
long as p->sub_lock is held, the task won't be migrating and is safe to
change states for (more on this later).
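
e.g. the core-side helper could be something along these lines - a
sketch that leaves out rq_pin_lock() and the migrating checks the real
__task_rq_lock() does, but shows the only invariant the core would rely
on:

        struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
        {
                struct rq *rq;

                for (;;) {
                        rq = task_rq(p);
                        raw_spin_rq_lock(rq);
                        raw_spin_lock(&p->sub_lock);
                        /* with both locks held, @p can't migrate */
                        if (likely(rq == task_rq(p)))
                                return rq;
                        raw_spin_unlock(&p->sub_lock);
                        raw_spin_rq_unlock(rq);
                }
        }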

> Having __task_rq_lock() take p->dsq->lock solves all these problems,
> except for that one weird case where BPF wants to do things their own
> way. The longer I'm thinking about it, the more I dislike that. I just
> don't see *ANY* upside from allowing BPF to do this while it is making
> everything else quite awkward.

I'm failing to see the problems you're seeing, and I have to disagree
that allowing more capabilities on the BPF side brings no upsides. DSQs
are useful but quickly become too restrictive - e.g. people often want
to put the same task on multiple queues and other data structures, which
is a lot more straightforward when the data structures and locking are
managed in BPF. In general, I don't think it's a good direction to be
prescriptive about how schedulers should be implemented or behave. Even
if we can't think up something neat right now, someone will.

> The easy fix is to have these BPF managed things have a single global
> lock. That works and is correct. Then if they want something better,
> they can use DSQs :-)
> 
> Fundamentally, we need the DSQ->lock to cover all CPUs that will pick
> from it, there is no wiggle room there. Also note that while we change
> only the attributes of a single task with the change pattern, that
> affects the whole RQ, since a runqueue is an aggregate of all tasks.
> This is very much why dequeue/enqueue around the change pattern, to keep
> the runqueue aggregates updated.
> 
> Use the BPF thing to play with scheduling policies, but leave the
> locking to the core code.

I have two questions:

- Let's say one of these schemes works, whether that's holding dsq->lock
  or p->sub_lock. I still don't understand how things would be safe
  w.r.t. PSI and uclamp updates. How would they cope with set_task_cpu()
  happening without the rq locked?

- This all started from two proposed core changes: an additional hook
  after task pick regardless of the picked task's class (this is a
  regression that I missed during the pick_task() conversion), and a
  balance() call for the deadline server, which I think can be combined
  with the existing special case for sched_ext. While it'd be nice to be
  able to migrate without holding rq locks, that path seems very
  invasive and to have a lot of impact on the proposed capabilities.
  This doesn't seem like a particularly productive direction to me.

Thanks.

-- 
tejun
