Message-ID: <aTVAxCUVyBtv3hVh@gpd4>
Date: Sun, 7 Dec 2025 09:54:28 +0100
From: Andrea Righi <arighi@...dia.com>
To: Joel Fernandes <joelagnelf@...dia.com>
Cc: John Stultz <jstultz@...gle.com>, LKML <linux-kernel@...r.kernel.org>,
	Joel Fernandes <joelaf@...gle.com>,
	Qais Yousef <qyousef@...alina.io>, Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@...hat.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Valentin Schneider <vschneid@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>,
	Zimuzo Ezeozue <zezeozue@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
	Boqun Feng <boqun.feng@...il.com>,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Metin Kaya <Metin.Kaya@....com>,
	Xuewen Yan <xuewen.yan94@...il.com>,
	K Prateek Nayak <kprateek.nayak@....com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Daniel Lezcano <daniel.lezcano@...aro.org>,
	Tejun Heo <tj@...nel.org>, David Vernet <void@...ifault.com>,
	Changwoo Min <changwoo@...lia.com>, sched-ext@...ts.linux.dev,
	kernel-team@...roid.com
Subject: Re: [RFC][PATCH] sched/ext: Split curr|donor references properly

On Fri, Dec 05, 2025 at 09:47:24PM -0500, Joel Fernandes wrote:
> On Sat, Dec 06, 2025 at 12:14:45AM +0000, John Stultz wrote:
> > With proxy-exec, we want to do the accounting against the donor
> > most of the time. Without proxy-exec, there should be no
> > difference as the rq->donor and rq->curr are the same.
> > 
> > So rework the logic to reference the rq->donor where appropriate.
> > 
> > Also add donor info to scx_dump_state().
> > 
> > Since CONFIG_SCHED_PROXY_EXEC currently depends on
> > !CONFIG_SCHED_CLASS_EXT, this should have no effect
> > (other than the extra donor output in scx_dump_state),
> > but this is one step needed to eventually remove that
> > constraint for proxy-exec.
> > 
> > Just wanted to send this out for early review prior to LPC.
> > 
> > Feedback or thoughts would be greatly appreciated!
> 
> Hi John,
> 
> I'm wondering if this will work well for BPF tasks because my understanding
> is that some scheduler BPF programs also monitor runtime statistics. If
> they are unaware of proxy execution, how will it work?

Right, some schedulers rely on p->scx.slice to evaluate task runtime. It'd
be nice for BPF schedulers to be aware of the donor.
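For reference, this is roughly the pattern in use today (a minimal sketch
mirroring scx_simple's consumed-slice accounting; the callback name is
illustrative):

	#include <scx/common.bpf.h>

	/* slice is refilled to SCX_SLICE_DFL when the task is dispatched */
	void BPF_STRUCT_OPS(sketch_stopping, struct task_struct *p, bool runnable)
	{
		/* runtime consumed since the task last went on-CPU */
		u64 used = SCX_SLICE_DFL - p->scx.slice;

		/* charge it to p's weighted vtime, as scx_simple does */
		p->scx.dsq_vtime += used * 100 / p->scx.weight;
	}

With this patch the kernel decrements the donor's slice, so a scheduler
reading p->scx.slice for the running task could mis-account that runtime.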

> 
> I don't see any code in the patch that passes the donor information to the
> BPF ops, for instance. I would really like the SCX folks to chime in before
> we can move this patch forward. Thanks for marking it as an RFC.
> 
> We need to get a handle on how information about the donor, in addition
> to the currently executing task, will be passed to a scheduler BPF
> program. If we can make this happen transparently, that's ideal.
> Otherwise, we may have to pass both the donor task and the currently
> executing task to the BPF ops.

That's what I was thinking: callbacks like ops.running(), ops.tick() and
ops.stopping() should probably take a struct task_struct *donor argument in
addition to struct task_struct *p. Then the BPF scheduler can decide how to
use the donor information (this would also address the runtime evaluation).
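Something along these lines, for example (just a sketch of the idea, not a
settled interface):

	/* hypothetical donor-aware callbacks in struct sched_ext_ops */
	void (*running)(struct task_struct *p, struct task_struct *donor);
	void (*tick)(struct task_struct *p, struct task_struct *donor);
	void (*stopping)(struct task_struct *p, bool runnable,
			 struct task_struct *donor);

Without proxy-exec, donor would always equal p (rq->donor == rq->curr), so
existing schedulers could simply ignore the extra argument.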

Thanks,
-Andrea

> 
> Thanks,
> 
>  - Joel
> 
> 
> > 
> > Signed-off-by: John Stultz <jstultz@...gle.com>
> > ---
> > Cc: Joel Fernandes <joelaf@...gle.com>
> > Cc: Qais Yousef <qyousef@...alina.io>
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Juri Lelli <juri.lelli@...hat.com>
> > Cc: Vincent Guittot <vincent.guittot@...aro.org>
> > Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> > Cc: Valentin Schneider <vschneid@...hat.com>
> > Cc: Steven Rostedt <rostedt@...dmis.org>
> > Cc: Ben Segall <bsegall@...gle.com>
> > Cc: Zimuzo Ezeozue <zezeozue@...gle.com>
> > Cc: Mel Gorman <mgorman@...e.de>
> > Cc: Will Deacon <will@...nel.org>
> > Cc: Waiman Long <longman@...hat.com>
> > Cc: Boqun Feng <boqun.feng@...il.com>
> > Cc: "Paul E. McKenney" <paulmck@...nel.org>
> > Cc: Metin Kaya <Metin.Kaya@....com>
> > Cc: Xuewen Yan <xuewen.yan94@...il.com>
> > Cc: K Prateek Nayak <kprateek.nayak@....com>
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Daniel Lezcano <daniel.lezcano@...aro.org>
> > Cc: Tejun Heo <tj@...nel.org>
> > Cc: David Vernet <void@...ifault.com>
> > Cc: Andrea Righi <arighi@...dia.com>
> > Cc: Changwoo Min <changwoo@...lia.com>
> > Cc: sched-ext@...ts.linux.dev
> > Cc: kernel-team@...roid.com
> > ---
> >  kernel/sched/ext.c | 31 +++++++++++++++++--------------
> >  1 file changed, 17 insertions(+), 14 deletions(-)
> > 
> > diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> > index 05f5a49e9649a..446091cba4429 100644
> > --- a/kernel/sched/ext.c
> > +++ b/kernel/sched/ext.c
> > @@ -938,17 +938,17 @@ static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
> >  
> >  static void update_curr_scx(struct rq *rq)
> >  {
> > -	struct task_struct *curr = rq->curr;
> > +	struct task_struct *donor = rq->donor;
> >  	s64 delta_exec;
> >  
> >  	delta_exec = update_curr_common(rq);
> >  	if (unlikely(delta_exec <= 0))
> >  		return;
> >  
> > -	if (curr->scx.slice != SCX_SLICE_INF) {
> > -		curr->scx.slice -= min_t(u64, curr->scx.slice, delta_exec);
> > -		if (!curr->scx.slice)
> > -			touch_core_sched(rq, curr);
> > +	if (donor->scx.slice != SCX_SLICE_INF) {
> > +		donor->scx.slice -= min_t(u64, donor->scx.slice, delta_exec);
> > +		if (!donor->scx.slice)
> > +			touch_core_sched(rq, donor);
> >  	}
> >  }
> >  
> > @@ -1090,14 +1090,14 @@ static void dispatch_enqueue(struct scx_sched *sch, struct scx_dispatch_q *dsq,
> >  		struct rq *rq = container_of(dsq, struct rq, scx.local_dsq);
> >  		bool preempt = false;
> >  
> > -		if ((enq_flags & SCX_ENQ_PREEMPT) && p != rq->curr &&
> > -		    rq->curr->sched_class == &ext_sched_class) {
> > -			rq->curr->scx.slice = 0;
> > +		if ((enq_flags & SCX_ENQ_PREEMPT) && p != rq->donor &&
> > +		    rq->donor->sched_class == &ext_sched_class) {
> > +			rq->donor->scx.slice = 0;
> >  			preempt = true;
> >  		}
> >  
> >  		if (preempt || sched_class_above(&ext_sched_class,
> > -						 rq->curr->sched_class))
> > +						 rq->donor->sched_class))
> >  			resched_curr(rq);
> >  	} else {
> >  		raw_spin_unlock(&dsq->lock);
> > @@ -2001,7 +2001,7 @@ static void dispatch_to_local_dsq(struct scx_sched *sch, struct rq *rq,
> >  		}
> >  
> >  		/* if the destination CPU is idle, wake it up */
> > -		if (sched_class_above(p->sched_class, dst_rq->curr->sched_class))
> > +		if (sched_class_above(p->sched_class, dst_rq->donor->sched_class))
> >  			resched_curr(dst_rq);
> >  	}
> >  
> > @@ -2424,7 +2424,7 @@ static struct task_struct *first_local_task(struct rq *rq)
> >  static struct task_struct *
> >  do_pick_task_scx(struct rq *rq, struct rq_flags *rf, bool force_scx)
> >  {
> > -	struct task_struct *prev = rq->curr;
> > +	struct task_struct *prev = rq->donor;
> >  	bool keep_prev, kick_idle = false;
> >  	struct task_struct *p;
> >  
> > @@ -3093,7 +3093,7 @@ int scx_check_setscheduler(struct task_struct *p, int policy)
> >  #ifdef CONFIG_NO_HZ_FULL
> >  bool scx_can_stop_tick(struct rq *rq)
> >  {
> > -	struct task_struct *p = rq->curr;
> > +	struct task_struct *p = rq->donor;
> >  
> >  	if (scx_rq_bypassing(rq))
> >  		return false;
> > @@ -4587,6 +4587,9 @@ static void scx_dump_state(struct scx_exit_info *ei, size_t dump_len)
> >  		dump_line(&ns, "          curr=%s[%d] class=%ps",
> >  			  rq->curr->comm, rq->curr->pid,
> >  			  rq->curr->sched_class);
> > +		dump_line(&ns, "          donor=%s[%d] class=%ps",
> > +			  rq->donor->comm, rq->donor->pid,
> > +			  rq->donor->sched_class);
> >  		if (!cpumask_empty(rq->scx.cpus_to_kick))
> >  			dump_line(&ns, "  cpus_to_kick   : %*pb",
> >  				  cpumask_pr_args(rq->scx.cpus_to_kick));
> > @@ -5426,7 +5429,7 @@ static bool kick_one_cpu(s32 cpu, struct rq *this_rq, unsigned long *ksyncs)
> >  	unsigned long flags;
> >  
> >  	raw_spin_rq_lock_irqsave(rq, flags);
> > -	cur_class = rq->curr->sched_class;
> > +	cur_class = rq->donor->sched_class;
> >  
> >  	/*
> >  	 * During CPU hotplug, a CPU may depend on kicking itself to make
> > @@ -5438,7 +5441,7 @@ static bool kick_one_cpu(s32 cpu, struct rq *this_rq, unsigned long *ksyncs)
> >  	    !sched_class_above(cur_class, &ext_sched_class)) {
> >  		if (cpumask_test_cpu(cpu, this_scx->cpus_to_preempt)) {
> >  			if (cur_class == &ext_sched_class)
> > -				rq->curr->scx.slice = 0;
> > +				rq->donor->scx.slice = 0;
> >  			cpumask_clear_cpu(cpu, this_scx->cpus_to_preempt);
> >  		}
> >  
> > -- 
> > 2.52.0.223.gf5cc29aaa4-goog
> > 
