Message-ID: <Zo8ckk3MFygpEZbL@slm.duckdns.org>
Date: Wed, 10 Jul 2024 13:43:14 -1000
From: Tejun Heo <tj@...nel.org>
To: David Vernet <void@...ifault.com>
Cc: linux-kernel@...r.kernel.org, kernel-team@...a.com,
	schatzberg.dan@...il.com, mingo@...hat.com, peterz@...radead.org,
	changwoo@...lia.com, righi.andrea@...il.com
Subject: Re: [PATCH 5/6] sched_ext: Allow SCX_DSQ_LOCAL_ON for direct
 dispatches

Hello,

On Wed, Jul 10, 2024 at 01:51:35PM -0500, David Vernet wrote:
...
> > +	/*
> > +	 * If in balance, the balance callbacks will be called before rq lock is
> > +	 * released. Schedule one.
> > +	 */
> > +	if (rq->scx.flags & SCX_RQ_IN_BALANCE)
> > +		queue_balance_callback(rq, &rq->scx.deferred_bal_cb,
> > +				       deferred_bal_cb_workfn);
> 
> Should we be returning above if we're able to use a balance cb?

Oh yeah, it definitely should.
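Something like this (untested, just to show the control flow I have in
mind; names are the ones from the quoted hunk):

	/*
	 * If in balance, the balance callbacks will be called before rq
	 * lock is released. Schedule one and let it do the dispatching.
	 */
	if (rq->scx.flags & SCX_RQ_IN_BALANCE) {
		queue_balance_callback(rq, &rq->scx.deferred_bal_cb,
				       deferred_bal_cb_workfn);
		return;
	}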

> Also, should we maybe add a WARN_ON_ONCE(rq->balance_callback ==
> &balance_push_callback)? I could see that being unnecessary given that we
> should never be hitting this path when the CPU is deactivated anyways, but the
> push callback thing is kind of an extraneous implementation detail so might be
> worth guarding against it just in case.

I'm not sure about that. It feels like a sched core detail that's better
kept within sched core code than pushed onto queue_balance_callback()
users. The deferred execution mechanism itself doesn't really care how the
callback is run or what state the CPU is in.
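
FWIW, IIRC queue_balance_callback() already bails out quietly when
balance_push is active, something like this (from memory, may not be
exact):

	static inline void
	queue_balance_callback(struct rq *rq, struct balance_callback *head,
			       void (*func)(struct rq *rq))
	{
		lockdep_assert_rq_held(rq);

		/* don't requeue, and don't queue while balance_push is active */
		if (unlikely(head->next ||
			     rq->balance_callback == &balance_push_callback))
			return;

		head->next = rq->balance_callback;
		head->func = func;
		rq->balance_callback = head;
	}

so if a guard is warranted, it belongs in there rather than at call sites.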

...
> > +static void process_ddsp_deferred_locals(struct rq *rq)
> > +{
> > +	struct task_struct *p, *tmp;
> > +
> > +	lockdep_assert_rq_held(rq);
> > +
> > +	/*
> > +	 * Now that @rq can be unlocked, execute the deferred enqueueing of
> > +	 * tasks directly dispatched to the local DSQs of other CPUs. See
> > +	 * direct_dispatch().
> > +	 */
> > +	list_for_each_entry_safe(p, tmp, &rq->scx.ddsp_deferred_locals,
> > +				 scx.dsq_list.node) {
> > +		s32 ret;
> > +
> > +		list_del_init(&p->scx.dsq_list.node);
> > +
> > +		ret = dispatch_to_local_dsq(rq, NULL, p->scx.ddsp_dsq_id, p,
> > +					    p->scx.ddsp_enq_flags);
> > +		WARN_ON_ONCE(ret == DTL_NOT_LOCAL);
> > +	}
> 
> As mentioned in the other thread, it might be simplest to just pin and unpin
> around this loop here to keep the logic simpler in the callee.

Yeah, I'm moving unpinning/repinning to outer balance function.
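Roughly like this (untested sketch; balance_one() is just standing in for
the existing balancing logic, the exact plumbing may end up different):

	static int balance_scx(struct rq *rq, struct task_struct *prev,
			       struct rq_flags *rf)
	{
		int ret;

		rq_unpin_lock(rq, rf);

		ret = balance_one(rq, prev);

		/*
		 * The callee no longer has to fiddle with pinning; deferred
		 * local dispatches are flushed here while the rq lock is
		 * held but unpinned.
		 */
		process_ddsp_deferred_locals(rq);

		rq_repin_lock(rq, rf);

		return ret;
	}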

> > @@ -3589,6 +3714,7 @@ DEFINE_SCHED_CLASS(ext) = {
> >  #ifdef CONFIG_SMP
> >  	.balance		= balance_scx,
> >  	.select_task_rq		= select_task_rq_scx,
> > +	.task_woken		= task_woken_scx,
> 
> Should we update the comment in the caller in core.c given that rq is no longer
> only used for statistics tracking?

task_woken_rt() is already doing push_rt_task() which is kinda similar to
what scx is doing, so the comment was already stale. Yeah, please go ahead
and send a patch.
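For reference, the call site in core.c looks something like this (from
memory, exact wording of the update is up to you):

	if (p->sched_class->task_woken) {
		/*
		 * Our task @p is fully woken up and running; so it's safe to
		 * drop the rq->lock, hereafter rq is only used for statistics.
		 */
		rq_unpin_lock(rq, rf);
		p->sched_class->task_woken(rq, p);
		rq_repin_lock(rq, rf);
	}

The second half of that comment could just say that task_woken()
implementations may queue further work (RT push, SCX deferred dispatches)
rather than claiming rq is statistics-only.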

Thanks.

-- 
tejun
