Message-ID: <20240806211002.GA37996@noisy.programming.kicks-ass.net>
Date: Tue, 6 Aug 2024 23:10:02 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Tejun Heo <tj@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
linux-kernel@...r.kernel.org, David Vernet <void@...ifault.com>,
Ingo Molnar <mingo@...hat.com>, Alexei Starovoitov <ast@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [GIT PULL] sched_ext: Initial pull request for v6.11
On Tue, Jul 30, 2024 at 03:36:27PM -1000, Tejun Heo wrote:
> Hello,
>
> On Wed, Jul 24, 2024 at 10:52:21AM +0200, Peter Zijlstra wrote:
> ...
> > So pick_task() came from the SCHED_CORE crud, which does a remote pick
> > and as such isn't able to do a put -- remote is still running its
> > current etc.
> >
> > So pick_task() *SHOULD* already be considering its current and pick
> > that if it is a better candidate than whatever is on the queue.
> >
> > If we have a pick_task() that doesn't do that, it's a pre-existing bug
> > and needs fixing anyhow.
>
> Right, I don't think it affects SCX in any significant way. Either way
> should be fine.
So I just looked at things. Considering that we currently want to have:

	pick_next_task := pick_task() + set_next_task(.first = true)

and that, with those other patches moving put_prev_task() around, we want
to get to making pick_next_task() fully optional, it looks to me like
you're not quite there yet.
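For reference, the core-side fallback would then be doing something like
this (just a sketch of the intended shape, not the actual patches; the
name pick_next_task_default is made up):

	static struct task_struct *
	pick_next_task_default(struct rq *rq, struct task_struct *prev,
			       const struct sched_class *class)
	{
		/* pick_task() must not touch prev; it can run remotely */
		struct task_struct *p = class->pick_task(rq);

		if (p) {
			/* only the local pick puts prev and installs p */
			put_prev_task(rq, prev);
			class->set_next_task(rq, p, true);
		}
		return p;
	}

Notably: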
> +static void set_next_task_scx(struct rq *rq, struct task_struct *p, bool first)
> +{
> + if (p->scx.flags & SCX_TASK_QUEUED) {
> + /*
> + * Core-sched might decide to execute @p before it is
> + * dispatched. Call ops_dequeue() to notify the BPF scheduler.
> + */
> + ops_dequeue(p, SCX_DEQ_CORE_SCHED_EXEC);
> + dispatch_dequeue(rq, p);
> + }
> +
> + p->se.exec_start = rq_clock_task(rq);
> +
> + /* see dequeue_task_scx() on why we skip when !QUEUED */
> + if (SCX_HAS_OP(running) && (p->scx.flags & SCX_TASK_QUEUED))
> + SCX_CALL_OP_TASK(SCX_KF_REST, running, p);
> +
> + clr_task_runnable(p, true);
> +
> + /*
> + * @p is getting newly scheduled or got kicked after someone updated its
> + * slice. Refresh whether tick can be stopped. See scx_can_stop_tick().
> + */
> + if ((p->scx.slice == SCX_SLICE_INF) !=
> + (bool)(rq->scx.flags & SCX_RQ_CAN_STOP_TICK)) {
> + if (p->scx.slice == SCX_SLICE_INF)
> + rq->scx.flags |= SCX_RQ_CAN_STOP_TICK;
> + else
> + rq->scx.flags &= ~SCX_RQ_CAN_STOP_TICK;
> +
> + sched_update_tick_dependency(rq);
> +
> + /*
> + * For now, let's refresh the load_avgs just when transitioning
> + * in and out of nohz. In the future, we might want to add a
> + * mechanism which calls the following periodically on
> + * tick-stopped CPUs.
> + */
> + update_other_load_avgs(rq);
> + }
> +}
> +static struct task_struct *pick_next_task_scx(struct rq *rq)
> +{
> + struct task_struct *p;
> +
> +#ifndef CONFIG_SMP
> + /* UP workaround - see the comment at the head of put_prev_task_scx() */
> + if (unlikely(rq->curr->sched_class != &ext_sched_class))
> + balance_one(rq, rq->curr, true);
> +#endif
(should already be gone in your latest branch)
> +
> + p = first_local_task(rq);
> + if (!p)
> + return NULL;
> +
> + set_next_task_scx(rq, p, true);
> +
> + if (unlikely(!p->scx.slice)) {
> + if (!scx_ops_bypassing() && !scx_warned_zero_slice) {
> + printk_deferred(KERN_WARNING "sched_ext: %s[%d] has zero slice in pick_next_task_scx()\n",
> + p->comm, p->pid);
> + scx_warned_zero_slice = true;
> + }
> + p->scx.slice = SCX_SLICE_DFL;
> + }
This condition should probably move to set_next_task_scx(.first = true).
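I.e. something like this, reusing your code above (sketch):

	static void set_next_task_scx(struct rq *rq, struct task_struct *p,
				      bool first)
	{
		/* ... existing body as quoted above ... */

		if (first && unlikely(!p->scx.slice)) {
			if (!scx_ops_bypassing() && !scx_warned_zero_slice) {
				printk_deferred(KERN_WARNING "sched_ext: %s[%d] has zero slice in set_next_task_scx()\n",
						p->comm, p->pid);
				scx_warned_zero_slice = true;
			}
			p->scx.slice = SCX_SLICE_DFL;
		}
	}

That way the pick_task() + set_next_task(.first = true) path gets the
zero-slice fixup too, not just pick_next_task_scx().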
> +
> + return p;
> +}
> +/**
> + * pick_task_scx - Pick a candidate task for core-sched
> + * @rq: rq to pick the candidate task from
> + *
> + * Core-sched calls this function on each SMT sibling to determine the next
> + * tasks to run on the SMT siblings. balance_one() has been called on all
> + * siblings and put_prev_task_scx() has been called only for the current CPU.
> + *
> + * As put_prev_task_scx() hasn't been called on remote CPUs, we can't just look
> + * at the first task in the local dsq. @rq->curr has to be considered explicitly
> + * to mimic %SCX_TASK_BAL_KEEP.
> + */
> +static struct task_struct *pick_task_scx(struct rq *rq)
> +{
> + struct task_struct *curr = rq->curr;
> + struct task_struct *first = first_local_task(rq);
> +
> + if (curr->scx.flags & SCX_TASK_QUEUED) {
> + /* is curr the only runnable task? */
> + if (!first)
> + return curr;
> +
> + /*
> + * Does curr trump first? We can always go by core_sched_at for
> + * this comparison as it represents global FIFO ordering when
> + * the default core-sched ordering is used and local-DSQ FIFO
> + * ordering otherwise.
> + *
> + * We can have a task with an earlier timestamp on the DSQ. For
> + * example, when a current task is preempted by a sibling
> + * picking a different cookie, the task would be requeued at the
> + * head of the local DSQ with an earlier timestamp than the
> + * core-sched picked next task. Besides, the BPF scheduler may
> + * dispatch any tasks to the local DSQ anytime.
> + */
> + if (curr->scx.slice && time_before64(curr->scx.core_sched_at,
> + first->scx.core_sched_at))
> + return curr;
> + }
And the above condition seems a little core_sched-specific. Is that
suitable for the primary pick function? See the sketch after the quoted
function below.
> +
> + return first; /* this may be %NULL */
> +}
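One way to keep the primary pick function generic would be to confine the
curr-vs-first comparison to the core-sched case, e.g. (untested sketch;
assuming sched_core_enabled() is the right gate here):

	static struct task_struct *pick_task_scx(struct rq *rq)
	{
		struct task_struct *first = first_local_task(rq);

	#ifdef CONFIG_SCHED_CORE
		if (sched_core_enabled(rq)) {
			struct task_struct *curr = rq->curr;

			/*
			 * curr hasn't been put and isn't on the local DSQ;
			 * consider it explicitly.
			 */
			if (curr->scx.flags & SCX_TASK_QUEUED) {
				if (!first)
					return curr;

				if (curr->scx.slice &&
				    time_before64(curr->scx.core_sched_at,
						  first->scx.core_sched_at))
					return curr;
			}
		}
	#endif

		return first; /* this may be %NULL */
	}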