Message-ID: <aJqExL-CjemhWfqB@slm.duckdns.org>
Date: Mon, 11 Aug 2025 14:03:16 -1000
From: 'Tejun Heo' <tj@...nel.org>
To: liuwenfang <liuwenfang@...or.com>
Cc: 'David Vernet' <void@...ifault.com>, 'Andrea Righi' <arighi@...dia.com>,
	'Changwoo Min' <changwoo@...lia.com>,
	'Ingo Molnar' <mingo@...hat.com>,
	'Peter Zijlstra' <peterz@...radead.org>,
	'Juri Lelli' <juri.lelli@...hat.com>,
	'Vincent Guittot' <vincent.guittot@...aro.org>,
	'Dietmar Eggemann' <dietmar.eggemann@....com>,
	'Steven Rostedt' <rostedt@...dmis.org>,
	'Ben Segall' <bsegall@...gle.com>, 'Mel Gorman' <mgorman@...e.de>,
	'Valentin Schneider' <vschneid@...hat.com>,
	"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 1/3] sched_ext: Fix pnt_seq calculation

Hello,

Sorry for another delay. I'm finally back from a long vacation and should be
more responsive from now on.

On Sun, Jul 20, 2025 at 09:36:22AM +0000, liuwenfang wrote:
> Fix pnt_seq calculation for all transitions.

This needs a lot more explanation about the bug it fixes and how.

> +void scx_put_prev_set_next(struct rq *rq, struct task_struct *prev,
> +			   struct task_struct *next)
> +{
> +#ifdef CONFIG_SMP
> +	/*
> +	 * Pairs with the smp_load_acquire() issued by a CPU in
> +	 * kick_cpus_irq_workfn() who is waiting for this CPU to perform a
> +	 * resched.
> +	 */
> +	smp_store_release(&rq->scx.pnt_seq, rq->scx.pnt_seq + 1);
> +#endif
> +}
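
For context, the waiter side in kick_cpus_irq_workfn() spins on this counter
roughly as follows (paraphrased sketch from memory, not the exact upstream
code):

	/* on the kicking CPU, waiting for @cpu to re-enter the scheduler */
	unsigned long *wait_pnt_seq = &cpu_rq(cpu)->scx.pnt_seq;

	/*
	 * Pairs with the smp_store_release() above. Once the counter moves
	 * past the snapshot taken when the kick was issued, @cpu is known to
	 * have gone through the scheduling path at least once.
	 */
	while (smp_load_acquire(wait_pnt_seq) == pseqs[cpu])
		cpu_relax();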

Let's use a more specific name - e.g. something like scx_bump_sched_seq().
Note that pnt_seq is a bit of a misnomer at this point. We probably should
rename it to sched_seq in a separate patch.

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0fb9bf995..50d757e92 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8887,6 +8887,9 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
>  
>  	__put_prev_set_next_dl_server(rq, prev, p);
>  
> +	if (scx_enabled())
> +		scx_put_prev_set_next(rq, prev, p);
> +
>  	/*
>  	 * Because of the set_next_buddy() in dequeue_task_fair() it is rather
>  	 * likely that a next task is from the same cgroup as the current.
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 47972f34e..bcb7f175c 100644
> @@ -2465,6 +2470,9 @@ static inline void put_prev_set_next_task(struct rq *rq,
>  
>  	__put_prev_set_next_dl_server(rq, prev, next);
>  
> +	if (scx_enabled())
> +		scx_put_prev_set_next(rq, prev, next);
> +
>  	if (next == prev)
>  		return;

I'm not sure these are the best spots to call this function. How about
putting it in the CONFIG_SCHED_CLASS_EXT section in prev_balance()? The goal
of the seq counter is to wait for the scheduler path to be entered, so that's
a good enough spot, and there already is an SCX-specific section there, so it
doesn't add too much noise.
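
Something along these lines (untested sketch, using the scx_bump_sched_seq()
name suggested above, slotted into the existing CONFIG_SCHED_CLASS_EXT block
in prev_balance()):

#ifdef CONFIG_SCHED_CLASS_EXT
	if (scx_enabled()) {
		/*
		 * Bump the seq counter on entry to the scheduling path so
		 * that CPUs spinning in kick_cpus_irq_workfn() can stop
		 * waiting. The rest of the existing SCX setup in
		 * prev_balance() stays as-is.
		 */
		scx_bump_sched_seq(rq);

		/* ... existing SCX balance handling ... */
	}
#endif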

Thanks.

-- 
tejun
