Message-ID: <20240830174014.GD5055@maniforge>
Date: Fri, 30 Aug 2024 12:40:14 -0500
From: David Vernet <void@...ifault.com>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
	kernel-team@...a.com
Subject: Re: [PATCH 2/2 sched_ext/for-6.12] sched_ext: Use ktime_get_ns()
 instead of rq_clock_task() in touch_core_sched()

On Fri, Aug 30, 2024 at 12:52:48AM -1000, Tejun Heo wrote:
> Since commit "sched_ext: Unpin and repin rq lock from balance_scx()",
> sched_ext's balance path terminates rq_pin in the outermost function. This
> is simpler and in line with what other balance functions are doing, but it
> loses control over rq->clock_update_flags, which makes
> assert_clock_updated() trigger if another CPU pins the rq lock.
> 
> The only place this matters is touch_core_sched(), which uses the timestamp
> to order tasks from sibling rqs. For now, switch to ktime_get_ns(). Later,
> it'd be better to use a per-core dispatch sequence number.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Fixes: 3cf78c5d01d6 ("sched_ext: Unpin and repin rq lock from balance_scx()")
> Cc: Peter Zijlstra <peterz@...radead.org>
> ---
>  kernel/sched/ext.c |   10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -1453,13 +1453,20 @@ static void schedule_deferred(struct rq
>   */
>  static void touch_core_sched(struct rq *rq, struct task_struct *p)
>  {
> +	lockdep_assert_rq_held(rq);
> +
>  #ifdef CONFIG_SCHED_CORE
>  	/*
>  	 * It's okay to update the timestamp spuriously. Use
>  	 * sched_core_disabled() which is cheaper than enabled().
> +	 *
> +	 * TODO: ktime_get_ns() is used because rq_clock_task() can't be used,
> +	 * as the SCX balance path doesn't pin the rq. Since this is used to
> +	 * determine ordering between tasks of sibling CPUs, it'd be better to
> +	 * use a per-core dispatch sequence instead.
>  	 */
>  	if (!sched_core_disabled())
> -		p->scx.core_sched_at = rq_clock_task(rq);
> +		p->scx.core_sched_at = ktime_get_ns();
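
For context (from memory, so double-check me), the warning mentioned in the
changelog comes from rq_clock_task() itself, which looks roughly like:

	static inline u64 rq_clock_task(struct rq *rq)
	{
		lockdep_assert_rq_held(rq);
		assert_clock_updated(rq);	/* SCHED_WARN_ON() if no update since rq_pin_lock() */

		return rq->clock_task;
	}

and rq_pin_lock() clears RQCF_UPDATED, which is why losing control of the
pinning makes the assert fire.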

Should we just use sched_clock_cpu()? That's what rq->clock is updated
from, and it's what fair.c does on the balance path when the rq lock is
unpinned.
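Something like this, perhaps (untested sketch just to illustrate the idea;
using cpu_of(rq) for the CPU argument is my guess at the right call site):

	if (!sched_core_disabled())
		p->scx.core_sched_at = sched_clock_cpu(cpu_of(rq));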

Thanks,
David

>  #endif
>  }
>  
> @@ -1476,7 +1483,6 @@ static void touch_core_sched(struct rq *
>  static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
>  {
>  	lockdep_assert_rq_held(rq);
> -	assert_clock_updated(rq);
>  
>  #ifdef CONFIG_SCHED_CORE
>  	if (SCX_HAS_OP(core_sched_before))
