Open Source and information security mailing list archives
 
Message-ID: <20240710161025.GA317151@maniforge>
Date: Wed, 10 Jul 2024 11:10:25 -0500
From: David Vernet <void@...ifault.com>
To: Tejun Heo <tj@...nel.org>
Cc: linux-kernel@...r.kernel.org, kernel-team@...a.com,
	schatzberg.dan@...il.com, mingo@...hat.com, peterz@...radead.org,
	changwoo@...lia.com, righi.andrea@...il.com
Subject: Re: [PATCH 3/6] sched_ext: Make @rf optional for
 dispatch_to_local_dsq()

On Tue, Jul 09, 2024 at 11:21:09AM -1000, Tejun Heo wrote:
> dispatch_to_local_dsq() may unlock the current rq when dispatching to a
> local DSQ on a different CPU. This function is currently called from the
> balance path where the rq lock is pinned and the associated rq_flags is
> available to unpin it.
> 
> dispatch_to_local_dsq() will be used to implement direct dispatch to a local
> DSQ on a remote CPU from the enqueue path where it will be called with rq
> locked but not pinned. Make @rf optional so that the function can be used
> from a context which doesn't pin the rq lock.
> 
> Signed-off-by: Tejun Heo <tj@...nel.org>
> Cc: David Vernet <void@...ifault.com>
> ---
>  kernel/sched/ext.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index 52340ac8038f..e96727460df2 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -2040,7 +2040,7 @@ static bool move_task_to_local_dsq(struct rq *rq, struct task_struct *p,
>  /**
>   * dispatch_to_local_dsq_lock - Ensure source and destination rq's are locked
>   * @rq: current rq which is locked
> - * @rf: rq_flags to use when unlocking @rq
> + * @rf: optional rq_flags to use when unlocking @rq if its lock is pinned
>   * @src_rq: rq to move task from
>   * @dst_rq: rq to move task to
>   *
> @@ -2052,17 +2052,20 @@ static bool move_task_to_local_dsq(struct rq *rq, struct task_struct *p,
>  static void dispatch_to_local_dsq_lock(struct rq *rq, struct rq_flags *rf,
>  				       struct rq *src_rq, struct rq *dst_rq)
>  {
> -	rq_unpin_lock(rq, rf);
> +	if (rf)
> +		rq_unpin_lock(rq, rf);
>  
>  	if (src_rq == dst_rq) {
>  		raw_spin_rq_unlock(rq);
>  		raw_spin_rq_lock(dst_rq);
>  	} else if (rq == src_rq) {
>  		double_lock_balance(rq, dst_rq);
> -		rq_repin_lock(rq, rf);
> +		if (rf)
> +			rq_repin_lock(rq, rf);
>  	} else if (rq == dst_rq) {
>  		double_lock_balance(rq, src_rq);
> -		rq_repin_lock(rq, rf);
> +		if (rf)
> +			rq_repin_lock(rq, rf);

It feels kind of weird for the callee to need to know about pinning
requirements in the caller instead of vice versa. I guess it's inherent to
doing an unpin / repin inside of a locked region, but it'd be nice if we could
minimize the amount of variance in that codepath regardless. I think what you
have is correct, but maybe it'd be simpler if we just pinned in the caller on
the enqueue path? That way the semantics of when locks can be dropped are
consistent in dispatch_to_local_dsq().
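
Roughly something like this, i.e. a completely untested sketch just to
illustrate what I mean -- direct_dispatch_remote() and its signature are
made up, the point is only that the enqueue-path caller does the pinning:

	/*
	 * Untested sketch: pin in the caller on the enqueue path so that
	 * dispatch_to_local_dsq() can keep assuming @rq's lock is pinned
	 * and @rf stays non-optional. direct_dispatch_remote() is a
	 * hypothetical name for the enqueue-path call site.
	 */
	static void direct_dispatch_remote(struct rq *rq, u64 dsq_id,
					   struct task_struct *p, u64 enq_flags)
	{
		struct rq_flags rf;

		lockdep_assert_rq_held(rq);

		/* Enqueue path holds rq's lock but hasn't pinned it. */
		rq_pin_lock(rq, &rf);
		dispatch_to_local_dsq(rq, &rf, dsq_id, p, enq_flags);
		rq_unpin_lock(rq, &rf);
	}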

>  	} else {
>  		raw_spin_rq_unlock(rq);
>  		double_rq_lock(src_rq, dst_rq);
> @@ -2072,7 +2075,7 @@ static void dispatch_to_local_dsq_lock(struct rq *rq, struct rq_flags *rf,
>  /**
>   * dispatch_to_local_dsq_unlock - Undo dispatch_to_local_dsq_lock()
>   * @rq: current rq which is locked
> - * @rf: rq_flags to use when unlocking @rq
> + * @rf: optional rq_flags to use when unlocking @rq if its lock is pinned
>   * @src_rq: rq to move task from
>   * @dst_rq: rq to move task to
>   *
> @@ -2084,7 +2087,8 @@ static void dispatch_to_local_dsq_unlock(struct rq *rq, struct rq_flags *rf,
>  	if (src_rq == dst_rq) {
>  		raw_spin_rq_unlock(dst_rq);
>  		raw_spin_rq_lock(rq);
> -		rq_repin_lock(rq, rf);
> +		if (rf)
> +			rq_repin_lock(rq, rf);
>  	} else if (rq == src_rq) {
>  		double_unlock_balance(rq, dst_rq);
>  	} else if (rq == dst_rq) {
> @@ -2092,7 +2096,8 @@ static void dispatch_to_local_dsq_unlock(struct rq *rq, struct rq_flags *rf,
>  	} else {
>  		double_rq_unlock(src_rq, dst_rq);
>  		raw_spin_rq_lock(rq);
> -		rq_repin_lock(rq, rf);
> +		if (rf)
> +			rq_repin_lock(rq, rf);
>  	}
>  }
>  #endif	/* CONFIG_SMP */
> @@ -2214,7 +2219,7 @@ enum dispatch_to_local_dsq_ret {
>  /**
>   * dispatch_to_local_dsq - Dispatch a task to a local dsq
>   * @rq: current rq which is locked
> - * @rf: rq_flags to use when unlocking @rq
> + * @rf: optional rq_flags to use when unlocking @rq if its lock is pinned
>   * @dsq_id: destination dsq ID
>   * @p: task to dispatch
>   * @enq_flags: %SCX_ENQ_*
> -- 
> 2.45.2
> 

