Date: Thu, 21 Dec 2023 16:12:57 +0000
From: Metin Kaya <metin.kaya@....com>
To: John Stultz <jstultz@...gle.com>, LKML <linux-kernel@...r.kernel.org>
Cc: Joel Fernandes <joelaf@...gle.com>, Qais Yousef <qyousef@...gle.com>,
 Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
 Juri Lelli <juri.lelli@...hat.com>,
 Vincent Guittot <vincent.guittot@...aro.org>,
 Dietmar Eggemann <dietmar.eggemann@....com>,
 Valentin Schneider <vschneid@...hat.com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>,
 Zimuzo Ezeozue <zezeozue@...gle.com>, Youssef Esmat
 <youssefesmat@...gle.com>, Mel Gorman <mgorman@...e.de>,
 Daniel Bristot de Oliveira <bristot@...hat.com>,
 Will Deacon <will@...nel.org>, Waiman Long <longman@...hat.com>,
 Boqun Feng <boqun.feng@...il.com>, "Paul E. McKenney" <paulmck@...nel.org>,
 Xuewen Yan <xuewen.yan94@...il.com>, K Prateek Nayak
 <kprateek.nayak@....com>, Thomas Gleixner <tglx@...utronix.de>,
 kernel-team@...roid.com
Subject: Re: [PATCH v7 14/23] sched: Handle blocked-waiter migration (and
 return migration)

On 20/12/2023 12:18 am, John Stultz wrote:
> Add logic to handle migrating a blocked waiter to a remote
> cpu where the lock owner is runnable.
> 
> Additionally, as the blocked task may not be able to run
> on the remote cpu, add logic to handle return migration once
> the waiting task is given the mutex.
> 
> Because tasks may get migrated to where they cannot run,
> this patch also modifies the scheduling classes to avoid
> sched class migrations on mutex blocked tasks, leaving
> proxy() to do the migrations and return migrations.

s/proxy/find_proxy_task/

> 
> This was split out from the larger proxy patch, and
> significantly reworked.
> 
> Credits for the original patch go to:
>    Peter Zijlstra (Intel) <peterz@...radead.org>
>    Juri Lelli <juri.lelli@...hat.com>
>    Valentin Schneider <valentin.schneider@....com>
>    Connor O'Brien <connoro@...gle.com>
> 
> Cc: Joel Fernandes <joelaf@...gle.com>
> Cc: Qais Yousef <qyousef@...gle.com>
> Cc: Ingo Molnar <mingo@...hat.com>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Juri Lelli <juri.lelli@...hat.com>
> Cc: Vincent Guittot <vincent.guittot@...aro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@....com>
> Cc: Valentin Schneider <vschneid@...hat.com>
> Cc: Steven Rostedt <rostedt@...dmis.org>
> Cc: Ben Segall <bsegall@...gle.com>
> Cc: Zimuzo Ezeozue <zezeozue@...gle.com>
> Cc: Youssef Esmat <youssefesmat@...gle.com>
> Cc: Mel Gorman <mgorman@...e.de>
> Cc: Daniel Bristot de Oliveira <bristot@...hat.com>
> Cc: Will Deacon <will@...nel.org>
> Cc: Waiman Long <longman@...hat.com>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: "Paul E. McKenney" <paulmck@...nel.org>
> Cc: Metin Kaya <Metin.Kaya@....com>
> Cc: Xuewen Yan <xuewen.yan94@...il.com>
> Cc: K Prateek Nayak <kprateek.nayak@....com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> Cc: kernel-team@...roid.com
> Signed-off-by: John Stultz <jstultz@...gle.com>
> ---
> v6:
> * Integrated sched_proxy_exec() check in proxy_return_migration()
> * Minor cleanups to diff
> * Unpin the rq before calling __balance_callbacks()
> * Tweak proxy migrate to migrate deeper task in chain, to avoid
>    tasks pingponging between rqs
> v7:
> * Fixup for unused function arguments
> * Switch from that_rq -> target_rq, other minor tweaks, and typo
>    fixes suggested by Metin Kaya
> * Switch back to doing return migration in the ttwu path, which
>    avoids nasty lock juggling and performance issues
> * Fixes for UP builds
> ---
>   kernel/sched/core.c     | 161 ++++++++++++++++++++++++++++++++++++++--
>   kernel/sched/deadline.c |   2 +-
>   kernel/sched/fair.c     |   4 +-
>   kernel/sched/rt.c       |   9 ++-
>   4 files changed, 164 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 42e25bbdfe6b..55dc2a3b7e46 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2981,8 +2981,15 @@ static int affine_move_task(struct rq *rq, struct task_struct *p, struct rq_flag
>   	struct set_affinity_pending my_pending = { }, *pending = NULL;
>   	bool stop_pending, complete = false;
>   
> -	/* Can the task run on the task's current CPU? If so, we're done */
> -	if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
> +	/*
> +	 * Can the task run on the task's current CPU? If so, we're done
> +	 *
> +	 * We are also done if the task is selected, boosting a lock-
> +	 * holding proxy, (and potentially has been migrated outside its
> +	 * current or previous affinity mask)
> +	 */
> +	if (cpumask_test_cpu(task_cpu(p), &p->cpus_mask) ||
> +	    (task_current_selected(rq, p) && !task_current(rq, p))) {
>   		struct task_struct *push_task = NULL;
>   
>   		if ((flags & SCA_MIGRATE_ENABLE) &&
> @@ -3778,6 +3785,39 @@ static inline void ttwu_do_wakeup(struct task_struct *p)
>   	trace_sched_wakeup(p);
>   }
>   
> +#ifdef CONFIG_SMP
> +static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
> +{
> +	if (!sched_proxy_exec())
> +		return false;
> +
> +	if (task_current(rq, p))
> +		return false;
> +
> +	if (p->blocked_on && p->blocked_on_state == BO_WAKING) {
> +		raw_spin_lock(&p->blocked_lock);
> +		if (!is_cpu_allowed(p, cpu_of(rq))) {
> +			if (task_current_selected(rq, p)) {
> +				put_prev_task(rq, p);
> +				rq_set_selected(rq, rq->idle);
> +			}
> +			deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
> +			resched_curr(rq);
> +			raw_spin_unlock(&p->blocked_lock);
> +			return true;
> +		}
> +		resched_curr(rq);
> +		raw_spin_unlock(&p->blocked_lock);

Do we need to hold blocked_lock while checking the allowed CPUs? Would 
moving raw_spin_lock(&p->blocked_lock) inside the if (!is_cpu_allowed()) 
block be silly? That way the common path (CPU still allowed) would not 
take the lock at all, i.e.:

		if (!is_cpu_allowed(p, cpu_of(rq))) {
			raw_spin_lock(&p->blocked_lock);
			if (task_current_selected(rq, p)) {
				put_prev_task(rq, p);
				rq_set_selected(rq, rq->idle);
			}
			deactivate_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
			resched_curr(rq);
			raw_spin_unlock(&p->blocked_lock);
			return true;
		}
		resched_curr(rq);

> +	}
> +	return false;
> +}
> +#else /* !CONFIG_SMP */
> +static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
> +{
> +	return false;
> +}
> +#endif /*CONFIG_SMP */

Nit: what about this?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30dfb6f14f2b..60b542a6faa5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4027,9 +4027,11 @@ static inline void activate_blocked_entities(struct rq *target_rq,
  }
  #endif /* CONFIG_SCHED_PROXY_EXEC */

-#ifdef CONFIG_SMP
 static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
  {
+	if (!IS_ENABLED(CONFIG_SMP))
+		return false;
+
  	if (!sched_proxy_exec())
  		return false;

@@ -4053,12 +4055,6 @@ static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
  	}
  	return false;
  }
-#else /* !CONFIG_SMP */
-static inline bool proxy_needs_return(struct rq *rq, struct task_struct *p)
-{
-	return false;
-}
-#endif /*CONFIG_SMP */

  static void
  ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,

> +
>   static void
>   ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
>   		 struct rq_flags *rf)
> @@ -3870,9 +3910,12 @@ static int ttwu_runnable(struct task_struct *p, int wake_flags)
>   			update_rq_clock(rq);
>   			wakeup_preempt(rq, p, wake_flags);
>   		}
> +		if (proxy_needs_return(rq, p))
> +			goto out;
>   		ttwu_do_wakeup(p);
>   		ret = 1;
>   	}
> +out:
>   	__task_rq_unlock(rq, &rf);
>   
>   	return ret;
> @@ -4231,6 +4274,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>   	int cpu, success = 0;
>   
>   	if (p == current) {
> +		WARN_ON(task_is_blocked(p));
>   		/*
>   		 * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
>   		 * == smp_processor_id()'. Together this means we can special
> @@ -6632,6 +6676,91 @@ static bool proxy_deactivate(struct rq *rq, struct task_struct *next)
>   	return true;
>   }
>   
> +#ifdef CONFIG_SMP
> +/*
> + * If the blocked-on relationship crosses CPUs, migrate @p to the
> + * owner's CPU.
> + *
> + * This is because we must respect the CPU affinity of execution
> + * contexts (owner) but we can ignore affinity for scheduling
> + * contexts (@p). So we have to move scheduling contexts towards
> + * potential execution contexts.
> + *
> + * Note: The owner can disappear, but simply migrate to @target_cpu
> + * and leave that CPU to sort things out.
> + */
> +static struct task_struct *

proxy_migrate_task() always returns NULL. Is the return type really needed?
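
If not, it could return void, with the call site in find_proxy_task() 
returning NULL explicitly. A rough, untested sketch (the !CONFIG_SMP 
stub below would become void as well):

-static struct task_struct *
+static void
 proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
 		   struct task_struct *p, int target_cpu)
 {
 	...
-	return NULL; /* Retry task selection on _this_ CPU. */
 }

and in find_proxy_task():

-			return proxy_migrate_task(rq, rf, p, target_cpu);
+			proxy_migrate_task(rq, rf, p, target_cpu);
+			return NULL; /* Retry task selection on _this_ CPU. */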

> +proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
> +		   struct task_struct *p, int target_cpu)
> +{
> +	struct rq *target_rq;
> +	int wake_cpu;
> +

Having an "if (!IS_ENABLED(CONFIG_SMP))" check here would help in 
dropping the #else part (the compiler still sees the SMP-only calls on 
UP builds, so they must remain declared there, but the dead code is 
eliminated), i.e.:

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 30dfb6f14f2b..685ba6f2d4cd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6912,7 +6912,6 @@ proxy_resched_idle(struct rq *rq, struct task_struct *next)
  	return rq->idle;
  }

-#ifdef CONFIG_SMP
  /*
   * If the blocked-on relationship crosses CPUs, migrate @p to the
   * owner's CPU.
@@ -6932,6 +6931,9 @@ proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
  	struct rq *target_rq;
  	int wake_cpu;

+	if (!IS_ENABLED(CONFIG_SMP))
+		return NULL;
+
  	lockdep_assert_rq_held(rq);
  	target_rq = cpu_rq(target_cpu);

@@ -6988,14 +6990,6 @@ proxy_migrate_task(struct rq *rq, struct rq_flags *rf,

  	return NULL; /* Retry task selection on _this_ CPU. */
  }
-#else /* !CONFIG_SMP */
-static struct task_struct *
-proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
-		   struct task_struct *p, int target_cpu)
-{
-	return NULL;
-}
-#endif /* CONFIG_SMP */

  static void proxy_enqueue_on_owner(struct rq *rq, struct task_struct *owner,
  				   struct task_struct *next)

> +	lockdep_assert_rq_held(rq);
> +	target_rq = cpu_rq(target_cpu);
> +
> +	/*
> +	 * Since we're going to drop @rq, we have to put(@rq_selected) first,
> +	 * otherwise we have a reference that no longer belongs to us. Use
> +	 * @rq->idle to fill the void and make the next pick_next_task()
> +	 * invocation happy.
> +	 *
> +	 * CPU0				CPU1
> +	 *
> +	 *				B mutex_lock(X)
> +	 *
> +	 * A mutex_lock(X) <- B
> +	 * A __schedule()
> +	 * A pick->A
> +	 * A proxy->B
> +	 * A migrate A to CPU1
> +	 *				B mutex_unlock(X) -> A
> +	 *				B __schedule()
> +	 *				B pick->A
> +	 *				B switch_to (A)
> +	 *				A ... does stuff
> +	 * A ... is still running here
> +	 *
> +	 *		* BOOM *
> +	 */
> +	put_prev_task(rq, rq_selected(rq));
> +	rq_set_selected(rq, rq->idle);
> +	set_next_task(rq, rq_selected(rq));
> +	WARN_ON(p == rq->curr);
> +
> +	wake_cpu = p->wake_cpu;
> +	deactivate_task(rq, p, 0);
> +	set_task_cpu(p, target_cpu);
> +	/*
> +	 * Preserve p->wake_cpu, such that we can tell where it
> +	 * used to run later.
> +	 */
> +	p->wake_cpu = wake_cpu;
> +
> +	rq_unpin_lock(rq, rf);
> +	__balance_callbacks(rq);
> +
> +	raw_spin_rq_unlock(rq);
> +	raw_spin_rq_lock(target_rq);
> +
> +	activate_task(target_rq, p, 0);
> +	wakeup_preempt(target_rq, p, 0);
> +
> +	raw_spin_rq_unlock(target_rq);
> +	raw_spin_rq_lock(rq);
> +	rq_repin_lock(rq, rf);
> +
> +	return NULL; /* Retry task selection on _this_ CPU. */
> +}
> +#else /* !CONFIG_SMP */
> +static struct task_struct *
> +proxy_migrate_task(struct rq *rq, struct rq_flags *rf,
> +		   struct task_struct *p, int target_cpu)
> +{
> +	return NULL;
> +}
> +#endif /* CONFIG_SMP */
> +
>   /*
>    * Find who @next (currently blocked on a mutex) can proxy for.
>    *
> @@ -6654,8 +6783,11 @@ find_proxy_task(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
>   	struct task_struct *owner = NULL;
>   	struct task_struct *ret = NULL;
>   	struct task_struct *p;
> +	int cur_cpu, target_cpu;
>   	struct mutex *mutex;
> -	int this_cpu = cpu_of(rq);
> +	bool curr_in_chain = false;
> +
> +	cur_cpu = cpu_of(rq);
>   
>   	/*
>   	 * Follow blocked_on chain.
> @@ -6686,17 +6818,27 @@ find_proxy_task(struct rq *rq, struct task_struct *next, struct rq_flags *rf)
>   			goto out;
>   		}
>   
> +		if (task_current(rq, p))
> +			curr_in_chain = true;
> +
>   		owner = __mutex_owner(mutex);
>   		if (!owner) {
>   			ret = p;
>   			goto out;
>   		}
>   
> -		if (task_cpu(owner) != this_cpu) {
> -			/* XXX Don't handle migrations yet */
> -			if (!proxy_deactivate(rq, next))
> -				ret = next;
> -			goto out;
> +		if (task_cpu(owner) != cur_cpu) {
> +			target_cpu = task_cpu(owner);
> +			/*
> +			 * @owner can disappear, simply migrate to @target_cpu and leave that CPU
> +			 * to sort things out.
> +			 */
> +			raw_spin_unlock(&p->blocked_lock);
> +			raw_spin_unlock(&mutex->wait_lock);
> +			if (curr_in_chain)
> +				return proxy_resched_idle(rq, next);
> +
> +			return proxy_migrate_task(rq, rf, p, target_cpu);
>   		}
>   
>   		if (task_on_rq_migrating(owner)) {
> @@ -6999,6 +7141,9 @@ static inline void sched_submit_work(struct task_struct *tsk)
>   	 */
>   	SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
>   
> +	if (task_is_blocked(tsk))
> +		return;
> +
>   	/*
>   	 * If we are going to sleep and we have plugged IO queued,
>   	 * make sure to submit it to avoid deadlocks.
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 9cf20f4ac5f9..4f998549ea74 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1705,7 +1705,7 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
>   
>   	enqueue_dl_entity(&p->dl, flags);
>   
> -	if (!task_current(rq, p) && p->nr_cpus_allowed > 1)
> +	if (!task_current(rq, p) && p->nr_cpus_allowed > 1 && !task_is_blocked(p))
>   		enqueue_pushable_dl_task(rq, p);
>   }
>   
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 954b41e5b7df..8e3f118f6d6e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8372,7 +8372,9 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
>   		goto idle;
>   
>   #ifdef CONFIG_FAIR_GROUP_SCHED
> -	if (!prev || prev->sched_class != &fair_sched_class)
> +	if (!prev ||
> +	    prev->sched_class != &fair_sched_class ||
> +	    rq->curr != rq_selected(rq))
>   		goto simple;
>   
>   	/*
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 81cd22eaa6dc..a7b51a021111 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1503,6 +1503,9 @@ enqueue_task_rt(struct rq *rq, struct task_struct *p, int flags)
>   	if (p->nr_cpus_allowed == 1)
>   		return;
>   
> +	if (task_is_blocked(p))
> +		return;
> +
>   	enqueue_pushable_task(rq, p);
>   }
>   
> @@ -1790,10 +1793,12 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
>   
>   	update_rt_rq_load_avg(rq_clock_pelt(rq), rq, 1);
>   
> -	/* Avoid marking selected as pushable */
> -	if (task_current_selected(rq, p))
> +	/* Avoid marking current or selected as pushable */
> +	if (task_current(rq, p) || task_current_selected(rq, p))
>   		return;
>   
> +	if (task_is_blocked(p))
> +		return;
>   	/*
>   	 * The previous task needs to be made eligible for pushing
>   	 * if it is still active

