Date:   Fri, 26 Apr 2019 17:03:37 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Vineeth Remanan Pillai <vpillai@...italocean.com>
Cc:     Nishanth Aravamudan <naravamudan@...italocean.com>,
        Julien Desfossez <jdesfossez@...italocean.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>, mingo@...nel.org,
        tglx@...utronix.de, pjt@...gle.com, torvalds@...ux-foundation.org,
        linux-kernel@...r.kernel.org, subhra.mazumdar@...cle.com,
        fweisbec@...il.com, keescook@...omium.org, kerrnel@...gle.com,
        Phil Auld <pauld@...hat.com>, Aaron Lu <aaron.lwe@...il.com>,
        Aubrey Li <aubrey.intel@...il.com>,
        Valentin Schneider <valentin.schneider@....com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
        Paolo Bonzini <pbonzini@...hat.com>
Subject: Re: [RFC PATCH v2 16/17] sched: Wake up sibling if it has something
 to run

On Tue, Apr 23, 2019 at 04:18:21PM +0000, Vineeth Remanan Pillai wrote:

(you lost From: Julien)

> During core scheduling, the current rq can select a non-tagged
> process while the sibling sits idle even though it has something to
> run (the sibling selected idle to match the tagged process in the
> previous tag matching iteration). We need to wake up the sibling if
> such a situation arises.
> 
> Signed-off-by: Vineeth Remanan Pillai <vpillai@...italocean.com>
> Signed-off-by: Julien Desfossez <jdesfossez@...italocean.com>
> ---
>  kernel/sched/core.c | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index e8f5ec641d0a..0e3c51a1b54a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3775,6 +3775,21 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
>  			 */
>  			if (i == cpu && !rq->core->core_cookie && !p->core_cookie) {
>  				next = p;
> +				rq->core_pick = NULL;
> +
> +				/*
> +				 * If the sibling is idling, we might want to wake it
> +				 * so that it can check for any runnable tasks that did
> +				 * not get a chance to run due to previous task matching.
> +				 */
> +				for_each_cpu(j, smt_mask) {
> +					struct rq *rq_j = cpu_rq(j);
> +					rq_j->core_pick = NULL;
> +					if (j != cpu &&
> +					    is_idle_task(rq_j->curr) && rq_j->nr_running) {
> +						resched_curr(rq_j);
> +					}
> +				}
>  				goto done;
>  			}

Anyway, as written here:

  https://lkml.kernel.org/r/20190410150116.GI2490@worktop.programming.kicks-ass.net

I think this isn't quite right. Does the below patch (which actually
removes lines) also work?

As written before, the intent was to disallow that optimization when the
last pick had a cookie, thereby doing one (last) core-wide selection
when we transition to a 0-cookie pick; that pass then includes kicking
forced-idle cores.

---
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3574,18 +3574,7 @@ static struct task_struct *
 pick_task(struct rq *rq, const struct sched_class *class, struct task_struct *max)
 {
 	struct task_struct *class_pick, *cookie_pick;
-	unsigned long cookie = 0UL;
-
-	/*
-	 * We must not rely on rq->core->core_cookie here, because we fail to reset
-	 * rq->core->core_cookie on new picks, such that we can detect if we need
-	 * to do single vs multi rq task selection.
-	 */
-
-	if (max && max->core_cookie) {
-		WARN_ON_ONCE(rq->core->core_cookie != max->core_cookie);
-		cookie = max->core_cookie;
-	}
+	unsigned long cookie = rq->core->core_cookie;
 
 	class_pick = class->pick_task(rq);
 	if (!cookie)
@@ -3612,6 +3601,7 @@ pick_next_task(struct rq *rq, struct tas
 	struct task_struct *next, *max = NULL;
 	const struct sched_class *class;
 	const struct cpumask *smt_mask;
+	unsigned long prev_cookie;
 	int i, j, cpu;
 
 	if (!sched_core_enabled(rq))
@@ -3653,7 +3643,10 @@ pick_next_task(struct rq *rq, struct tas
 	 */
 	rq->core->core_task_seq++;
 
+	prev_cookie = rq->core->core_cookie;
+
 	/* reset state */
+	rq->core->core_cookie = 0UL;
 	for_each_cpu(i, smt_mask) {
 		struct rq *rq_i = cpu_rq(i);
 
@@ -3688,7 +3681,7 @@ pick_next_task(struct rq *rq, struct tas
 				 * If there weren't no cookies; we don't need
 				 * to bother with the other siblings.
 				 */
-				if (i == cpu && !rq->core->core_cookie)
+				if (i == cpu && !prev_cookie)
 					goto next_class;
 
 				continue;
@@ -3698,7 +3691,7 @@ pick_next_task(struct rq *rq, struct tas
 			 * Optimize the 'normal' case where there aren't any
 			 * cookies and we don't need to sync up.
 			 */
-			if (i == cpu && !rq->core->core_cookie && !p->core_cookie) {
+			if (i == cpu && !prev_cookie && !p->core_cookie) {
 				next = p;
 				goto done;
 			}
