Date:	Wed, 2 Sep 2015 11:33:15 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	mingo@...nel.org, linux-kernel@...r.kernel.org,
	yuyang.du@...el.com, efault@....de, tglx@...utronix.de
Subject: Re: [PATCH v4 5/5] sched: add two functions for att(det)aching a
 task to(from) a cfs_rq

On Tue, Sep 01, 2015 at 05:03:43PM +0200, Peter Zijlstra wrote:
> On Tue, Sep 01, 2015 at 09:28:49AM +0900, Byungchul Park wrote:
> 
> > check the condition "!(flags & DEQUEUE_SLEEP)" for doing normalizing in
> > dequeue_entity(). i think you have to keep my original comment, or
> > modify your comment to something like below.
> > 
> > before - If it's !queued, sleeping tasks have a normalized vruntime,
> > after - If it's !queued, sleeping tasks have a non-normalized vruntime,
> > 
> > but.. i think it would be better that you keep my original comment..
> 
> The comment we can talk about later, but I think the condition:
> 
> > > -	if (p->state == TASK_RUNNING)
> > > +	if (!p->se.on_rq)
> 
> is important now. Both are broken in different ways.
> 
> 	p->state == TASK_RUNNING
> 
> is broken in this scenario:
> 
> 	CPU0					CPU1
> 
> 	set_current_state(TASK_UNINTERRUPTIBLE);

Here, the target task has not been dequeued yet (it will be dequeued
within schedule()), so queued is true when sched_move_task() is called,
and sched_move_task()->dequeue_task(flags=0) will normalize the vruntime
as we expect. So this scenario is not actually a problem. However, I
also think the condition here can be confusing.
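For reference, the normalization I mean is the one at the end of
dequeue_entity() in kernel/sched/fair.c, roughly like this (excerpt from
memory, with the comment paraphrased by me, so treat it as a sketch):

	static void
	dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
	{
		...
		/*
		 * Unless the task is going to sleep, subtract min_vruntime
		 * so the entity carries a normalized (cfs_rq-independent)
		 * vruntime across a move or migration.
		 */
		if (!(flags & DEQUEUE_SLEEP))
			se->vruntime -= cfs_rq->min_vruntime;
		...
	}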

I think the "p->state == TASK_RUNNING" condition, which already exists
in the original code, can work anyway; it is only there to check whether
the task is being migrated or is sleeping. How about the approach below
to make the code more readable?

> 
> 						sched_move_task()
> 						  task_move_group_fair()
> 						    vruntime_normalized() == true
> 	if (!cond)
> 		schedule();
> 	__set_current_state(TASK_RUNNING);
> 
> 
> Now the proposed replacement:
> 
> 	!p->se.on_rq
> 
> is equally broken, because (as you point out) clearing it isn't
> conditional on DEQUEUE_SLEEP.
> 

I think we should look at the task's on_rq instead of the se's on_rq,
because the current code clears the se's on_rq even when migrating a
task, whereas the task's on_rq is set to TASK_ON_RQ_MIGRATING while the
task is being migrated. So it is fine to use the task's on_rq to check
whether it has a normalized vruntime, as in the patch below.
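To make that concrete, the on_rq state I am relying on looks roughly like
this (constants and helpers as I remember them from kernel/sched/sched.h,
so please double-check the exact definitions and location):

	/* task_struct::on_rq states */
	#define TASK_ON_RQ_QUEUED	1
	#define TASK_ON_RQ_MIGRATING	2

	static inline int task_on_rq_queued(struct task_struct *p)
	{
		return p->on_rq == TASK_ON_RQ_QUEUED;
	}

	static inline int task_on_rq_migrating(struct task_struct *p)
	{
		return p->on_rq == TASK_ON_RQ_MIGRATING;
	}

So testing p->on_rq for non-zero covers both the queued and the migrating
case, while se->on_rq has already been cleared by dequeue_entity() by the
time a migrating task is moved between groups.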

---

From 9545e34c2f6fd195f18d3111b5c1d1b03600cdcd Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@....com>
Date: Wed, 2 Sep 2015 10:59:58 +0900
Subject: [PATCH] sched: make vruntime_normalized() cleaner

In both the TASK_ON_RQ_QUEUED and the TASK_ON_RQ_MIGRATING cases, the
target task has a normalized vruntime thanks to dequeue_entity(.flags=0),
so we don't need to check for them separately with different condition
statements.

Signed-off-by: Byungchul Park <byungchul.park@....com>
---
 kernel/sched/fair.c |   16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 53d0e30..dfe8754 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7920,14 +7920,14 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
 
 static inline bool vruntime_normalized(struct task_struct *p)
 {
-	int queued = task_on_rq_queued(p);
 	struct sched_entity *se = &p->se;
 
 	/*
-	 * If it's queued, then the dequeue_entity(.flags=0) will already
-	 * have normalized the vruntime.
+	 * In both the TASK_ON_RQ_QUEUED and the TASK_ON_RQ_MIGRATING
+	 * cases, dequeue_entity(.flags=0) will already have normalized
+	 * the vruntime.
 	 */
-	if (queued)
+	if (p->on_rq)
 		return true;
 
 	/*
@@ -7942,14 +7942,6 @@ static inline bool vruntime_normalized(struct task_struct *p)
 	if (!se->sum_exec_runtime || p->state == TASK_WAKING)
 		return true;
 
-	/*
-	 * If it's !queued, then only when the task is sleeping it has a
-	 * non-normalized vruntime, that is, when the task is being migrated
-	 * it has a normailized vruntime.
-	 */
-	if (p->state == TASK_RUNNING)
-		return true;
-
 	return false;
 }
 
-- 
1.7.9.5

> 
> And the problem with tracking the vruntime state is that while it helps
> detach_task_cfs_rq(), attach_task_cfs_rq() is still left wondering what
> it should return to.
> 
> So we do indeed need something to determine, based on the current state,
> if vruntime should be normalized.
> 
> /me ponders moar