Message-Id: <1462767091-1215-1-git-send-email-xlpang@redhat.com>
Date:	Mon,  9 May 2016 12:11:31 +0800
From:	Xunlei Pang <xlpang@...hat.com>
To:	linux-kernel@...r.kernel.org
Cc:	Peter Zijlstra <peterz@...radead.org>,
	Juri Lelli <juri.lelli@....com>,
	Ingo Molnar <mingo@...hat.com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Xunlei Pang <xlpang@...hat.com>
Subject: [PATCH] sched/rt/deadline: Don't push if task's scheduling class was changed

The following warning was observed:
    WARNING: CPU: 1 PID: 2468 at kernel/sched/core.c:1161 set_task_cpu+0x1af/0x1c0
    CPU: 1 PID: 2468 Comm: bugon Not tainted 4.6.0-rc3+ #16
    Hardware name: Intel Corporation Broadwell Client
    0000000000000086 0000000089618374 ffff8800897a7d50 ffffffff8133dc8c
    0000000000000000 0000000000000000 ffff8800897a7d90 ffffffff81089921
    0000048981037f39 ffff88016c4315c0 ffff88016ecd6e40 0000000000000000
    Call Trace:
    [<ffffffff8133dc8c>] dump_stack+0x63/0x87
    [<ffffffff81089921>] __warn+0xd1/0xf0
    [<ffffffff81089a5d>] warn_slowpath_null+0x1d/0x20
    [<ffffffff810b48ff>] set_task_cpu+0x1af/0x1c0
    [<ffffffff810cc90a>] push_dl_task.part.34+0xea/0x180
    [<ffffffff810ccd17>] push_dl_tasks+0x17/0x30
    [<ffffffff8118d17a>] __balance_callback+0x45/0x5c
    [<ffffffff810b2f46>] __sched_setscheduler+0x906/0xb90
    [<ffffffff810b6f50>] SyS_sched_setattr+0x150/0x190
    [<ffffffff81003c12>] do_syscall_64+0x62/0x110
    [<ffffffff816b5021>] entry_SYSCALL64_slow_path+0x25/0x25

The check in set_task_cpu() that triggers the warning:
    WARN_ON_ONCE(p->state == TASK_RUNNING &&
             p->sched_class == &fair_sched_class &&
             (p->on_rq && !task_on_rq_migrating(p)))

This happens because find_lock_later_rq() drops the rq lock inside
double_lock_balance(); in that window the task's scheduling class can
be changed (here, to the fair class), yet the task is still pushed
away as if it were a deadline task.

Fix this by rechecking, in find_lock_later_rq() after
double_lock_balance(), whether the task still belongs to the deadline
class; if its scheduling class was changed, bail out and retry.
Apply the same logic to RT in find_lock_lowest_rq().

Signed-off-by: Xunlei Pang <xlpang@...hat.com>
---
 kernel/sched/deadline.c | 1 +
 kernel/sched/rt.c       | 1 +
 2 files changed, 2 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 169d40d..57eb3e4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1385,6 +1385,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
 				     !cpumask_test_cpu(later_rq->cpu,
 				                       &task->cpus_allowed) ||
 				     task_running(rq, task) ||
+				     !dl_task(task) ||
 				     !task_on_rq_queued(task))) {
 				double_unlock_balance(rq, later_rq);
 				later_rq = NULL;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ecfc83d..c10a6f5 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1720,6 +1720,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 				     !cpumask_test_cpu(lowest_rq->cpu,
 						       tsk_cpus_allowed(task)) ||
 				     task_running(rq, task) ||
+				     !rt_task(task) ||
 				     !task_on_rq_queued(task))) {
 
 				double_unlock_balance(rq, lowest_rq);
-- 
1.8.3.1
