Message-Id: <1216937517.5368.11.camel@earth>
Date:	Fri, 25 Jul 2008 00:11:57 +0200
From:	Dmitry Adamushko <dmitry.adamushko@...il.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	LKML <linux-kernel@...r.kernel.org>
Subject: [patch, rfc: 1/2] sched, hotplug: safe use of rq->migration_thread
	and find_busiest_queue()


From: Dmitry Adamushko <dmitry.adamushko@...il.com>
Subject: sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()

---

    sched, hotplug: safe use of rq->migration_thread and find_busiest_queue()
    
    (1) make sure rq->migration_thread is valid when we access it in set_cpus_allowed_ptr()
    after releasing rq->lock;
    
    (2) in load_balance() and load_balance_newidle()
    
    ensure that we don't pick a 'busiest' run-queue that can disappear as a result
    of cpu_down() while we are manipulating it. To this end, 'busiest' is chosen
    only amongst 'cpu_active_map' cpus.
    
    load_balance() and load_balance_newidle() are called with preemption disabled,
    so synchronize_sched() in cpu_down() should get us synced.
    
    IOW, as soon as synchronize_sched() has completed in cpu_down(cpu), the run-queue
    of 'cpu' can no longer be accessed or manipulated by the load-balancer.
    
    Signed-off-by: Dmitry Adamushko <dmitry.adamushko@...il.com>
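
Illustrative sketch (not part of the patch) of how the two sides pair up:
the load-balancer runs with preemption disabled, which is an implicit
RCU-sched read-side critical section, and cpu_down() uses
synchronize_sched() as the matching grace period. Names are simplified
and teardown_rq() is hypothetical:

/* reader side -- e.g. load_balance(), entered with preemption disabled */
static void balancer_reader(cpumask_t *cpus)
{
	*cpus = cpu_active_map;		/* only active cpus are candidates */
	/* ... pick 'busiest' from *cpus and manipulate it ... */
}

/* writer side -- cpu_down(cpu) */
static void hotplug_writer(int cpu)
{
	cpu_clear(cpu, cpu_active_map);	/* new balancers won't see 'cpu' */
	synchronize_sched();		/* wait out every preempt-disabled
					 * section that may still see its rq */
	teardown_rq(cpu);		/* hypothetical: safe only after the above */
}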

diff --git a/kernel/sched.c b/kernel/sched.c
index 6acf749..b4ccc8b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3409,7 +3409,14 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 	struct rq *busiest;
 	unsigned long flags;
 
-	cpus_setall(*cpus);
+	/*
+	 * Ensure that we don't get 'busiest' which can disappear
+	 * as a result of cpu_down() while we are manipulating it.
+	 *
+	 * load_balance() is called with preemption disabled,
+	 * so synchronize_sched() in cpu_down() should get us synced.
+	 */
+	*cpus = cpu_active_map;
 
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -3571,7 +3578,14 @@ load_balance_newidle(int this_cpu, struct rq *this_rq, struct sched_domain *sd,
 	int sd_idle = 0;
 	int all_pinned = 0;
 
-	cpus_setall(*cpus);
+	/*
+	 * Ensure that we don't get 'busiest' which can disappear
+	 * as a result of cpu_down() while we are manipulating it.
+	 *
+	 * load_balance_newidle() is called with preemption disabled,
+	 * so synchronize_sched() in cpu_down() should get us synced.
+	 */
+	*cpus = cpu_active_map;
 
 	/*
 	 * When power savings policy is enabled for the parent domain, idle
@@ -5764,9 +5778,14 @@ int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
 		goto out;
 
 	if (migrate_task(p, any_online_cpu(*new_mask), &req)) {
-		/* Need help from migration thread: drop lock and wait. */
+		/* Need to wait for migration thread (might exit: take ref). */
+		struct task_struct *mt = rq->migration_thread;
+
+		get_task_struct(mt);
 		task_rq_unlock(rq, &flags);
-		wake_up_process(rq->migration_thread);
+		wake_up_process(mt);
+		put_task_struct(mt);
+
 		wait_for_completion(&req.done);
 		tlb_migrate_finish(p->mm);
 		return 0;
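
For reference, the set_cpus_allowed_ptr() hunk above is an instance of the
usual "pin, unlock, use, unpin" idiom: a task_struct pointer that is only
stable under rq->lock is pinned with a reference before the lock is dropped.
A minimal, illustrative sketch of the idiom (the wrapper function itself is
hypothetical; only the get/put pairing matters):

static void wake_migration_thread(struct rq *rq)
{
	struct task_struct *mt;
	unsigned long flags;

	spin_lock_irqsave(&rq->lock, flags);
	mt = rq->migration_thread;	/* stable only while rq->lock is held */
	get_task_struct(mt);		/* taken ref keeps *mt valid ... */
	spin_unlock_irqrestore(&rq->lock, flags);

	wake_up_process(mt);		/* ... even if the thread exits now */
	put_task_struct(mt);
}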

