Date:   Sat, 17 Jun 2017 08:11:49 -0400
From:   Tejun Heo <tj@...nel.org>
To:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>
Cc:     "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        Steven Rostedt <rostedt@...dmis.org>,
        linux-kernel@...r.kernel.org,
        Lai Jiangshan <jiangshanlai@...il.com>, kernel-team@...com
Subject: Re: simple repro case

Here's a simple repro.  The test code runs whenever a CPU goes offline
or comes online.  The test kthread is created on a different CPU and
then migrated to the target CPU while running.  Without the previous
patch applied, the kthread ends up running on the wrong CPU.

Thanks.

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c74bf39ef764..faed30edbb21 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4648,12 +4648,56 @@ int workqueue_prepare_cpu(unsigned int cpu)
 	return 0;
 }
 
+#include <linux/delay.h>
+
+static int test_last_cpu = -1;
+
+static int test_kthread_migration_threadfn(void *data)
+{
+	while (!kthread_should_stop()) {
+		test_last_cpu = raw_smp_processor_id();
+		cond_resched();
+	}
+	return 0;
+}
+
+static void test_kthread_migration(int inactive_cpu)
+{
+	int start_cpu = cpumask_any(cpu_online_mask);
+	struct task_struct *task;
+
+	printk("TEST: cpu %d inactive, starting on %d and migrating (active/online=%*pbl/%*pbl)\n",
+	       inactive_cpu, start_cpu, cpumask_pr_args(cpu_active_mask),
+	       cpumask_pr_args(cpu_online_mask));
+
+	task = kthread_create(test_kthread_migration_threadfn, NULL, "test");
+	if (IS_ERR(task)) {
+		printk("TEST: kthread_create failed with %ld\n", PTR_ERR(task));
+		return;
+	}
+
+	kthread_bind(task, start_cpu);
+	wake_up_process(task);
+	msleep(100);
+	printk("TEST: test_last_cpu=%d cpus_allowed=%*pbl\n",
+	       test_last_cpu, cpumask_pr_args(&task->cpus_allowed));
+	printk("TEST: migrating to inactve cpu %d\n", inactive_cpu);
+	set_cpus_allowed_ptr(task, cpumask_of(inactive_cpu));
+	msleep(100);
+	printk("TEST: test_last_cpu=%d cpus_allowed=%*pbl\n",
+	       test_last_cpu, cpumask_pr_args(&task->cpus_allowed));
+	kthread_stop(task);
+	return;
+}
+
 int workqueue_online_cpu(unsigned int cpu)
 {
 	struct worker_pool *pool;
 	struct workqueue_struct *wq;
 	int pi;
 
+	test_kthread_migration(cpu);
+
 	mutex_lock(&wq_pool_mutex);
 
 	for_each_pool(pool, pi) {
@@ -4680,6 +4724,8 @@ int workqueue_offline_cpu(unsigned int cpu)
 	struct work_struct unbind_work;
 	struct workqueue_struct *wq;
 
+	test_kthread_migration(cpu);
+
 	/* unbinding per-cpu workers should happen on the local CPU */
 	INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn);
 	queue_work_on(cpu, system_highpri_wq, &unbind_work);
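
For what it's worth, a check along the following lines could be tacked
onto the end of test_kthread_migration() after the second msleep() so
the failure pops out instead of having to eyeball the printks; this is
just a sketch against the test code above, not part of the patch:

	/*
	 * If the migration took effect, the thread must have run on the
	 * target CPU by now; without the fix it keeps reporting the CPU
	 * it was started on.
	 */
	WARN(test_last_cpu != inactive_cpu,
	     "TEST: kthread last ran on cpu %d, expected %d\n",
	     test_last_cpu, inactive_cpu);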
