Message-ID: <20150626021515.GA5700@redhat.com>
Date: Fri, 26 Jun 2015 04:15:15 +0200
From: Oleg Nesterov <oleg@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>, paulmck@...ux.vnet.ibm.com,
tj@...nel.org, mingo@...hat.com, der.herr@...r.at,
dave@...olabs.net, riel@...hat.com, viro@...IV.linux.org.uk,
torvalds@...ux-foundation.org
Cc: linux-kernel@...r.kernel.org
Subject: [RFC PATCH 3/6] stop_machine: introduce stop_work_alloc() and
stop_work_free()
This is a separate and intentionally suboptimal patch, to simplify the review
of this and the subsequent changes.

It adds the new helpers, stop_work_alloc(cpumask) and stop_work_free(cpumask),
which should be called by anyone who is going to use the per-cpu
cpu_stopper->stop_work's.

Note that two callers can never deadlock even if their cpumasks overlap: they
always "lock" the cpu_stopper->stop_owner's in the same (ascending cpu) order,
as if we had another per-cpu mutex.

This is obviously quite inefficient; it will be fixed later.
Signed-off-by: Oleg Nesterov <oleg@...hat.com>
---
kernel/stop_machine.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 50 insertions(+), 0 deletions(-)
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 6212208..3d5d810 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -42,11 +42,61 @@ struct cpu_stopper {
 	struct list_head	works;		/* list of pending works */
 	struct cpu_stop_work	stop_work;	/* for stop_cpus */
+	struct task_struct	*stop_owner;
 };
 
 static DEFINE_PER_CPU(struct cpu_stopper, cpu_stopper);
 static bool stop_machine_initialized = false;
 
+static DECLARE_WAIT_QUEUE_HEAD(stop_work_wq);
+
+static void stop_work_free_one(int cpu)
+{
+	struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
+	/* Can be NULL if stop_work_alloc(wait => false) fails */
+	if (likely(stopper->stop_owner == current))
+		stopper->stop_owner = NULL;
+}
+
+static void stop_work_free(const struct cpumask *cpumask)
+{
+	int cpu;
+
+	for_each_cpu(cpu, cpumask)
+		stop_work_free_one(cpu);
+	wake_up_all(&stop_work_wq);
+}
+
+static struct cpu_stop_work *stop_work_alloc_one(int cpu, bool wait)
+{
+	struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu);
+
+	if (cmpxchg(&stopper->stop_owner, NULL, current) == NULL)
+		goto done;
+
+	if (!wait)
+		return NULL;
+
+	__wait_event(stop_work_wq,
+		     cmpxchg(&stopper->stop_owner, NULL, current) == NULL);
+done:
+	return &stopper->stop_work;
+}
+
+static bool stop_work_alloc(const struct cpumask *cpumask, bool wait)
+{
+	int cpu;
+
+	for_each_cpu(cpu, cpumask) {
+		if (stop_work_alloc_one(cpu, wait))
+			continue;
+		stop_work_free(cpumask);
+		return false;
+	}
+
+	return true;
+}
+
 /*
  * Avoids a race between stop_two_cpus and global stop_cpus, where
  * the stoppers could get queued up in reverse order, leading to
--
1.5.5.1