Message-Id: <1346259120-6216-5-git-send-email-laijs@cn.fujitsu.com>
Date: Thu, 30 Aug 2012 00:51:55 +0800
From: Lai Jiangshan <laijs@...fujitsu.com>
To: Tejun Heo <tj@...nel.org>, linux-kernel@...r.kernel.org
Cc: Lai Jiangshan <laijs@...fujitsu.com>
Subject: [PATCH 4/9 V3] workqueue: add non_manager_role_manager_mutex_unlock()
If the hotplug code has grabbed the manager_mutex and a worker_thread tries to
create a worker, manage_workers() returns false and the worker_thread goes on
to process work items. At that point all workers on the CPU are processing
work items and no idle worker is left ready for managing. This breaks the
workqueue design and is a bug.
So when this case happens, the last idle worker should not go off to process
work; it should go to sleep as usual and wait for the normal events, but it
must also be woken when the hotplug code releases the manager_mutex. Add
non_manager_role_manager_mutex_unlock() to do this wakeup. (A rough userspace
sketch of this unlock-then-wake pattern is included below the "---" marker.)
Note that if the manager_mutex is grabbed by a real manager,
POOL_MANAGING_WORKERS is set and the last idle worker may still go and
process work.
Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
---
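As a purely illustrative userspace sketch of the unlock-then-wake pattern
described above (not the workqueue code: struct fake_pool,
fake_need_more_worker() and the pthread condvar are made-up stand-ins for
pool->manager_mutex, gcwq->lock, need_more_worker() and wake_up_worker()):

#include <pthread.h>
#include <stdbool.h>

/* Made-up stand-in for struct worker_pool. */
struct fake_pool {
	pthread_mutex_t manager_mutex;	/* ~ pool->manager_mutex */
	pthread_mutex_t lock;		/* ~ gcwq->lock */
	pthread_cond_t idle_wait;	/* idle workers would sleep here */
	int nr_pending;			/* queued work items */
};

/* ~ need_more_worker(): work is queued but nobody is running it */
static bool fake_need_more_worker(struct fake_pool *pool)
{
	return pool->nr_pending > 0;
}

/*
 * ~ non_manager_role_manager_mutex_unlock(): a non-manager (e.g. the
 * hotplug path) drops the manager mutex and, if work is still pending,
 * wakes one sleeping idle worker so it can retry worker creation or
 * process work.  Called with pool->lock held, mirroring how the real
 * function is called under gcwq->lock.
 */
static void fake_non_manager_unlock(struct fake_pool *pool)
{
	pthread_mutex_unlock(&pool->manager_mutex);

	if (fake_need_more_worker(pool))
		pthread_cond_signal(&pool->idle_wait);	/* ~ wake_up_worker() */
}

int main(void)
{
	static struct fake_pool pool = {
		.manager_mutex = PTHREAD_MUTEX_INITIALIZER,
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.idle_wait = PTHREAD_COND_INITIALIZER,
		.nr_pending = 1,
	};

	/* hotplug-like path: take the mutex, then release-and-wake */
	pthread_mutex_lock(&pool.manager_mutex);
	pthread_mutex_lock(&pool.lock);
	fake_non_manager_unlock(&pool);
	pthread_mutex_unlock(&pool.lock);
	return 0;
}

The real patch performs the same two steps with mutex_unlock(&pool->manager_mutex)
followed by wake_up_worker(pool), as in the first hunk below.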
kernel/workqueue.c | 42 ++++++++++++++++++++++++++++++++++--------
1 files changed, 34 insertions(+), 8 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0673598..e40898a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1305,6 +1305,24 @@ __acquires(&gcwq->lock)
}
}
+/*
+ * Release pool->manager_mutex held by the current, non-manager thread.
+ *
+ * While the current thread held the manager_mutex, it may have caused
+ * a worker_thread that tried to create a worker to go to sleep;
+ * wake one up and let it retry creating a worker or process work.
+ *
+ * CONTEXT:
+ * spin_lock_irq(gcwq->lock).
+ */
+static void non_manager_role_manager_mutex_unlock(struct worker_pool *pool)
+{
+ mutex_unlock(&pool->manager_mutex);
+
+ if (need_more_worker(pool))
+ wake_up_worker(pool);
+}
+
struct idle_rebind {
int idle_cnt; /* # idle workers to be rebound */
struct completion idle_done; /* all idle workers rebound */
@@ -2136,11 +2154,12 @@ woke_up:
recheck:
/* no more worker necessary? */
if (!need_more_worker(pool))
- goto sleep;
+ goto manage;
/* do we need to manage? */
- if (unlikely(!may_start_working(pool)) && manage_workers(worker))
- goto recheck;
+ if (unlikely(!may_start_working(pool)) &&
+ !(pool->flags & POOL_MANAGING_WORKERS))
+ goto manage;
/*
* ->scheduled list can only be filled while a worker is
@@ -2173,13 +2192,20 @@ recheck:
} while (keep_working(pool));
worker_set_flags(worker, WORKER_PREP, false);
-sleep:
+manage:
if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
goto recheck;
/*
- * gcwq->lock is held and there's no work to process and no
- * need to manage, sleep. Workers are woken up only while
+ * gcwq->lock is held and we are going to sleep in one of two cases:
+ * case1) there's no work to process and no need to manage.
+ * case2) there is work to process but this is the last idle worker and
+ * it failed to grab the manager_mutex to create a worker; also
+ * sleep!  The current manager_mutex owner will wake it up to
+ * process work or to manage.
+ * See non_manager_role_manager_mutex_unlock().
+ *
+ * Workers are woken up only while
* holding gcwq->lock or from local cpu, so setting the
* current state before releasing gcwq->lock is enough to
* prevent losing any event.
@@ -3419,9 +3445,9 @@ static void gcwq_release_management_and_unlock(struct global_cwq *gcwq)
{
struct worker_pool *pool;
- spin_unlock_irq(&gcwq->lock);
for_each_worker_pool(pool, gcwq)
- mutex_unlock(&pool->manager_mutex);
+ non_manager_role_manager_mutex_unlock(pool);
+ spin_unlock_irq(&gcwq->lock);
}
static void gcwq_unbind_fn(struct work_struct *work)
--
1.7.4.4