Message-ID: <20140516201225.GL5379@htj.dyndns.org>
Date: Fri, 16 May 2014 16:12:25 -0400
From: Tejun Heo <tj@...nel.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Lai Jiangshan <laijs@...fujitsu.com>,
Christoph Lameter <cl@...ux.com>,
Kevin Hilman <khilman@...aro.org>,
Mike Galbraith <bitbucket@...ine.de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Viresh Kumar <viresh.kumar@...aro.org>
Subject: Re: [PATCH 1/5] workqueue: Allow changing attributions of ordered
workqueues
Hello,
On Fri, May 16, 2014 at 06:16:51PM +0200, Frederic Weisbecker wrote:
> From: Lai Jiangshan <laijs@...fujitsu.com>
>
> Changing the attributions of a workqueue imply the addition of new pwqs
> to replace the old ones. But the current implementation doesn't handle
> ordered workqueues because they can't carry multi-pwqs without breaking
> ordering. Hence ordered workqueues currently aren't allowed to call
> apply_workqueue_attrs().
...
> Signed-off-by: Lai Jiangshan <laijs@...fujitsu.com>
> Cc: Christoph Lameter <cl@...ux.com>
> Cc: Kevin Hilman <khilman@...aro.org>
> Cc: Lai Jiangshan <laijs@...fujitsu.com>
> Cc: Mike Galbraith <bitbucket@...ine.de>
> Cc: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
> Cc: Tejun Heo <tj@...nel.org>
> Cc: Viresh Kumar <viresh.kumar@...aro.org>
> Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
Do you mind touching up the description and comments a bit as the
series is going through you? They have gotten a lot better (kudos to
Lai :) but I still feel the need to touch them up a bit before
applying. I'd really appreciate it if you could do that as part of
your series.
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index c3f076f..c68e84f 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1355,8 +1355,16 @@ retry:
> * If @work was previously on a different pool, it might still be
> * running there, in which case the work needs to be queued on that
> * pool to guarantee non-reentrancy.
> + *
> + * We guarantee that only one pwq is active on an ordered workqueue.
> + * That alone enforces non-reentrancy for works. So ordered works don't
Let's use "work items" instead of "works".
> + * need to be requeued to their previous pool. Not to mention that
> + * an ordered work requeing itself over and over on the same pool may
> + * prevent a pwq from being released in case of a pool switch. The
> + * newest pool in that case couldn't switch in and its pending works
> + * would starve.
> */
> - last_pool = get_work_pool(work);
> + last_pool = wq->flags & __WQ_ORDERED ? NULL : get_work_pool(work);
> if (last_pool && last_pool != pwq->pool) {
> struct worker *worker;
I'm not a big fan of the fact that ordered workqueues need to be
handled differently when queueing, but as the code is currently
written, this is pretty much necessary to maintain execution order,
right? Otherwise, you'd end up with a requeueing work item targeting
the pwq it was executing on while new work items target the newest
pwq, screwing up the ordering. I think that'd be a lot more important
to note in the comment. This is a correctness measure;
back-to-back requeueing being affected by it is just a side-effect.
Also, can you please use plain if/else instead of "?:"? This isn't a
simple logic and I don't think compressing it with ?: is a good idea.
> @@ -3708,6 +3712,13 @@ static void rcu_free_pwq(struct rcu_head *rcu)
> container_of(rcu, struct pool_workqueue, rcu));
> }
>
> +static struct pool_workqueue *oldest_pwq(struct workqueue_struct *wq)
> +{
> + return list_last_entry(&wq->pwqs, struct pool_workqueue, pwqs_node);
> +}
> +
> +static void pwq_adjust_max_active(struct pool_workqueue *pwq);
> +
> /*
> * Scheduled on system_wq by put_pwq() when an unbound pwq hits zero refcnt
> * and needs to be destroyed.
> @@ -3723,14 +3734,12 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
> if (WARN_ON_ONCE(!(wq->flags & WQ_UNBOUND)))
> return;
>
> - /*
> - * Unlink @pwq. Synchronization against wq->mutex isn't strictly
> - * necessary on release but do it anyway. It's easier to verify
> - * and consistent with the linking path.
> - */
> mutex_lock(&wq->mutex);
> list_del_rcu(&pwq->pwqs_node);
> is_last = list_empty(&wq->pwqs);
> + /* try to activate the oldest pwq when needed */
> + if (!is_last && (wq->flags & __WQ_ORDERED))
Why bother with @is_last when it's used only once and the test is
trivial? Is the test even necessary? Invoking
pwq_adjust_max_active() directly should work, no? Also, this needs a
whole lot more commenting explaining what's going on.
> + pwq_adjust_max_active(oldest_pwq(wq));
> mutex_unlock(&wq->mutex);
>
> mutex_lock(&wq_pool_mutex);
> @@ -3749,6 +3758,16 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
> }
> }
>
> +static bool pwq_active(struct pool_workqueue *pwq)
> +{
> + /* Only the oldest pwq is active in the ordered wq */
> + if (pwq->wq->flags & __WQ_ORDERED)
> + return pwq == oldest_pwq(pwq->wq);
> +
> + /* All pwqs in the non-ordered wq are active */
> + return true;
> +}
Just collapse it into the calling function. This obfuscates more than
it helps.
Thanks.
--
tejun