Message-ID: <20150512131633.GK11388@htj.duckdns.org>
Date:	Tue, 12 May 2015 09:16:33 -0400
From:	Tejun Heo <tj@...nel.org>
To:	Lai Jiangshan <laijs@...fujitsu.com>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/5] workqueue: merge the similar code

Hello,

On Tue, May 12, 2015 at 10:03:11AM +0800, Lai Jiangshan wrote:
> > @cpu_off sounds like offset into cpu array or sth.  Is there a reason
> > to change the name?
> 
> @cpu_off is a local variable in wq_update_unbound_numa() and is a shorter
> name.

Let's stick with the other name.

> > 
> >> + *
> >> + * Allocate or reuse a pwq with the cpumask that @wq should use on @node.
> > 
> > I wonder whether a better name for the function would be sth like
> > get_alloc_node_unbound_pwq().
> > 
> 
> The length of alloc_node_unbound_pwq() already caused me indentation
> trouble at the call site.  I can add a variable later to ease the
> indentation, but IMHO get_alloc_node_unbound_pwq() is not strictly a
> better name than alloc_node_unbound_pwq().  Maybe we can consider
> get_node_unbound_pwq()?

Hmmm... the thing w/ "get" is that it gets confusing w/ refcnt ops.
alloc is kinda misleading and we do use concatenations of two verbs
for things like this - find_get, lookup_create and so on.  If the name
is too long (is it really tho?), do we really need "node" in the name?

> >>   *
> >> - * Calculate the cpumask a workqueue with @attrs should use on @node.  If
> >> - * @cpu_going_down is >= 0, that cpu is considered offline during
> >> - * calculation.  The result is stored in @cpumask.
> >> + * If NUMA affinity is not enabled, @dfl_pwq is always used.  @dfl_pwq
> >> + * was allocated with the effetive attrs saved in @dfl_pwq->pool->attrs.
> > 
> > I'm not sure we need the second sentence.
> 
> effetive -> effective
> 
> I used "the effetive attrs" twice below.  I need help rephrasing it;
> could you do me a favor?  Or should I just use it without introducing
> it first?

It's just a bit unnecessary to restate where dfl_pwq is allocated
here.  It's an invariant, and it's explained where it's set up; I
don't think we need extra explanation here.

> >> +	if (cpumask_equal(cpumask, attrs->cpumask))
> >> +		goto use_dfl;
> >> +	if (pwq && wqattrs_equal(tmp_attrs, pwq->pool->attrs))
> >> +		goto use_existed;
> > 
> > 		goto use_current;
> 
> The label use_existed is shared with use_dfl:

It's just bad phrasing.  If you want to use "exist", you can say
use_existing as in "use (the) existing (one)".

> use_dfl:
> 	pwq = dfl_pwq;
> use_existed:
> 	spin_lock_irq(&pwq->pool->lock);
> 	get_pwq(pwq);
> 	spin_unlock_irq(&pwq->pool->lock);
> 	return pwq;
> 
> But I don't think the dfl_pwq is current.

I don't think we generally try to make the combination of consecutive
goto labels come out as a coherent narrative.
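To make the point concrete, the fall-through under discussion can be boiled down to a small sketch (simplified, hypothetical types and names; the real code takes pwq->pool->lock around get_pwq()). The "use_dfl" label only swaps in the default pwq and then falls into the common take-a-reference tail, so the second label just names the shared exit path rather than continuing a narrative:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's pool_workqueue (hypothetical). */
struct pwq { int refcnt; };

static void get_pwq(struct pwq *pwq)
{
	pwq->refcnt++;		/* the real code holds pwq->pool->lock here */
}

static struct pwq *pick_pwq(struct pwq *pwq, struct pwq *dfl_pwq, int want_dfl)
{
	if (want_dfl)
		goto use_dfl;
	goto use_existing;

use_dfl:
	pwq = dfl_pwq;
use_existing:			/* "use_existing", not "use_existed" */
	get_pwq(pwq);
	return pwq;
}
```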

> >> +
> >> +	/* create a new pwq */
> >> +	pwq = alloc_unbound_pwq(wq, tmp_attrs);
> >> +	if (!pwq && use_dfl_when_fail) {
> >> +		pr_warn("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n",
> >> +			wq->name);
> >> +		goto use_dfl;
> > 
> > Does this need to be in this function?  Can't we let the caller handle
> > the fallback instead?
> 
> Won't that leave behind the duplicated code that this patch tries to
> remove?

Even if it ends up several more lines of code, I think that'd be
cleaner.  Look at the parameters this function is taking.  It looks
almost incoherent.
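What moving the fallback out might look like, sketched with hypothetical, simplified names (not the actual workqueue code): the allocation helper just returns NULL on failure, and the caller that wants the dfl fallback does it itself, which drops the use_dfl_when_fail parameter at the cost of a few lines per caller:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical simplified stand-ins; names are illustrative only. */
struct pwq { int dummy; };

static struct pwq *alloc_unbound_pwq(int should_fail)
{
	static struct pwq new_pwq;

	return should_fail ? NULL : &new_pwq;	/* NULL on alloc failure */
}

static struct pwq *update_numa_pwq(struct pwq *dfl_pwq, int should_fail)
{
	struct pwq *pwq = alloc_unbound_pwq(should_fail);

	if (!pwq) {
		fprintf(stderr,
			"workqueue: allocation failed, falling back to dfl_pwq\n");
		pwq = dfl_pwq;	/* caller-side fallback, not inside the helper */
	}
	return pwq;
}
```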

Thanks.

-- 
tejun
