Message-ID: <alpine.DEB.2.20.1704191028230.1829@nanos>
Date: Wed, 19 Apr 2017 10:31:08 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>
cc: LKML <linux-kernel@...r.kernel.org>,
John Stultz <john.stultz@...aro.org>,
Eric Dumazet <edumazet@...gle.com>,
Anna-Maria Gleixner <anna-maria@...utronix.de>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>,
linux-pm@...r.kernel.org, Arjan van de Ven <arjan@...radead.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Rik van Riel <riel@...hat.com>
Subject: Re: [patch V2 08/10] timer: Implement the hierarchical pull model
On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> > +static struct tmigr_group *tmigr_get_group(unsigned int node, unsigned int lvl)
> > +{
> > +	struct tmigr_group *group;
> > +
> > +	/* Try to attach to an existing group first */
> > +	list_for_each_entry(group, &tmigr_level_list[lvl], list) {
> > +		/*
> > +		 * If @lvl is below the cross numa node level, check
> > +		 * whether this group belongs to the same numa node.
> > +		 */
> > +		if (lvl < tmigr_crossnode_level && group->numa_node != node)
> > +			continue;
> > +		/* If the group has capacity, use it */
> > +		if (group->num_childs < tmigr_childs_per_group) {
> > +			group->num_childs++;
> > +			return group;
> > +		}
>
> This would result in SMT siblings not sharing groups on regular Intel
> systems, right? Since they get enumerated last.
Indeed. Will fix.
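For illustration, one conceivable direction (a sketch only, not the actual
fix; it assumes the level-0 walk can see the CPU being added, which the
function above does not take as an argument):

	/*
	 * Hypothetical helper: true if @group already holds an SMT
	 * sibling of @cpu. Preferring such a group when filling the
	 * lowest level would keep siblings together even though they
	 * are enumerated last on regular Intel systems.
	 */
	static bool tmigr_group_has_sibling(struct tmigr_group *group,
					    unsigned int cpu)
	{
		return cpumask_intersects(group->cpus,
					  topology_sibling_cpumask(cpu));
	}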
> > +	}
> > +	/* Allocate and set up a new group */
> > +	group = kzalloc_node(sizeof(*group), GFP_KERNEL, node);
> > +	if (!group)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	if (!zalloc_cpumask_var_node(&group->cpus, GFP_KERNEL, node)) {
> > +		kfree(group);
> > +		return ERR_PTR(-ENOMEM);
> > +	}
>
> So if you place that cpumask last, you can do:
>
> 	group = kzalloc_node(sizeof(*group) + cpumask_size(),
> 			     GFP_KERNEL, node);
Hrm, that would allocate extra space for CONFIG_CPUMASK_OFFSTACK=n. I'll
have a look.
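For reference, a minimal sketch of the single-allocation layout Peter
suggests (field names other than cpus are illustrative, and the trailing
bitmap is an assumption about how the mask would be placed last):

	/* Trailing bitmap replaces the separately allocated cpumask_var_t. */
	struct tmigr_group {
		struct list_head	list;
		unsigned int		numa_node;
		unsigned int		num_childs;
		unsigned long		cpus[];		/* must stay last */
	};

	group = kzalloc_node(sizeof(*group) + cpumask_size(),
			     GFP_KERNEL, node);
	if (!group)
		return ERR_PTR(-ENOMEM);
	/* Use to_cpumask(group->cpus) wherever a struct cpumask * is needed. */

With CONFIG_CPUMASK_OFFSTACK=n a cpumask_var_t is embedded directly in the
struct, so the second allocation already disappears in that configuration.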
Thanks,
tglx