[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <1319132334.8653.4.camel@laptop>
Date: Thu, 20 Oct 2011 19:38:54 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Venki Pallipadi <venki@...gle.com>
Cc: Andi Kleen <andi@...stfloor.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Suresh Siddha <suresh.b.siddha@...el.com>
Subject: Re: [Patch] Idle balancer: cache align nohz structure to improve
idle load balancing scalability
On Thu, 2011-10-20 at 05:26 -0700, Venki Pallipadi wrote:
> On Wed, Oct 19, 2011 at 9:24 PM, Andi Kleen <andi@...stfloor.org> wrote:
> > Tim Chen <tim.c.chen@...ux.intel.com> writes:
> >> */
> >> static struct {
> >> - atomic_t load_balancer;
> >> - atomic_t first_pick_cpu;
> >> - atomic_t second_pick_cpu;
> >> - cpumask_var_t idle_cpus_mask;
> >> + atomic_t load_balancer ____cacheline_aligned;
> >> + atomic_t first_pick_cpu ____cacheline_aligned;
> >> + atomic_t second_pick_cpu ____cacheline_aligned;
> >> + cpumask_var_t idle_cpus_mask ____cacheline_aligned;
> >
> > On large configs idle_cpu_masks may be allocated. May need
> > more changes to tell the allocator to cache align/pad too?
> >
>
> An alternate approach is to split this struct per node/socket and do
> the nohz idle balancing logic at that level. That should be more
> scalable in terms of nohz balancing (ensure one CPU wont be doing nohz
> balancing for huge number of idle CPUs). I had looked at that approach
> couple of years earlier and couldn't measure that much of a gain. May
> be it is time to revisit that with increased core count.
Yeah, that would be best, although I remember there was a problem with
your approach back then, a fully idle node would not balance at all or
something like that.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists