Message-Id: <1275516150.2913.276.camel@sbs-t61.sc.intel.com>
Date: Wed, 02 Jun 2010 15:02:30 -0700
From: Suresh Siddha <suresh.b.siddha@...el.com>
To: "svaidy@...ux.vnet.ibm.com" <svaidy@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
Arjan van de Ven <arjan@...ux.jf.intel.com>,
Venkatesh Pallipadi <venki@...gle.com>,
"ego@...ibm.com" <ego@...ibm.com>,
LKML <linux-kernel@...r.kernel.org>,
Dominik Brodowski <linux@...inikbrodowski.net>,
Nigel Cunningham <ncunningham@...a.org.au>
Subject: Re: [patch 7/7] timers: use nearest busy cpu for migrating timers
from an idle cpu
On Tue, 2010-06-01 at 16:37 -0700, Vaidyanathan Srinivasan wrote:
> * Suresh Siddha <suresh.b.siddha@...el.com> [2010-05-17 11:27:33]:
>
> > Currently we are migrating the unpinned timers from an idle cpu to the
> > cpu doing idle load balancing. (When all the cpus in the system are idle,
> > there is no idle load balancing cpu and timers get added to the same idle
> > cpu where the request was made, so the current optimization works only on
> > a semi-idle system.)
> >
> > And in a semi-idle system, we no longer have periodic ticks on the idle
> > cpu doing the idle load balancing on behalf of all the cpus. Using that
> > cpu will add more delays to the timers than intended (as that cpu's timer
> > base may not be up to date wrt jiffies etc). This was causing mysterious
> > slowdowns during boot etc.
>
> Hi Suresh,
>
> Can you please give more info on why this caused delays in bootup or in
> timer events? The jiffies should be updated even with the current push
> model, right? We will still have some pinned timers on the idle cpu, and
> the time base will have to be updated when a timer event happens.
With these changes, the idle load balancer doesn't have periodic ticks
while idle. So for that cpu, timer_jiffies in the timer base won't be up
to date when another idle cpu adds a timer to it, and we will introduce
more delay for the timers than expected.
> > For now, in the semi idle case, use the nearest busy cpu for migrating timers from an
> > idle cpu. This is good for power-savings anyway.
>
> Yes, this is a good solution. But on a large system the only running
> cpu may accumulate too many timers, which could affect the performance
> of the task running there. We will need to test this out.
Yes, it will be good to measure the impact of this on big systems. If we
see any performance issues, we can migrate the timers only when the
power-savings tunable is selected.
>
> > #ifdef CONFIG_NO_HZ
> > +int get_nohz_timer_target(void)
> > +{
> > + int cpu = smp_processor_id();
> > + int i;
> > + struct sched_domain *sd;
> > +
> > + for_each_domain(cpu, sd) {
> > + for_each_cpu(i, sched_domain_span(sd))
> > + if (!idle_cpu(i))
> > + return i;
> > + }
> > + return cpu;
> > +}
>
> We will need a better way of finding the right CPU, since this code
> will take a long time on a larger system with only one or two busy cpus.
>
> We should perhaps pick the cpu from the complement of the current
> nohz.grp_idle_mask, or something derived from these masks, instead of
> searching the sched domains. The only advantage I see to the sched-domain
> walk is that we will get the nearest busy CPU, as in the same node, which
> is better.
Yes, I wanted to migrate the timers to the nearest busy cpu as well as
distribute the load among all the busy cpus (if multiple cpus are busy).
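For discussion's sake, a userspace sketch of the mask-based alternative
(this is not the kernel helper; nearest_busy_cpu, NR_CPUS=8, and the
bitmask standing in for the complement of nohz.grp_idle_mask are all
hypothetical, and "nearness" here is just CPU-number distance rather than
real topology):

```c
#include <assert.h>

#define NR_CPUS 8

/* Pick the busy cpu numerically closest to the requesting cpu from a
 * busy-cpu bitmask; fall back to the requesting cpu if none is busy. */
static int nearest_busy_cpu(unsigned int busy_mask, int cpu)
{
	int best = cpu, best_dist = NR_CPUS + 1;

	for (int i = 0; i < NR_CPUS; i++) {
		if (!(busy_mask & (1u << i)))
			continue;
		int dist = i > cpu ? i - cpu : cpu - i;
		if (dist < best_dist) {
			best_dist = dist;
			best = i;
		}
	}
	return best;
}
```

With cpus 1 and 6 busy (mask 0x42), a request from cpu 7 picks cpu 6 and
one from cpu 0 picks cpu 1; with an empty mask the caller keeps its own
cpu, like the fallback in get_nohz_timer_target(). A real version would
need topology-aware distance to keep the same-node property of the
sched-domain walk.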
thanks,
suresh
--