Message-ID: <20100727210210.58d3118c@infradead.org>
Date: Tue, 27 Jul 2010 21:02:10 -0700
From: Arjan van de Ven <arjan@...radead.org>
To: linux-kernel@...r.kernel.org
Cc: tglx@...utronix.de, linux-arch@...r.kernel.org
Subject: [patch] Remove the per cpu tick skew
Hi,

The following patch is a win for power management on x86... but since
this touches generic code, are there any other architectures that would
be negatively affected by this?
Subject: [patch] Remove the per cpu tick skew
Historically, Linux has skewed the regular timer tick across CPUs so that
it does not fire on all of them at the same time, in order to avoid
contention on xtime_lock. Nowadays, with the tickless kernel, this
contention no longer occurs because timekeeping and the jiffies update are
handled differently. In addition, the skew measurably hurts power
consumption on many-core systems: the staggered wakeups prevent the CPUs,
and hence the package, from being idle at the same time.
Signed-off-by: Arjan van de Ven <arjan@...ux.intel.com>
--- linux.trees.git/kernel/time/tick-sched.c~ 2010-07-16 09:40:50.000000000 -0400
+++ linux.trees.git/kernel/time/tick-sched.c 2010-07-26 11:18:51.138003329 -0400
@@ -780,7 +780,6 @@
 {
 	struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
 	ktime_t now = ktime_get();
-	u64 offset;
 
 	/*
 	 * Emulate tick processing via per-CPU hrtimers:
@@ -790,10 +789,6 @@
 
 	/* Get the next period (per cpu) */
 	hrtimer_set_expires(&ts->sched_timer, tick_init_jiffy_update());
-	offset = ktime_to_ns(tick_period) >> 1;
-	do_div(offset, num_possible_cpus());
-	offset *= smp_processor_id();
-	hrtimer_add_expires_ns(&ts->sched_timer, offset);
 
	for (;;) {
		hrtimer_forward(&ts->sched_timer, now, tick_period);
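For illustration, here is a minimal user-space sketch of the arithmetic the
removed lines performed; tick_period_ns and nr_cpus are stand-ins for the
kernel's tick_period and num_possible_cpus(), and the numbers assume HZ=1000
with 8 possible CPUs:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* stand-ins for tick_period (HZ=1000 -> 1 ms) and num_possible_cpus() */
	uint64_t tick_period_ns = 1000000;
	unsigned int nr_cpus = 8;
	unsigned int cpu;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		/* offset = (tick_period / 2) / num_possible_cpus() * cpu_id */
		uint64_t offset = (tick_period_ns >> 1) / nr_cpus * cpu;

		printf("cpu %u: tick skewed by %llu ns\n",
		       cpu, (unsigned long long)offset);
	}
	return 0;
}

With the patch applied, every CPU programs its sched_timer to expire on the
same jiffy boundary, so the per-CPU offsets above simply disappear.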
--
Arjan van de Ven Intel Open Source Technology Centre
For development, discussion and tips for power savings,
visit http://www.lesswatts.org