Message-Id: <1306279146-23487-1-git-send-email-snanda@chromium.org>
Date: Tue, 24 May 2011 16:19:06 -0700
From: Sameer Nanda <snanda@...omium.org>
To: akpm@...ux-foundation.org, ext-phil.2.carmody@...ia.com,
Tim.Deegan@...rix.com, jbeulich@...ell.com, snanda@...gle.com
Cc: linux-kernel@...r.kernel.org, Sameer Nanda <snanda@...omium.org>
Subject: [PATCH] init: skip calibration delay if previously done

For each CPU, perform the delay calibration only once. On subsequent calls,
reuse the cached per-CPU value of loops_per_jiffy.

This saves about 200ms of resume time on dual-core Intel Atom N5xx-based
systems, bringing kernel resume time on such systems down from about 500ms
to about 300ms.
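
The change is the usual per-CPU caching idiom: keep one loops_per_jiffy slot
per CPU, reuse it when it is non-zero, and fill it in after the first
calibration. Roughly, as a minimal sketch of the pattern with hypothetical
names (not the exact hunk applied below):

	#include <linux/percpu.h>
	#include <linux/smp.h>

	/* Hypothetical cache; zero means "this CPU not calibrated yet". */
	static DEFINE_PER_CPU(unsigned long, cached_lpj);

	static unsigned long get_or_calibrate_lpj(void)
	{
		int cpu = smp_processor_id();
		unsigned long lpj = per_cpu(cached_lpj, cpu);

		if (!lpj) {
			/* First call on this CPU: run the real calibration. */
			lpj = calibrate_delay_converge();
			per_cpu(cached_lpj, cpu) = lpj;
		}
		return lpj;
	}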
Signed-off-by: Sameer Nanda <snanda@...omium.org>
---
 init/calibrate.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/init/calibrate.c b/init/calibrate.c
index 76ac919..47d3408 100644
--- a/init/calibrate.c
+++ b/init/calibrate.c
@@ -183,11 +183,18 @@ recalibrate:
 	return lpj;
 }
 
+DEFINE_PER_CPU(unsigned long, cpu_loops_per_jiffy) = { 0 };
+
 void __cpuinit calibrate_delay(void)
 {
 	static bool printed;
+	int this_cpu = smp_processor_id();
 
-	if (preset_lpj) {
+	if (per_cpu(cpu_loops_per_jiffy, this_cpu)) {
+		loops_per_jiffy = per_cpu(cpu_loops_per_jiffy, this_cpu);
+		pr_info("Calibrating delay loop (skipped) "
+			"already calibrated this CPU previously.. ");
+	} else if (preset_lpj) {
 		loops_per_jiffy = preset_lpj;
 		if (!printed)
 			pr_info("Calibrating delay loop (skipped) "
@@ -205,6 +212,7 @@ void __cpuinit calibrate_delay(void)
 			pr_info("Calibrating delay loop... ");
 		loops_per_jiffy = calibrate_delay_converge();
 	}
+	per_cpu(cpu_loops_per_jiffy, this_cpu) = loops_per_jiffy;
 	if (!printed)
 		pr_cont("%lu.%02lu BogoMIPS (lpj=%lu)\n",
 			loops_per_jiffy/(500000/HZ),
--
1.7.3.1