Message-ID: <20130512113157.GG3648@linux.vnet.ibm.com>
Date: Sun, 12 May 2013 04:31:57 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Bjørn Mork <bjorn@...k.no>
Cc: "Paul E. McKenney" <paul.mckenney@...aro.org>,
linux-kernel@...r.kernel.org
Subject: Re: Bisected post-3.9 regression: Resume takes 5 times as much time
as with v3.9
On Sat, May 11, 2013 at 08:04:50PM +0200, Bjørn Mork wrote:
> Hello,
>
> resuming from system suspend is intolerably slow in current mainline. I
> am not the most patient person around, and I actually started out
> bisecting this believing it was hanging... Turned out it wasn't really
> hanging. It just took 15 seconds to wake up from suspend-to-ram.
> Timing v3.9, I found that it took less than 3 seconds on this ancient
> laptop I'm using.
>
> Bisecting it ended up pointing to
>
> commit c0f4dfd4f90f1667d234d21f15153ea09a2eaa66
> Author: Paul E. McKenney <paul.mckenney@...aro.org>
> Date: Fri Dec 28 11:30:36 2012 -0800
>
> rcu: Make RCU_FAST_NO_HZ take advantage of numbered callbacks
>
> Because RCU callbacks are now associated with the number of the grace
> period that they must wait for, CPUs can now take advance callbacks
> corresponding to grace periods that ended while a given CPU was in
> dyntick-idle mode. This eliminates the need to try forcing the RCU
> state machine while entering idle, thus reducing the CPU intensiveness
> of RCU_FAST_NO_HZ, which should increase its energy efficiency.
>
> Signed-off-by: Paul E. McKenney <paul.mckenney@...aro.org>
> Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
>
>
>
> Being a big patch, I'm pretty sure that the problem is some minor
> issue. But rather than trying to understand it, I just tried reverting
> it on top of the current mainline, and I can confirm that this fixes
> the regression. I'll leave the understanding to you :)
>
> I'm attaching the revert patch as I had to fix a conflict, and may have
> done something wrong there. I'm also attaching my .config.
>
> Let me know if you need more information, or want me to try out proposed
> fixes.
We don't want to back out the RCU_FAST_NO_HZ changes due to their
energy-efficiency benefits. So could you please try out Borislav's
patch below? He ran into the same issue a few weeks ago, and this
one fixed it for him.
Thanx, Paul
------------------------------------------------------------------------
rcu: Expedite grace periods during suspend/resume
CONFIG_RCU_FAST_NO_HZ can increase grace-period durations by up to
a factor of four, which can result in long suspend and resume times.
Thus, this commit temporarily switches to expedited grace periods when
suspending the box and returns to normal settings when resuming.
Signed-off-by: Borislav Petkov <bp@...e.de>
Signed-off-by: Paul E. McKenney <paulmck@...ux.vnet.ibm.com>
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index a9610d1..d9604a4 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -53,6 +53,7 @@
 #include <linux/delay.h>
 #include <linux/stop_machine.h>
 #include <linux/random.h>
+#include <linux/suspend.h>
 
 #include "rcutree.h"
 #include <trace/events/rcu.h>
@@ -3003,6 +3004,24 @@ static int __cpuinit rcu_cpu_notify(struct notifier_block *self,
 	return NOTIFY_OK;
 }
 
+static int rcu_pm_notify(struct notifier_block *self,
+			 unsigned long action, void *hcpu)
+{
+	switch (action) {
+	case PM_HIBERNATION_PREPARE:
+	case PM_SUSPEND_PREPARE:
+		rcu_expedited = 1;
+		break;
+	case PM_POST_HIBERNATION:
+	case PM_POST_SUSPEND:
+		rcu_expedited = 0;
+		break;
+	default:
+		break;
+	}
+	return NOTIFY_OK;
+}
+
 /*
  * Spawn the kthread that handles this RCU flavor's grace periods.
  */
@@ -3243,6 +3262,7 @@ void __init rcu_init(void)
 	 * or the scheduler are operational.
 	 */
 	cpu_notifier(rcu_cpu_notify, 0);
+	pm_notifier(rcu_pm_notify, 0);
 	for_each_online_cpu(cpu)
 		rcu_cpu_notify(NULL, CPU_UP_PREPARE, (void *)(long)cpu);
 }