Message-Id: <1464931288-5886-1-git-send-email-wanpeng.li@hotmail.com>
Date: Fri, 3 Jun 2016 13:21:28 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Wanpeng Li <wanpeng.li@...mail.com>,
Ingo Molnar <mingo@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbec@...il.com>,
Paolo Bonzini <pbonzini@...hat.com>, Radim <rkrcmar@...hat.com>
Subject: [PATCH v2] sched/cputime: add steal clock warp handling
From: Wanpeng Li <wanpeng.li@...mail.com>
I observed that sometimes st is instantaneously 100%, and then idle stays at
100% even though there is a CPU hog on the guest CPU, after the CPU comes
back from CPU hotplug (N.B. this cannot always be readily reproduced). I
added tracing to capture it, as shown below:
cpuhp/1-12 [001] d.h1 167.461657: account_process_tick: steal = 1291385514, prev_steal_time = 0
cpuhp/1-12 [001] d.h1 167.461659: account_process_tick: steal_jiffies = 1291
<idle>-0 [001] d.h1 167.462663: account_process_tick: steal = 18732255, prev_steal_time = 1291000000
<idle>-0 [001] d.h1 167.462664: account_process_tick: steal_jiffies = 18446744072437
The steal clock warps backwards, the unsigned subtraction wraps around, and
steal_jiffies overflows.
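For illustration, here is a minimal userspace sketch (not part of the patch;
it assumes HZ=1000, so nsecs_to_jiffies() amounts to a division by 10^6)
that reproduces the wraparound with the exact values from the second trace
sample above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t prev_steal_time = 1291000000;	/* from the trace */
	uint64_t steal = 18732255;		/* clock warped backwards */

	/* The old code subtracts in u64, so a backwards warp wraps around. */
	uint64_t diff = steal - prev_steal_time;

	/* nsecs_to_jiffies() at HZ=1000: one jiffy is 10^6 ns. */
	printf("steal_jiffies = %llu\n",
	       (unsigned long long)(diff / 1000000));
	return 0;
}

This prints steal_jiffies = 18446744072437, the bogus value in the trace.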
Rik also pointed out to me:
| I have seen stuff like that with live migration too, in the past
This patch adds steal clock warp handling via a safe threshold: only steal
time deltas that are positive and smaller than one second are accounted (as
long as nohz_full keeps the one second timer tick), while intervals that are
negative or longer than one second are ignored and used only to resync the
guest with the host. A sketch of the effect follows.
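To make the behaviour concrete, here is a small userspace sketch (the helper
below is hypothetical and only mimics the patched logic, again assuming
HZ=1000) fed with the two trace samples above plus a normal one:

#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

static uint64_t prev_steal_time;	/* mimics this_rq()->prev_steal_time */

/* Hypothetical stand-in for the patched steal_account_process_tick(). */
static unsigned long account_steal(uint64_t steal)
{
	int64_t delta = steal - prev_steal_time;

	/* Negative or >1s intervals only resync us with the host. */
	if (delta < 0 || delta > NSEC_PER_SEC) {
		prev_steal_time = steal;
		return 0;
	}

	/* Account whole jiffies; keep the sub-jiffy rest for next round. */
	prev_steal_time += (delta / 1000000) * 1000000;
	return delta / 1000000;
}

int main(void)
{
	printf("%lu\n", account_steal(1291385514)); /* 0: >1s jump (hotplug), resync */
	printf("%lu\n", account_steal(18732255));   /* 0: backwards warp, resync */
	printf("%lu\n", account_steal(21732255));   /* 3: normal 3ms of steal */
	return 0;
}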
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Frederic Weisbecker <fweisbec@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim <rkrcmar@...hat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
---
v1 -> v2:
* update patch subject, description and comments
* deal with the case where steal time suddenly increases by a ludicrous amount
kernel/sched/cputime.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index f51c98c..751798a 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -262,17 +262,28 @@ static __always_inline unsigned long steal_account_process_tick(void)
 #ifdef CONFIG_PARAVIRT
 	if (static_key_false(&paravirt_steal_enabled)) {
 		u64 steal;
+		s64 delta;
 		unsigned long steal_jiffies;
 
 		steal = paravirt_steal_clock(smp_processor_id());
-		steal -= this_rq()->prev_steal_time;
+		delta = steal - this_rq()->prev_steal_time;
 
+		/*
+		 * Ignore this steal time difference if the guest and the host got
+		 * out of sync. This can happen due to events like live migration,
+		 * or CPU hotplug. The upper threshold is set to one second to match
+		 * the one second timer tick with nohz_full.
+		 */
+		if (unlikely(delta < 0 || delta > NSEC_PER_SEC)) {
+			this_rq()->prev_steal_time = steal;
+			return 0;
+		}
 		/*
 		 * steal is in nsecs but our caller is expecting steal
 		 * time in jiffies. Lets cast the result to jiffies
 		 * granularity and account the rest on the next rounds.
 		 */
-		steal_jiffies = nsecs_to_jiffies(steal);
+		steal_jiffies = nsecs_to_jiffies(delta);
 		this_rq()->prev_steal_time += jiffies_to_nsecs(steal_jiffies);
 
 		account_steal_time(jiffies_to_cputime(steal_jiffies));
--
1.9.1