Message-Id: <1463574454-3587-1-git-send-email-wanpeng.li@hotmail.com>
Date: Wed, 18 May 2016 20:27:34 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Cc: Wanpeng Li <wanpeng.li@...mail.com>,
Ingo Molnar <mingo@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbec@...il.com>,
Paolo Bonzini <pbonzini@...hat.com>, Radim <rkrcmar@...hat.com>
Subject: [PATCH v3] sched/cputime: add steal time support to full dynticks CPU time accounting
From: Wanpeng Li <wanpeng.li@...mail.com>
This patch adds guest steal time support to full dynticks CPU
time accounting. After commit ff9a9b4c4334 ("sched, time: Switch
VIRT_CPU_ACCOUNTING_GEN to jiffy granularity"), time is sampled at
jiffy granularity even though accounting still happens at the
kernel/user ring boundaries, so steal_account_process_tick() is
reused to account how many ticks of steal time have passed since
the last accumulation.
Suggested-by: Rik van Riel <riel@...hat.com>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Rik van Riel <riel@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: Frederic Weisbecker <fweisbec@...il.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>
Cc: Radim <rkrcmar@...hat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
---
v2 -> v3:
* convert steal time jiffies to cputime
v1 -> v2:
* fix divide zero bug, thanks Rik
kernel/sched/cputime.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 75f98c5..f51c98c 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -257,7 +257,7 @@ void account_idle_time(cputime_t cputime)
 		cpustat[CPUTIME_IDLE] += (__force u64) cputime;
 }
 
-static __always_inline bool steal_account_process_tick(void)
+static __always_inline unsigned long steal_account_process_tick(void)
 {
 #ifdef CONFIG_PARAVIRT
 	if (static_key_false(&paravirt_steal_enabled)) {
@@ -279,7 +279,7 @@ static __always_inline bool steal_account_process_tick(void)
 		return steal_jiffies;
 	}
 #endif
-	return false;
+	return 0;
 }
 
 /*
@@ -691,8 +691,14 @@ static cputime_t get_vtime_delta(struct task_struct *tsk)
 
 static void __vtime_account_system(struct task_struct *tsk)
 {
+	cputime_t steal_time;
 	cputime_t delta_cpu = get_vtime_delta(tsk);
+	unsigned long delta_st = steal_account_process_tick();
 
+	steal_time = jiffies_to_cputime(delta_st);
+	if (steal_time >= delta_cpu)
+		return;
+	delta_cpu -= steal_time;
 	account_system_time(tsk, irq_count(), delta_cpu, cputime_to_scaled(delta_cpu));
 }
 
@@ -723,7 +729,16 @@ void vtime_account_user(struct task_struct *tsk)
 	write_seqcount_begin(&tsk->vtime_seqcount);
 	tsk->vtime_snap_whence = VTIME_SYS;
 	if (vtime_delta(tsk)) {
+		cputime_t steal_time;
+		unsigned long delta_st = steal_account_process_tick();
 		delta_cpu = get_vtime_delta(tsk);
+		steal_time = jiffies_to_cputime(delta_st);
+
+		if (steal_time >= delta_cpu) {
+			write_seqcount_end(&tsk->vtime_seqcount);
+			return;
+		}
+		delta_cpu -= steal_time;
 		account_user_time(tsk, delta_cpu, cputime_to_scaled(delta_cpu));
 	}
 	write_seqcount_end(&tsk->vtime_seqcount);
--
1.9.1