Message-ID: <CANRm+CzusmR2ed3dogbZWGx27Z9e0re-OeYV-mXKu-+0ZR9B4Q@mail.gmail.com>
Date: Wed, 8 Jun 2016 19:11:56 +0800
From: Wanpeng Li <kernellwp@...il.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
kvm <kvm@...r.kernel.org>, Wanpeng Li <wanpeng.li@...mail.com>,
Ingo Molnar <mingo@...nel.org>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Rik van Riel <riel@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Frederic Weisbecker <fweisbec@...il.com>,
Radim Krčmář <rkrcmar@...hat.com>
Subject: Re: [PATCH v5 3/3] sched/cputime: Add steal time support to full
dynticks CPU time accounting
2016-06-08 18:14 GMT+08:00 Paolo Bonzini <pbonzini@...hat.com>:
>
>
> On 08/06/2016 05:05, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpeng.li@...mail.com>
>>
>> This patch adds guest steal-time support to full dynticks CPU
>> time accounting. After the following commit:
>>
>> ff9a9b4c4334 ("sched, time: Switch VIRT_CPU_ACCOUNTING_GEN to jiffy granularity")
>>
>> ... time sampling became jiffy based, even though it still listens
>> to ring boundaries, so steal_account_process_tick() is reused to
>> account how many 'ticks' of stolen time have elapsed since the last
>> accumulation.
>>
>> Suggested-by: Rik van Riel <riel@...hat.com>
>> Cc: Ingo Molnar <mingo@...nel.org>
>> Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
>> Cc: Rik van Riel <riel@...hat.com>
>> Cc: Thomas Gleixner <tglx@...utronix.de>
>> Cc: Frederic Weisbecker <fweisbec@...il.com>
>> Cc: Paolo Bonzini <pbonzini@...hat.com>
>> Cc: Radim Krčmář <rkrcmar@...hat.com>
>> Signed-off-by: Wanpeng Li <wanpeng.li@...mail.com>
>> ---
>> v4 -> v5:
>> * apply the same logic to account_idle_time(), so change get_vtime_delta() instead
>> v3 -> v4:
>> * fix grammar errors, thanks Ingo
>> * clean up fragile code, thanks Ingo
>> v2 -> v3:
>> * convert steal time jiffies to cputime
>> v1 -> v2:
>> * fix divide-by-zero bug, thanks Rik
>>
>> kernel/sched/cputime.c | 13 +++++++++----
>> 1 file changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
>> index 75f98c5..b62f9f8 100644
>> --- a/kernel/sched/cputime.c
>> +++ b/kernel/sched/cputime.c
>> @@ -257,7 +257,7 @@ void account_idle_time(cputime_t cputime)
>> cpustat[CPUTIME_IDLE] += (__force u64) cputime;
>> }
>>
>> -static __always_inline bool steal_account_process_tick(void)
>> +static __always_inline unsigned long steal_account_process_tick(void)
>> {
>> #ifdef CONFIG_PARAVIRT
>> if (static_key_false(&paravirt_steal_enabled)) {
>> @@ -279,7 +279,7 @@ static __always_inline bool steal_account_process_tick(void)
>> return steal_jiffies;
>> }
>> #endif
>> - return false;
>> + return 0;
>> }
>>
>> /*
>> @@ -681,12 +681,17 @@ static cputime_t vtime_delta(struct task_struct *tsk)
>> static cputime_t get_vtime_delta(struct task_struct *tsk)
>> {
>> unsigned long now = READ_ONCE(jiffies);
>> - unsigned long delta = now - tsk->vtime_snap;
>> + cputime_t delta_time, steal_time;
>>
>> + steal_time = jiffies_to_cputime(steal_account_process_tick());
>> + delta_time = jiffies_to_cputime(now - tsk->vtime_snap);
>> WARN_ON_ONCE(tsk->vtime_snap_whence == VTIME_INACTIVE);
>> tsk->vtime_snap = now;
>>
>> - return jiffies_to_cputime(delta);
>> + if (steal_time < delta_time)
>> + delta_time -= steal_time;
>> +
>> + return delta_time;
>
> I think this is wrong. If you get more steal time than delta time
> (which as Rik noticed can happen due to partial jiffies), you will end
> up accounting things twice, once in steal_account_process_tick and once
> here. In other words you'll get the exact bug you're trying to fix.
>
> The right thing is to add a max_jiffies argument to
> steal_account_process_tick. steal_account_process_tick will not attempt
> to remove more than max_jiffies. Here you pass delta_jiffies (i.e. now
> - tsk->vtime_snap) to steal_account_process_tick, existing callers can
> pass ULONG_MAX. You can then
>
> return jiffies_to_cputime(delta_jiffies - steal_jiffies);
>
> in get_vtime_delta and not worry about underflow.
I see, I will do it in the next version.
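
For reference, an untested sketch of that approach (the max_jiffies
argument and the min() clamp follow your description; the surrounding
code is as in the current kernel/sched/cputime.c):

static __always_inline unsigned long
steal_account_process_tick(unsigned long max_jiffies)
{
#ifdef CONFIG_PARAVIRT
	if (static_key_false(&paravirt_steal_enabled)) {
		unsigned long steal_jiffies;
		u64 steal;

		steal = paravirt_steal_clock(smp_processor_id());
		steal -= this_rq()->prev_steal_time;

		/*
		 * steal is in nsecs but our caller expects jiffies.
		 * Clamp to max_jiffies so we never account more steal
		 * time than the caller's window; the remainder is
		 * carried over to the next round.
		 */
		steal_jiffies = min(nsecs_to_jiffies(steal), max_jiffies);
		this_rq()->prev_steal_time += jiffies_to_nsecs(steal_jiffies);

		account_steal_time(jiffies_to_cputime(steal_jiffies));
		return steal_jiffies;
	}
#endif
	return 0;
}

static cputime_t get_vtime_delta(struct task_struct *tsk)
{
	unsigned long now = READ_ONCE(jiffies);
	unsigned long delta_jiffies = now - tsk->vtime_snap;
	/* Steal never exceeds the window, so no underflow below. */
	unsigned long steal_jiffies = steal_account_process_tick(delta_jiffies);

	WARN_ON_ONCE(tsk->vtime_snap_whence == VTIME_INACTIVE);
	tsk->vtime_snap = now;

	return jiffies_to_cputime(delta_jiffies - steal_jiffies);
}

The existing tick-path callers would just pass ULONG_MAX to keep their
behaviour unchanged.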
Regards,
Wanpeng Li