Message-ID: <YzQB8afi2rCPvuC1@hirez.programming.kicks-ass.net>
Date: Wed, 28 Sep 2022 10:12:33 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Zucheng Zheng <zhengzucheng@...wei.com>
Cc: mingo@...hat.com, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com, frederic@...nel.org,
hucool.lihua@...wei.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -next] sched/cputime: Fix the time backward issue about
/proc/stat
On Wed, Sep 28, 2022 at 11:34:02AM +0800, Zucheng Zheng wrote:
> From: Zheng Zucheng <zhengzucheng@...wei.com>
>
> The cputime of cpuN read from /proc/stat can go backwards between
> reads. For example, the user time reads 319 on the first read and
> 318 on the second:
> first:
> cat /proc/stat | grep cpu1
> cpu1 319 0 496 41665 0 0 0 0 0 0
> again:
> cat /proc/stat | grep cpu1
> cpu1 318 0 497 41674 0 0 0 0 0 0
>
> The values read from /proc/stat should be monotonically increasing;
> otherwise users may compute incorrect CPU usage.
>
> The root cause is that kcpustat_cpu_fetch_vtime() adds the current
> task's live vtime->utime + delta into the on-stack cpustat snapshot.
> If a task switch happens between two reads, the value added on the
> second read may be smaller than the value added on the first.
>
> CPU0                                                 CPU1
> First:
> show_stat()
>   ->kcpustat_cpu_fetch()
>     ->kcpustat_cpu_fetch_vtime()
>       ->cpustat[CPUTIME_USER] =
>           kcpustat_cpu(cpu) + vtime->utime + delta   rq->curr is task A
>
>                                                      A switches to B, and
>                                                      A's vtime->utime is
>                                                      less than one tick
> Then:
> show_stat()
>   ->kcpustat_cpu_fetch()
>     ->kcpustat_cpu_fetch_vtime()
>       ->cpustat[CPUTIME_USER] =
>           kcpustat_cpu(cpu) + vtime->utime + delta   rq->curr is task B
You're still not explaining where the time gets lost. And the patch is a
horrible band-aid.
What I think you're saying, after staring at this for a while, is that:
  vtime_task_switch_generic()
    __vtime_account_kernel(prev, vtime)
      vtime_account_{guest,system}(tsk, vtime)
        vtime->*time += get_vtime_delta()
        if (vtime->*time >= TICK_NSEC)
          account_*_time()
            account_system_index_time()
              task_group_account_field()
                __this_cpu_add(kernel_cpustat.cpustat[index], tmp); <---- here
is not folding time into kernel_cpustat when the task vtime isn't at
least a tick's worth. And then when we switch to another task, we leak
time.
There's another problem here, vtime_task_switch_generic() should use a
single call to sched_clock() to compute the old vtime_delta and set the
new vtime->starttime, otherwise there's a time hole there as well.
This is all quite the maze and it really wants cleaning up, not to be
made worse.
So I think you want to do two things:
- pull kernel_cpustat updates out of task_group_account_field()
and put them into vtime_task_switch_generic() to be purely
vtime->starttime based.
- make vtime_task_switch_generic() use a single sched_clock() call.
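A rough pseudocode sketch of what those two changes might look like together (function and field names are taken from the call chain above; the exact folding logic is an assumption, not the actual implementation):

```
vtime_task_switch_generic(prev):
	clock = sched_clock()		// single read for both uses
	delta = clock - vtime->starttime
	vtime->*time += delta
	fold vtime->*time into kernel_cpustat	// no TICK_NSEC threshold
						// here, so no sub-tick
						// remainder is leaked
	vtime->*time = 0
	vtime->starttime = clock	// same clock value: no hole between
					// the old delta and the new window
```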
I did not audit all the flavours of cputime; there might be fallout, be
sure to cross-compile a lot.
Frederic, you agree?