Message-ID: <OSBPR01MB21837C8931D90AE55AF4A955EB529@OSBPR01MB2183.jpnprd01.prod.outlook.com>
Date: Wed, 12 May 2021 03:28:02 +0000
From: "hasegawa-hitomi@...itsu.com" <hasegawa-hitomi@...itsu.com>
To: "'fweisbec@...il.com'" <fweisbec@...il.com>,
"'tglx@...utronix.de'" <tglx@...utronix.de>,
"'mingo@...nel.org'" <mingo@...nel.org>,
"'peterz@...radead.org'" <peterz@...radead.org>,
"'juri.lelli@...hat.com'" <juri.lelli@...hat.com>,
"'vincent.guittot@...aro.org'" <vincent.guittot@...aro.org>
CC: "'dietmar.eggemann@....com'" <dietmar.eggemann@....com>,
"'rostedt@...dmis.org'" <rostedt@...dmis.org>,
"'bsegall@...gle.com'" <bsegall@...gle.com>,
"'mgorman@...e.de'" <mgorman@...e.de>,
"'bristot@...hat.com'" <bristot@...hat.com>,
"'linux-kernel@...r.kernel.org'" <linux-kernel@...r.kernel.org>
Subject: Utime and stime are less when getrusage (RUSAGE_THREAD) is executed
on a tickless CPU.
Hello.
I found that when I call getrusage(RUSAGE_THREAD) on a thread running on a tickless CPU, the utime and stime it returns are smaller than the CPU time the thread has actually consumed, unlike getrusage(RUSAGE_SELF) on a single-threaded process.
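
For reference, a minimal reproducer sketch (hypothetical, not the exact program I used; it assumes CPU 1 is a tickless CPU, e.g. listed in nohz_full= on the kernel command line):

/*
 * Pin this thread to a CPU assumed to be tickless, spin in pure
 * userspace, then compare the wall-clock time of the loop with
 * what getrusage(RUSAGE_THREAD) reports.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <time.h>
#include <sys/resource.h>

int main(void)
{
	cpu_set_t set;
	struct rusage ru;
	struct timespec t0, t1;
	volatile unsigned long i;
	double wall;

	CPU_ZERO(&set);
	CPU_SET(1, &set);	/* assumption: CPU 1 is tickless */
	if (sched_setaffinity(0, sizeof(set), &set) != 0)
		perror("sched_setaffinity");

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < 2000000000UL; i++)	/* userspace-only busy loop */
		;
	clock_gettime(CLOCK_MONOTONIC, &t1);
	getrusage(RUSAGE_THREAD, &ru);

	wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("wall  : %.6f s\n", wall);
	printf("utime : %ld.%06ld s\n",
	       (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
	printf("stime : %ld.%06ld s\n",
	       (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
	return 0;
}

On a tickless CPU, the printed utime stays noticeably below the wall-clock time of the loop.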
This problem seems to be caused by se.sum_exec_runtime not being brought up to date just before the values are read from 'current'.
In the current implementation, task_cputime_adjusted() calls task_cputime() to get the utime and stime of 'current', then calls cputime_adjust() to scale them so that their sum equals cputime.sum_exec_runtime. On a tickless CPU, sum_exec_runtime is not updated periodically by the tick, so it lags behind the time actually consumed.
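
For reference, task_cputime_adjusted() (kernel/sched/cputime.c, the !CONFIG_VIRT_CPU_ACCOUNTING_NATIVE variant) currently looks roughly like this; note that it samples se.sum_exec_runtime directly, without forcing an update first:

void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
{
	struct task_cputime cputime = {
		/* may be stale on a tickless CPU */
		.sum_exec_runtime = p->se.sum_exec_runtime,
	};

	task_cputime(p, &cputime.utime, &cputime.stime);
	cputime_adjust(&cputime, &p->prev_cputime, ut, st);
}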
Therefore, I think se.sum_exec_runtime should be brought up to date just before the values are read from 'current', as already happens for the RUSAGE_* cases other than RUSAGE_THREAD. I'm thinking of the following improvement.
@@ void getrusage(struct task_struct *p, int who, struct rusage *r)
 	if (who == RUSAGE_THREAD) {
+		task_sched_runtime(current);
 		task_cputime_adjusted(current, &utime, &stime);
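
task_sched_runtime() (kernel/sched/core.c) already performs the update I want as a side effect: it takes the runqueue lock and, if the task is currently running, calls the scheduler class' update_curr() to fold the not-yet-accounted delta into se.sum_exec_runtime. Trimmed (the 64-bit fast path omitted), it is roughly:

unsigned long long task_sched_runtime(struct task_struct *p)
{
	struct rq_flags rf;
	struct rq *rq;
	u64 ns;

	rq = task_rq_lock(p, &rf);
	if (task_current(rq, p) && task_on_rq_queued(p)) {
		prefetch_curr_exec_start(p);
		update_rq_clock(rq);
		/* folds the pending delta into se.sum_exec_runtime */
		p->sched_class->update_curr(rq);
	}
	ns = p->se.sum_exec_runtime;
	task_rq_unlock(rq, p, &rf);

	return ns;
}

So the added call above is made purely for this side effect, and the returned value is deliberately discarded.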
Is there any possible problem with this?
Thanks.
Hitomi Hasegawa