Message-ID: <4B0B7036.3080803@jp.fujitsu.com>
Date: Tue, 24 Nov 2009 14:33:42 +0900
From: Hidetoshi Seto <seto.hidetoshi@...fujitsu.com>
To: Stanislaw Gruszka <sgruszka@...hat.com>
CC: Peter Zijlstra <peterz@...radead.org>,
Spencer Candland <spencer@...ehost.com>,
Américo Wang <xiyou.wangcong@...il.com>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...e.hu>,
Oleg Nesterov <oleg@...hat.com>,
Balbir Singh <balbir@...ibm.com>
Subject: Re: [PATCH] fix granularity of task_u/stime(), v2
Stanislaw Gruszka wrote:
> On Fri, Nov 20, 2009 at 11:00:21AM +0900, Hidetoshi Seto wrote:
>> E.g. assume that there are 2 tasks:
>>
>> Task A: interrupted by timer few times
>> (utime, stime, se.sum_sched_runtime) = (50, 50, 1000000000)
>> => total of runtime is 1 sec, but utime + stime is 100 ms
>>
>> Task B: interrupted by timer many times
>> (utime, stime, se.sum_sched_runtime) = (50, 50, 10000000)
>> => total of runtime is 10 ms, but utime + stime is 100 ms
>
> How probable is it that a task runs for a very long time but does not
> get the ticks?  I know this is possible, otherwise we would not see
> utime decreasing after the do_sys_times() siglock fix, but how probable?
For example, assume a watchdog-like task that calls sleep soon after
finishing its work.  Such a task is woken up by a timer interrupt taken
on another task and put on the run queue.  Once it gets a cpu it can
finish its work before the next tick, so it can run for a long time
without any tick landing on it.  I suppose you can find such tasks in
monitoring tools that contain sampling threads behaving like a watchdog.

As a side effect, since such tasks tend to use the cpu between ticks,
they make it more likely that other tasks are the ones interrupted by
the ticks.
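
To make it concrete, such a task looks roughly like the following (just
a userspace sketch for illustration, not code from this thread; the
10ms period and the "work" are made up):

/*
 * Rough sketch of a watchdog-like task: something arms a timer, the
 * task is woken, does a short burst of work and sleeps again.  If the
 * burst always fits between two ticks, se.sum_exec_runtime keeps
 * growing while utime/stime stay nearly constant, because no tick
 * ever lands on this task.
 */
#include <stddef.h>
#include <time.h>

static void do_short_work(void)
{
	/* e.g. sample some counters; finishes well within one tick */
}

int main(void)
{
	struct timespec period = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 };

	for (;;) {
		do_short_work();		/* runs between two ticks */
		nanosleep(&period, NULL);	/* woken later by a timer
						   interrupt taken on some
						   other task */
	}
	return 0;
}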
>>>> diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
>>>> index 5c9dc22..e065b8a 100644
>>>> --- a/kernel/posix-cpu-timers.c
>>>> +++ b/kernel/posix-cpu-timers.c
>>>> @@ -248,8 +248,8 @@ void thread_group_cputime(struct task_struct *tsk, struct task_cputime *times)
>>>>
>>>> 	t = tsk;
>>>> 	do {
>>>> -		times->utime = cputime_add(times->utime, t->utime);
>>>> -		times->stime = cputime_add(times->stime, t->stime);
>>>> +		times->utime = cputime_add(times->utime, task_utime(t));
>>>> +		times->stime = cputime_add(times->stime, task_stime(t));
>>>> 		times->sum_exec_runtime += t->se.sum_exec_runtime;
>>>>
>>>> 		t = next_thread(t);
>
> That works for me and I agree that this is the right fix.  Peter had
> concerns about p->prev_utime races and the additional need for further
> propagation of task_{s,u}time() into the posix-cpu-timers code.
> However I do not understand these problems.
I think that one of our concerns is the cost of task_{s,u}time(), which
might bring other problems if they are propagated further.  But I found
we can reduce the cost (by about half, or more), which is why I posted
the task_times() patch in another thread on LKML.
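
For reference, the shape of it is roughly like below (just a sketch of
the idea, not the exact posted patch; please see that thread for the
real code, and nsecs_to_cputime here only stands for a nsec ->
cputime_t conversion helper).  The point is that a single call scales
sum_exec_runtime once and derives both utime and stime from it, instead
of doing the whole scaling twice in task_utime() and task_stime():

/*
 * Sketch: scale the precise se.sum_exec_runtime by the tick-based
 * utime:stime ratio, and keep the results monotonic via prev_utime/
 * prev_stime.  Returning both values from one call is where the cost
 * reduction comes from.
 */
void task_times(struct task_struct *p, cputime_t *ut, cputime_t *st)
{
	cputime_t rtime, utime = p->utime, total;

	total = cputime_add(utime, p->stime);

	/* use CFS's precise accounting */
	rtime = nsecs_to_cputime(p->se.sum_exec_runtime);

	if (total) {
		u64 temp = (u64)rtime * utime;

		do_div(temp, total);
		utime = (cputime_t)temp;
	} else
		utime = rtime;

	/* compare with previous values to keep utime/stime monotonic */
	p->prev_utime = max(p->prev_utime, utime);
	p->prev_stime = max(p->prev_stime,
			    cputime_sub(rtime, p->prev_utime));

	*ut = p->prev_utime;
	*st = p->prev_stime;
}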
Thanks,
H.Seto