Message-ID: <351ef694-92b5-bd43-e766-19e1a1e71453@linux.alibaba.com>
Date: Wed, 4 Jul 2018 14:56:42 +0800
From: Xunlei Pang <xlpang@...ux.alibaba.com>
To: Tejun Heo <tj@...nel.org>, Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>,
Frederic Weisbecker <frederic@...nel.org>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/cputime: Ensure correct utime and stime proportion

On 7/2/18 11:21 PM, Tejun Heo wrote:
> Hello, Peter.
>
> On Tue, Jun 26, 2018 at 05:49:08PM +0200, Peter Zijlstra wrote:
>> Well, no, because the Changelog is incomprehensible and the patch
>> doesn't really have useful comments, so I'll have to reverse engineer
>> the entire thing, and I've just not had time for that.
>
> Just as an additional data point, we also sometimes see artifacts from
> cputime_adjust() in the form of per-task user or sys time getting
> stuck for some period (in extreme cases for over a minute) while the
> application isn't doing anything differently. We're telling the users
> that it's an inherent sampling artifact but it'd be nice to improve it
> if possible without adding noticeable overhead. No idea whether this
> patch's approach is a good one tho.

The patch adds no noticeable overhead apart from the extra cputime
fields added to task_struct. We've been running it on our servers for
months, and it has looked good so far.
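
Regarding the "stuck" values Tejun mentioned: below is a rough
userspace sketch of the proportional split plus the monotonicity clamp
done by cputime_adjust(), simplified from my reading of
kernel/sched/cputime.c and not the actual kernel code. It shows how
the reported stime can stay frozen once the workload shifts from
system to user mode, because the precise rtime is split in the sampled
tick proportion and then clamped so neither value ever goes backwards.

/*
 * Rough userspace model of the split done by cputime_adjust() in
 * kernel/sched/cputime.c -- an illustration only, not the kernel code.
 * rtime is the precise sum_exec_runtime; utime/stime are the sampled
 * tick totals.  rtime is divided in the sampled proportion, then
 * clamped so neither reported value goes backwards; the clamp is what
 * makes one field look "stuck".
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

struct prev_cputime {
	uint64_t utime;
	uint64_t stime;
};

static void adjust(struct prev_cputime *prev, uint64_t rtime,
		   uint64_t utime, uint64_t stime)
{
	uint64_t total = utime + stime;

	if (total)
		/* stand-in for the kernel's mul_u64_u64_div_u64() */
		stime = (uint64_t)((__uint128_t)stime * rtime / total);
	else
		stime = rtime;

	/* make sure stime doesn't go backwards */
	if (stime < prev->stime)
		stime = prev->stime;
	utime = rtime - stime;

	/* make sure utime doesn't go backwards */
	if (utime < prev->utime) {
		utime = prev->utime;
		stime = rtime - utime;
	}

	prev->utime = utime;
	prev->stime = stime;
	printf("reported utime=%" PRIu64 " stime=%" PRIu64 "\n",
	       utime, stime);
}

int main(void)
{
	struct prev_cputime prev = { 0, 0 };

	/* phase 1: all sampled ticks land in system mode */
	adjust(&prev, 1000, 0, 1000);		/* utime=0    stime=1000 */

	/*
	 * phase 2: the task now runs almost only in user mode, but the
	 * scaled stime (1000*2000/2200 = 909) would go backwards, so it
	 * is clamped at 1000 and appears frozen from then on.
	 */
	adjust(&prev, 2000, 1200, 1000);	/* utime=1000 stime=1000 */
	adjust(&prev, 3000, 2200, 1000);	/* utime=2000 stime=1000 */
	return 0;
}

In the real kernel the mismatch comes from sched_clock()-based rtime
disagreeing with the tick-sampled utime/stime, which is what triggers
the clamp and the artifact.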
Thanks!