Message-ID: <21628e16-5d34-d7f6-8c10-ff354b4e7b35@linux.alibaba.com>
Date: Tue, 3 Jul 2018 10:10:05 +0800
From: 王贇 <yun.wang@...ux.alibaba.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] tg: count the sum wait time of a task group
Hi, Peter
On 2018/7/2 8:03 PM, Peter Zijlstra wrote:
> On Mon, Jul 02, 2018 at 03:29:39PM +0800, 王贇 wrote:
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 1866e64..ef82ceb 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -862,6 +862,7 @@ static void update_curr_fair(struct rq *rq)
>> static inline void
>> update_stats_wait_end(struct cfs_rq *cfs_rq, struct sched_entity *se)
>> {
>> + struct task_group *tg;
>> struct task_struct *p;
>> u64 delta;
>>
>> @@ -882,6 +883,9 @@ static void update_curr_fair(struct rq *rq)
>> return;
>> }
>> trace_sched_stat_wait(p, delta);
>> + } else {
>> + tg = group_cfs_rq(se)->tg;
>> + __schedstat_add(tg->wait_sum, delta);
>> }
>
> You're joking right? This patch is both broken and utterly insane.
>
> You're wanting to update an effectively global variable for every
> schedule action (and it's broken because it is without any serialization
> or atomics).
Thanks for the reply, and sorry for the thoughtless design. I'll rewrite
the code to use a per-cpu variable and assemble the results only when
the statistic is displayed.
Regards,
Michael Wang
>
> NAK
>