Message-ID: <CAPM31RKSUManoOO9QO5CafVGUHZGafekstF8Xx06pdBL+E93sQ@mail.gmail.com>
Date: Tue, 2 Apr 2013 18:23:19 -0700
From: Paul Turner <pjt@...gle.com>
To: Alex Shi <alex.shi@...el.com>
Cc: Vincent Guittot <vincent.guittot@...aro.org>,
"mingo@...hat.com" <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Arjan van de Ven <arjan@...ux.intel.com>,
Borislav Petkov <bp@...en8.de>,
Namhyung Kim <namhyung@...nel.org>,
Mike Galbraith <efault@....de>, gregkh@...uxfoundation.org,
Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [patch v6 03/21] sched: only count runnable avg on cfs_rq's nr_running
Nack:
Vincent is correct, rq->avg is supposed to be the average time that an
rq is runnable; this includes (for example) SCHED_RT.
It's intended to be useful as a hint to something like a power governor,
which wants to know how busy the CPU is in general.
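(For illustration only, not in-tree code: a governor could derive a rough
busy fraction from rq->avg.  The field names below match the per-entity
load-tracking code this series builds on; the helper itself is hypothetical.)

static inline unsigned int rq_busy_pct(struct rq *rq)
{
        /*
         * runnable_avg_sum accumulates decayed time during which the rq
         * had anything runnable (CFS or RT alike); runnable_avg_period
         * is the decayed total time.  Their ratio approximates overall
         * CPU busyness.
         */
        u32 period = rq->avg.runnable_avg_period + 1;   /* avoid div by 0 */

        return (rq->avg.runnable_avg_sum * 100) / period;
}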
> On the other side, the periodic load balancer balances on the combined CFS/RT
> load, but removes the RT utilisation from cpu_power.
This I don't quite understand; these inputs are already time scaled (by decay).
Stated alternatively, what you want is:
"average load" / "available power", which is:
(rq->cfs.runnable_load_avg + rq->cfs.blocked_load_avg) / (cpu power
scaled for rt)
Where do you propose mixing rq->avg into that?
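(Purely illustrative, not a proposed implementation: a minimal sketch of the
ratio above, assuming the fields from this series.  The helper name and the
SCHED_POWER_SCALE fixed-point scaling are just for the example.)

static unsigned long cfs_load_per_power(struct rq *rq)
{
        /* CFS-only load: currently runnable plus blocked (decaying) load. */
        unsigned long load = rq->cfs.runnable_load_avg +
                             rq->cfs.blocked_load_avg;
        /* rq->cpu_power already has the RT utilisation scaled out. */
        unsigned long power = rq->cpu_power ? rq->cpu_power : 1;

        return (load * SCHED_POWER_SCALE) / power;
}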
On Tue, Apr 2, 2013 at 6:02 PM, Alex Shi <alex.shi@...el.com> wrote:
> On 04/02/2013 10:30 PM, Vincent Guittot wrote:
>> On 30 March 2013 15:34, Alex Shi <alex.shi@...el.com> wrote:
>>> The old code counted the runnable avg against the rq's nr_running even when
>>> there were only RT tasks on the rq. That is incorrect, so correct it to use
>>> cfs_rq's nr_running.
>>>
>>> Signed-off-by: Alex Shi <alex.shi@...el.com>
>>> ---
>>> kernel/sched/fair.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 2881d42..026e959 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -2829,7 +2829,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>> }
>>>
>>> if (!se) {
>>> - update_rq_runnable_avg(rq, rq->nr_running);
>>> + update_rq_runnable_avg(rq, rq->cfs.nr_running);
>>
>> An RT task that preempts your CFS task will be accounted in the
>> runnable_avg fields. So whatever you do, RT tasks will impact your
>> runnable_avg statistics. Instead of trying to count only CFS tasks, you
>> should take into account all task activity in the rq.
>
> Thanks for comments, Vincent!
>
> Yes, I know some RT task time is counted into the CFS average, but right now
> we have no good way to remove it cleanly. So I just want a slightly more
> precise CFS runnable load here.
> On the other side, the periodic load balancer balances on the combined CFS/RT
> load, but removes the RT utilisation from cpu_power.
>
> So, PJT, Peter, what's your idea of this point?
>>
>> Vincent
>>> inc_nr_running(rq);
>>> }
>>> hrtick_update(rq);
>>> --
>>> 1.7.12
>>>
>
>
> --
> Thanks Alex