Message-ID: <514AD26F.9010905@intel.com>
Date: Thu, 21 Mar 2013 17:27:11 +0800
From: Alex Shi <alex.shi@...el.com>
To: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
CC: mingo@...hat.com, peterz@...radead.org, efault@....de,
torvalds@...ux-foundation.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
pjt@...gle.com, namhyung@...nel.org, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, morten.rasmussen@....com
Subject: Re: [patch v5 14/15] sched: power aware load balance
On 03/21/2013 04:41 PM, Preeti U Murthy wrote:
> Yes, I did observe this behaviour very consistently on a 2-socket,
> 8-core machine.
>
> rq->util cannot go back to 0 after it has begun accumulating load, right?
>
> Say a load was running on a runqueue whose rq->util was at 100%.
> After the load finishes, the runqueue goes idle. On every scheduler
> tick its utilisation decays, but it can never become 0.
>
> rq->util = rq->avg.runnable_avg_sum/rq->avg.runnable_avg_period
Did you close all background system services?
In theory, rq->avg.runnable_avg_sum should decay to zero once the
runqueue has had no task for a while; otherwise there is a bug in the
kernel. Could you check the value under /proc/sched_debug?
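For reference, here is a minimal user-space sketch of that decay,
assuming the y^32 = 1/2 fixed-point scheme from the per-entity
load-tracking patches; the y constant and the 47742 saturation value
are taken from that design, while the helper name and the main()
harness are only illustrative:

/*
 * User-space sketch (not kernel code): the geometric decay applied to
 * rq->avg.runnable_avg_sum. Each ~1ms idle period the sum is scaled
 * by y, where y^32 = 1/2, in 32-bit fixed-point arithmetic.
 */
#include <stdio.h>
#include <stdint.h>

/* y in 0.32 fixed point: y = 0.5^(1/32), so y * 2^32 ~= 0xfa83b2da */
#define Y_FIXED 0xfa83b2daULL

static uint32_t decay_one_period(uint32_t sum)
{
	/* multiply by y, then drop the 32 fractional bits */
	return (uint32_t)(((uint64_t)sum * Y_FIXED) >> 32);
}

int main(void)
{
	uint32_t sum = 47742;	/* saturated sum (LOAD_AVG_MAX) */
	int periods = 0;

	while (sum > 0) {
		sum = decay_one_period(sum);
		periods++;
	}
	/*
	 * In integer arithmetic the sum truncates to exactly 0 after a
	 * finite number of idle periods (about 500, i.e. roughly half a
	 * second of idle time).
	 */
	printf("sum decayed to 0 after %d idle periods\n", periods);
	return 0;
}

So if rq->avg.runnable_avg_sum stays nonzero long after the runqueue
has gone idle, either something else is still running there or the
decay is broken.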
--
Thanks
Alex