Message-ID: <53747EE4.3020605@linux.vnet.ibm.com>
Date: Thu, 15 May 2014 16:46:28 +0800
From: Michael wang <wangyun@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Rik van Riel <riel@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>, Mike Galbraith <efault@....de>,
Alex Shi <alex.shi@...aro.org>, Paul Turner <pjt@...gle.com>,
Mel Gorman <mgorman@...e.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>
Subject: Re: [ISSUE] sched/cgroup: Does cpu-cgroup still works fine nowadays?
On 05/15/2014 04:35 PM, Peter Zijlstra wrote:
> On Thu, May 15, 2014 at 11:46:06AM +0800, Michael wang wrote:
>> But for the dbench/stress combination, that's not spin-wasted; dbench
>> throughput really did drop. How could we explain that one?
>
> I've no clue what dbench does.. At this point you'll have to
> expose/trace the per-task runtime accounting for these tasks and ideally
> also the things the cgroup code does with them to see if it still makes
> sense.
I see :)
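As a starting point, the per-task runtime accounting is already exposed in
/proc/<pid>/sched when CONFIG_SCHED_DEBUG is enabled, so something like the
sketch below can be used before resorting to full tracing (the grep pattern
just picks out the CFS debug fields; nothing here is dbench-specific):

```shell
# Sketch: dump the per-task CFS accounting for a task of interest.
# Requires CONFIG_SCHED_DEBUG; falls back gracefully when it is absent.
pid=$$   # placeholder: substitute the dbench/stress task's PID
if [ -r "/proc/$pid/sched" ]; then
    grep -E 'sum_exec_runtime|vruntime|nr_switches' "/proc/$pid/sched"
else
    echo "CONFIG_SCHED_DEBUG not enabled: /proc/$pid/sched is missing"
fi
```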
BTW, one interesting thing we found during the dbench/stress testing
is that by doing:
echo 240000000 > /proc/sys/kernel/sched_latency_ns
echo NO_GENTLE_FAIR_SLEEPERS > /sys/kernel/debug/sched_features
that is, with sched_latency_ns increased around 10 times and
GENTLE_FAIR_SLEEPERS disabled, dbench got its CPU back.
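For anyone trying to reproduce this, the same two knobs with save/restore
could look like the sketch below; it assumes root and a debugfs mount at
/sys/kernel/debug:

```shell
# Sketch of the tuning above, with save/restore; assumes root and a
# debugfs mount at /sys/kernel/debug.
old_latency=$(cat /proc/sys/kernel/sched_latency_ns)

# ~10x the usual latency target, and no sleeper vruntime bonus.
echo 240000000 > /proc/sys/kernel/sched_latency_ns
echo NO_GENTLE_FAIR_SLEEPERS > /sys/kernel/debug/sched_features

# ... run the dbench/stress combination here ...

# Restore the previous settings.
echo "$old_latency" > /proc/sys/kernel/sched_latency_ns
echo GENTLE_FAIR_SLEEPERS > /sys/kernel/debug/sched_features
```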
However, when the group level is too deep, that doesn't work any more...
I'm not sure, but it seems like 'deep group level' and 'vruntime bonus for
sleeper' are the key points here; I will try to pin down the root cause after
more investigation. Thanks for the hints and suggestions, really helpful ;-)
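The 'deep group level' setup can be recreated along these lines; this is
a sketch assuming cgroup-v1 with the cpu controller mounted, and the mount
point, depth, and group names are illustrative only, not from the thread:

```shell
# Hypothetical reproduction of a deep cpu-cgroup hierarchy (cgroup-v1);
# mount point, depth, and group names are illustrative only.
base=/sys/fs/cgroup/cpu
mkdir -p "$base/l1/l2/l3/l4/l5"
# Move the current shell into the leaf group, then start the workload
# from it so dbench/stress inherit the deep group membership.
echo $$ > "$base/l1/l2/l3/l4/l5/tasks"
```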
Regards,
Michael Wang
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/