Message-ID: <20110913050306.GB7254@linux.vnet.ibm.com>
Date: Tue, 13 Sep 2011 10:33:06 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Paul Turner <pjt@...gle.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Vladimir Davydov <vdavydov@...allels.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@...il.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Pavel Emelianov <xemul@...allels.com>
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs
 unpinned

* Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com> [2011-09-13 09:45:45]:
> * Peter Zijlstra <a.p.zijlstra@...llo.nl> [2011-09-12 14:35:43]:
>
> > Of course it does.. and I bet you can improve that slightly if you
> > manage to fix some of the numerical nightmares that live in the cgroup
> > load-balancer (Paul, care to share your WIP?)
>
> Booting with "nohz=off" also helps significantly.
>
> With nohz=on, average idle time (over 1 min) is 10.3%
> With nohz=off, average idle time (over 1 min) is 3.9%
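
(For anyone wanting to reproduce the comparison: idle numbers like the ones
above can be approximated by sampling /proc/stat over a one-minute window.
The snippet below is only an illustrative sketch, not necessarily how the
figures quoted were gathered.)

#!/usr/bin/env python
# Hypothetical helper: compute average idle% across all cpus over 1 minute
# by diffing the aggregate "cpu" line of /proc/stat.
import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq ..."
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    return sum(fields), fields[3]          # total time, idle time

total0, idle0 = cpu_times()
time.sleep(60)                             # average over 1 minute
total1, idle1 = cpu_times()

print("average idle over 1 min: %.1f%%"
      % (100.0 * (idle1 - idle0) / (total1 - total0)))
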
Tuning min_interval and max_interval of the various sched_domains to 1 [a],
and also setting sched_cfs_bandwidth_slice_us to 500, cuts idle time down
further, to 2.7%.

This is perhaps not optimal (it may lead to more lock contention), but it is
something to note for those who care about both capping and utilization in
equal measure!
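
(FWIW, on a CONFIG_SCHED_DEBUG kernel of this vintage those knobs live under
/proc; something along the lines of the sketch below can apply the settings
mentioned above. Treat it as an illustration, not the exact commands used
here.)

#!/usr/bin/env python
# Illustrative sketch: push the tuning described above into the scheduler
# knobs exposed under /proc (sched_domain entries need CONFIG_SCHED_DEBUG).
import glob

def write_knob(path, value):
    # Each tunable is a plain text file; writing the new value is enough.
    with open(path, "w") as f:
        f.write("%d\n" % value)

# Set min_interval and max_interval of every sched_domain on every cpu to 1.
for knob in (glob.glob("/proc/sys/kernel/sched_domain/cpu*/domain*/min_interval") +
             glob.glob("/proc/sys/kernel/sched_domain/cpu*/domain*/max_interval")):
    write_knob(knob, 1)

# Shrink the CFS bandwidth slice handed out per cpu to 500us.
write_knob("/proc/sys/kernel/sched_cfs_bandwidth_slice_us", 500)
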
- vatsa