Message-ID: <1315931775.5977.29.camel@twins>
Date: Tue, 13 Sep 2011 18:36:15 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
Cc: Paul Turner <pjt@...gle.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Vladimir Davydov <vdavydov@...allels.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@...il.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Pavel Emelianov <xemul@...allels.com>
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs
unpinned
On Tue, 2011-09-13 at 21:51 +0530, Srivatsa Vaddagiri wrote:
> > I can't read it seems.. I thought you were talking about increasing the
> > period,
>
> Mm ..I brought up the increased lock contention with reference to this
> experimental result that I posted earlier:
>
> > Tuning min_interval and max_interval of various sched_domains to 1
> > and also setting sched_cfs_bandwidth_slice_us to 500 does cut down idle
> > time further to 2.7%
Yeah, that's the not being able to read part..
> Value of sched_cfs_bandwidth_slice_us was reduced from default of 5000us
> to 500us, which (along with reduction of min/max interval) helped cut down
> idle time further (3.9% -> 2.7%). I was commenting that this may not necessarily
> be optimal (as for example low 'sched_cfs_bandwidth_slice_us' could result
> in all cpus contending for cfs_b->lock very frequently).
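[Editorial sketch, not part of the original thread: the tuning described above touches two sets of knobs. The paths below assume a kernel of that era (~3.x) built with CONFIG_SCHED_DEBUG, and the values mirror the experiment quoted above.]

```shell
# Shrink the bandwidth slice handed out per refill (default 5000us -> 500us).
echo 500 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us

# Force the load-balancer to run at its most aggressive interval (1ms)
# on every sched_domain of every CPU.
for d in /proc/sys/kernel/sched_domain/cpu*/domain*; do
    echo 1 > "$d/min_interval"
    echo 1 > "$d/max_interval"
done
```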
Right.. so this seems to suggest you're migrating a lot.
Also what workload are we talking about? the insane one with 5 groups of
weight 1024?
Ramping up the frequency of the load-balancer and giving out smaller
slices is really anti-scalability.. I bet a lot of that 'reclaimed' idle
time is spent in system time.
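[Editorial sketch, not part of the original thread: a back-of-the-envelope estimate of why a smaller sched_cfs_bandwidth_slice_us drives up cfs_b->lock traffic. Each time a CPU's local runtime pool runs dry it takes the global lock to refill one slice, so refills per period scale roughly as quota / slice. The 100ms period and 50ms quota below are hypothetical values chosen for illustration.]

```python
def lock_acquisitions_per_period(quota_us: int, slice_us: int) -> int:
    """Rough upper bound on global cfs_b->lock refills per enforcement
    period: the quota is handed out slice_us at a time."""
    return quota_us // slice_us

# Default slice (5000us) vs. the tuned value (500us), for a hypothetical
# 100ms period with a 50ms quota: the tuned setup takes the global lock
# an order of magnitude more often.
default = lock_acquisitions_per_period(50_000, 5000)
tuned = lock_acquisitions_per_period(50_000, 500)
print(default, tuned)
```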
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/