Message-ID: <1287047996.29097.173.camel@twins>
Date: Thu, 14 Oct 2010 11:19:56 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: bharata@...ux.vnet.ibm.com
Cc: linux-kernel@...r.kernel.org,
Dhaval Giani <dhaval.giani@...il.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Pavel Emelyanov <xemul@...nvz.org>,
Herbert Poetzl <herbert@...hfloor.at>,
Avi Kivity <avi@...hat.com>,
Chris Friesen <cfriesen@...tel.com>,
Paul Menage <menage@...gle.com>,
Mike Waychison <mikew@...gle.com>,
Paul Turner <pjt@...gle.com>, Nikhil Rao <ncrao@...gle.com>
Subject: Re: [PATCH v3 2/7] sched: accumulate per-cfs_rq cpu usage
On Tue, 2010-10-12 at 13:21 +0530, Bharata B Rao wrote:
> +#ifdef CONFIG_CFS_BANDWIDTH
> + {
> + .procname = "sched_cfs_bandwidth_slice_us",
> + .data = &sysctl_sched_cfs_bandwidth_slice,
> + .maxlen = sizeof(unsigned int),
> + .mode = 0644,
> + .proc_handler = proc_dointvec_minmax,
> + .extra1 = &one,
> + },
> +#endif
So this is basically your scalability knob: the larger this value, the
less frequently we have to access global state, but the less parallelism
is possible, because fewer CPUs can deplete the total quota, leaving
nothing for the others.
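To make the trade-off concrete, here is a minimal userspace sketch (not
the actual kernel code; the cfs_pool/grab_slice names and the 5ms default
are made up for illustration) of each CPU carving a slice of
sched_cfs_bandwidth_slice_us out of the group's global quota pool:

/* Illustrative userspace sketch only -- not the kernel implementation.
 * cfs_pool and grab_slice are hypothetical names for this example. */
#include <pthread.h>
#include <stdio.h>

static unsigned int sysctl_sched_cfs_bandwidth_slice = 5000; /* us, example default */

struct cfs_pool {
	pthread_mutex_t lock;
	long long runtime_remaining;	/* global quota left this period, in us */
};

/* Each CPU calls this when its local slice runs out: it takes the global
 * lock and carves off up to one slice of the remaining group quota. */
static long long grab_slice(struct cfs_pool *pool)
{
	long long slice = sysctl_sched_cfs_bandwidth_slice;
	long long got = 0;

	pthread_mutex_lock(&pool->lock);
	if (pool->runtime_remaining > 0) {
		got = pool->runtime_remaining < slice ?
		      pool->runtime_remaining : slice;
		pool->runtime_remaining -= got;
	}
	pthread_mutex_unlock(&pool->lock);
	return got;	/* 0 means this CPU must throttle the group */
}

int main(void)
{
	struct cfs_pool pool = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.runtime_remaining = 20000,	/* 20ms of quota this period */
	};

	/* With a 5ms slice, only four grabs succeed per period; a bigger
	 * slice means even fewer CPUs get to run before the pool is dry. */
	for (int cpu = 0; cpu < 6; cpu++)
		printf("cpu%d got %lld us\n", cpu, grab_slice(&pool));
	return 0;
}

With a 20ms global quota and a 5ms slice, only four CPUs obtain runtime
before the pool is empty; shrinking the slice lets more CPUs run, at the
cost of more frequent trips to (and contention on) the shared pool.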
I guess one could go try and play load-balancer games to try and
mitigate this by pulling this group's tasks to the CPU(s) that have more
bandwidth left for that group, but balancing that against the regular
load-balancer goal of spreading load evenly will undoubtedly be
'interesting'...
--