Message-ID: <1290167376.2109.1553.camel@laptop>
Date: Fri, 19 Nov 2010 12:49:36 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Samuel Thibault <samuel.thibault@...-lyon.org>
Cc: Mike Galbraith <efault@....de>, Hans-Peter Jansen <hpj@...la.net>,
linux-kernel@...r.kernel.org,
Lennart Poettering <mzxreary@...inter.de>,
Linus Torvalds <torvalds@...ux-foundation.org>, david@...g.hm,
Dhaval Giani <dhaval.giani@...il.com>,
Vivek Goyal <vgoyal@...hat.com>,
Oleg Nesterov <oleg@...hat.com>,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Ingo Molnar <mingo@...e.hu>,
Balbir Singh <balbir@...ux.vnet.ibm.com>
Subject: Re: [RFC/RFT PATCH v3] sched: automated per tty task groups
On Fri, 2010-11-19 at 00:43 +0100, Samuel Thibault wrote:
> What overhead? The implementation of cgroups is actually already
> hierarchical.
It must be nice to be that ignorant ;-) Speaking for the scheduler
cgroup controller (that being the only one I actually know), almost all
the load-balance operations are O(n) in the number of active cgroups,
and a lot of the cpu-local scheduling operations are O(d), where d is
the depth of the cgroup tree.
[ and that's with the .38 targeted code, current mainline is O(n ln(n))
for load balancing and truly sucks on multi-socket ]
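To make those costs concrete, here is a tiny standalone model (plain
userspace C with invented names like struct grp, pick_next and
balance_pass; it is a sketch of the shape of the work, not the actual
scheduler code): a per-cpu pick descends one nested runqueue per level
of the cgroup tree, while a balance pass has to look at every active
group.

/*
 * Standalone sketch (not kernel code) of the two costs described above:
 * picking the next task walks one level of nested runqueue per cgroup
 * depth (O(d)), and a load-balance pass visits every active group (O(n)).
 * All names here are invented for illustration.
 */
#include <stdio.h>

struct grp {
	const char *name;
	struct grp *child;	/* next level down, NULL for a leaf with tasks */
};

/* O(d): one pointer chase per level of the cgroup tree on every pick. */
static struct grp *pick_next(struct grp *root, int *levels)
{
	struct grp *g = root;

	*levels = 0;
	while (g->child) {
		g = g->child;	/* descend one nested runqueue */
		(*levels)++;
	}
	return g;
}

/* O(n): a balance pass examines every active group on the cpu. */
static void balance_pass(struct grp **active, int n)
{
	for (int i = 0; i < n; i++)
		printf("  examining group %s\n", active[i]->name);
}

int main(void)
{
	struct grp leaf = { "user/tty1", NULL };
	struct grp mid  = { "user",      &leaf };
	struct grp root = { "root",      &mid  };
	struct grp *active[] = { &root, &mid, &leaf };
	int d;

	pick_next(&root, &d);
	printf("pick walked %d levels (O(depth))\n", d);
	balance_pass(active, 3);
	return 0;
}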
You add a lot of pointer chasing to all the scheduler fast paths, and
there is quite significant data-size bloat from even compiling with the
controller enabled, let alone actually using the stuff.
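For a rough sense of where that bloat comes from, a back-of-the-envelope
sketch (the struct sizes below are assumptions picked purely for
illustration, not the real layouts): each group needs per-cpu runqueue
and scheduling-entity state, so the footprint grows multiplicatively
with the number of groups and cpus.

/*
 * Back-of-the-envelope sketch only: sizes are made up for illustration,
 * the point is the nr_groups * nr_cpus scaling of per-group state.
 */
#include <stdio.h>

#define SIZEOF_CFS_RQ   256	/* assumed rough size, illustration only */
#define SIZEOF_SCHED_SE 128	/* assumed rough size, illustration only */

int main(void)
{
	int nr_cpus = 8, nr_groups = 100;
	long per_group = (long)nr_cpus * (SIZEOF_CFS_RQ + SIZEOF_SCHED_SE);

	printf("%d groups x %d cpus -> ~%ld KiB of per-group scheduler state\n",
	       nr_groups, nr_cpus, nr_groups * per_group / 1024);
	return 0;
}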
But sure, treat them as if they were free to use; I guess your machine
is fast enough.
--