Message-ID: <1315830943.26517.36.camel@twins>
Date: Mon, 12 Sep 2011 14:35:43 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
Cc: Paul Turner <pjt@...gle.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Vladimir Davydov <vdavydov@...allels.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@...il.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Pavel Emelianov <xemul@...allels.com>
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs
 unpinned

On Mon, 2011-09-12 at 15:47 +0530, Srivatsa Vaddagiri wrote:
> * Peter Zijlstra <a.p.zijlstra@...llo.nl> [2011-09-09 14:31:02]:
>
> > > Machine : 16-cpus (2 Quad-core w/ HT enabled)
> > > Cgroups : 5 in number (C1-C5), each having {2, 2, 4, 8, 16} tasks respectively.
> > > Further, each task is placed in its own (sub-)cgroup with
> > > a capped usage of 50% CPU.
> >
> > So that's loads: {512,512}, {512,512}, {256,256,256,256}, {128,..} and {64,..}
>
> Yes, with the default shares of 1024 for each cgroup.
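
(For reference, those per-task loads are just each group's 1024 shares
split across its tasks; toy arithmetic below, a simplified sketch, not
the kernel's actual shares-distribution code:)

	#include <stdio.h>

	/* Per-task load = group shares / nr_tasks, with default shares of
	 * 1024 per group and one task per sub-cgroup.  Reproduces the
	 * {512,512}, {256,..}, {128,..}, {64,..} loads quoted above. */
	int main(void)
	{
		int nr_tasks[] = { 2, 2, 4, 8, 16 };
		int i;

		for (i = 0; i < 5; i++)
			printf("C%d: %2d tasks, per-task load %4d\n",
			       i + 1, nr_tasks[i], 1024 / nr_tasks[i]);
		return 0;
	}
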
>
> FWIW we did also try setting shares for each cgroup proportional to the
> number of tasks it has. For example: C1's shares = 1024 * 2 = 2048,
> C2 = 1024 * 2 = 2048, C3 = 4 * 1024 = 4096, etc., while /C1/C1_1,
> /C1/C1_2, .../C5/C5_16/ shares were left at the default of 1024 (as those
> sub-cgroups contain only one task).
>
> That does help reduce idle time by almost 50% (from 15-20% -> 6-9%)

Of course it does... and I bet you can improve that slightly if you
manage to fix some of the numerical nightmares that live in the cgroup
load-balancer (Paul, care to share your WIP?)
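
FWIW the reason that helps: shares proportional to task count make every
task's effective weight equal (shares / nr_tasks = 1024 for all 32
tasks), and 32 equal-weight, 50%-capped tasks pack cleanly two to a CPU.
Same toy arithmetic as above (not the kernel's code):

	#include <stdio.h>

	/* With group shares = 1024 * nr_tasks and one task per sub-cgroup,
	 * each task's effective weight collapses to 1024: a uniform, and
	 * therefore feasible, distribution. */
	int main(void)
	{
		int nr_tasks[] = { 2, 2, 4, 8, 16 };
		int i, shares;

		for (i = 0; i < 5; i++) {
			shares = 1024 * nr_tasks[i];
			printf("C%d: shares %5d, per-task weight %4d\n",
			       i + 1, shares, shares / nr_tasks[i]);
		}
		return 0;
	}
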
But the initial scenario is a complete and utter fail; it's impossible to
schedule that sanely. It's an infeasible weight scenario with more tasks
than CPUs, and the added bandwidth constraints just keep changing the
runnable set, requiring endless migrations to try and keep utilization
from tanking.
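
To put numbers on the infeasibility (toy arithmetic, using the
default-shares weights from above):

	#include <stdio.h>

	/* Compare each task's weight-proportional entitlement, w/W * NCPUS,
	 * against the 50% cap.  A C1/C2 task's weight asks for 1.6 CPUs
	 * (more than a single CPU can give, let alone the cap), while the
	 * total capped demand, 32 * 0.5 = 16 CPUs, exactly fills the
	 * machine, so any placement error shows up directly as idle time. */
	#define NCPUS 16

	int main(void)
	{
		int nr_tasks[] = { 2, 2, 4, 8, 16 };
		int weight[]   = { 512, 512, 256, 128, 64 };
		int i, total = 0;
		double W = 0.0;

		for (i = 0; i < 5; i++) {
			W += nr_tasks[i] * weight[i];
			total += nr_tasks[i];
		}
		for (i = 0; i < 5; i++)
			printf("C%d task: weight wants %.2f CPUs, cap allows 0.50\n",
			       i + 1, weight[i] / W * NCPUS);
		printf("%d tasks * 0.5 CPU = %.1f CPUs of demand on %d CPUs\n",
		       total, total * 0.5, NCPUS);
		return 0;
	}

The weights say a C1 task deserves 8x a C5 task's time, but the caps
flatten everyone to at most half a CPU, so weight and bandwidth keep
fighting each other.
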
Really, classic fail.