Message-ID: <20110912101722.GA28950@linux.vnet.ibm.com>
Date: Mon, 12 Sep 2011 15:47:22 +0530
From: Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Paul Turner <pjt@...gle.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Vladimir Davydov <vdavydov@...allels.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>,
Dhaval Giani <dhaval.giani@...il.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Pavel Emelianov <xemul@...allels.com>
Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs
unpinned

* Peter Zijlstra <a.p.zijlstra@...llo.nl> [2011-09-09 14:31:02]:
> > Machine : 16-cpus (2 Quad-core w/ HT enabled)
> > Cgroups : 5 in number (C1-C5), each having {2, 2, 4, 8, 16} tasks respectively.
> > Further, each task is placed in its own (sub-)cgroup with
> > a capped usage of 50% CPU.
>
> So that's loads: {512,512}, {512,512}, {256,256,256,256}, {128,..} and {64,..}
Yes, with the default shares of 1024 for each cgroup.
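To make the hierarchy concrete, here is a rough sketch of how such a setup can
be created through the cgroup v1 cpu controller. The /sys/fs/cgroup/cpu mount
point and the 100ms CFS period are assumptions for illustration, not
necessarily what we used:

# Rough sketch of the test hierarchy (cgroup v1 "cpu" controller).
# The mount point and the 100ms CFS period are assumptions.
import os

CPU_ROOT = "/sys/fs/cgroup/cpu"        # assumed mount point
PERIOD_US = 100000                     # assumed 100ms CFS period
QUOTA_US = PERIOD_US // 2              # 50% CPU cap per task sub-cgroup

ntasks = {"C1": 2, "C2": 2, "C3": 4, "C4": 8, "C5": 16}

for cg, n in ntasks.items():
    parent = os.path.join(CPU_ROOT, cg)
    os.makedirs(parent, exist_ok=True)      # parent stays at cpu.shares = 1024
    for i in range(1, n + 1):
        child = os.path.join(parent, "%s_%d" % (cg, i))
        os.makedirs(child, exist_ok=True)
        with open(os.path.join(child, "cpu.cfs_period_us"), "w") as f:
            f.write(str(PERIOD_US))
        with open(os.path.join(child, "cpu.cfs_quota_us"), "w") as f:
            f.write(str(QUOTA_US))
        # one task is then attached by writing its pid to <child>/tasks

With every parent cgroup at the default 1024 shares, split equally across its
single-task children (each also at the default 1024), the per-task loads work
out to 1024/2 = 512 for C1 and C2, 1024/4 = 256 for C3, 1024/8 = 128 for C4
and 1024/16 = 64 for C5, i.e. exactly the breakdown you list above.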
FWIW we also tried setting each cgroup's shares proportional to the number of
tasks it has. For example: C1's shares = 2 * 1024 = 2048, C2's = 2 * 1024 = 2048,
C3's = 4 * 1024 = 4096, etc., while the shares of /C1/C1_1, /C1/C1_2, .../C5/C5_16
were left at the default of 1024 (as each of those sub-cgroups contains only one
task).

That does help reduce idle time by almost 50% (from 15-20% down to 6-9%).
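The scaling itself is just a per-group tweak on top of the sketch above, along
these lines (same assumed mount point; the per-task child cgroups stay at the
default 1024):

# Scale each top-level cgroup's cpu.shares by its task count.
import os

CPU_ROOT = "/sys/fs/cgroup/cpu"        # assumed mount point

ntasks = {"C1": 2, "C2": 2, "C3": 4, "C4": 8, "C5": 16}

for cg, n in ntasks.items():
    with open(os.path.join(CPU_ROOT, cg, "cpu.shares"), "w") as f:
        f.write(str(1024 * n))         # C1/C2 -> 2048, C3 -> 4096, C5 -> 16384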
- vatsa