Message-Id: <1236155511.2567.41.camel@ymzhang>
Date: Wed, 04 Mar 2009 16:31:51 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mel Gorman <mel@....ul.ie>, Lin Ming <ming.m.lin@...el.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Linux Memory Management List <linux-mm@...ck.org>,
Rik van Riel <riel@...hat.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Christoph Lameter <cl@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Nick Piggin <npiggin@...e.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [RFC PATCH 00/19] Cleanup and optimise the page allocator V2
On Wed, 2009-03-04 at 08:23 +0100, Peter Zijlstra wrote:
> On Wed, 2009-03-04 at 10:05 +0800, Zhang, Yanmin wrote:
> > FAIR_GROUP_SCHED is a feature that supports configurable cpu weights for different users.
> > We did find it takes a lot of time to check/update the share weights, which can create
> > a lot of cache ping-pong. With sysbench(oltp)+mysql, it becomes more severe because
> > mysql runs as user mysql and sysbench runs as another regular user. When starting
> > the test with 1 thread on the command line, there are 2 mysql threads and 1 sysbench
> > thread active.
>
> cgroup based group scheduling doesn't bother with users. So unless you
> create sched-cgroups you should all be in the same (root) group.
I disabled CGROUP but enabled GROUP_SCHED and USER_SCHED. My config is inherited from old
config files:
CONFIG_GROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_RT_GROUP_SCHED is not set
CONFIG_USER_SCHED=y
# CONFIG_CGROUP_SCHED is not set
I checked the x86-64 defconfig of 2.6.28, and it does enable CGROUP and disable USER_SCHED.
Perhaps I need to change my latest config file to match the defaults for the sched options.
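A quick way to see which group-scheduling flavor a config enables is to grep the relevant options out of the config file. A minimal sketch, assuming a kernel source tree (the `sched_opts` helper name and the file paths are illustrative, not from this thread):

```shell
# List the group-scheduling options in a kernel config file, whether
# enabled (=y) or disabled ("is not set").
sched_opts() {
    grep -E '^(# )?CONFIG_(GROUP_SCHED|FAIR_GROUP_SCHED|RT_GROUP_SCHED|USER_SCHED|CGROUP_SCHED)(=| )' "$1"
}

# In a kernel tree, one could compare the current config against the
# shipped defconfig, e.g.:
#   sched_opts .config
#   sched_opts arch/x86/configs/x86_64_defconfig
```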