Message-ID: <20110225102319.6c776838@putvin>
Date: Fri, 25 Feb 2011 10:23:19 -0800
From: Jacob Pan <jacob.jun.pan@...ux.intel.com>
To: Paul Menage <menage@...gle.com>
Cc: Matt Helsley <matthltc@...ibm.com>,
"Rafael J. Wysocki" <rjw@...k.pl>,
LKML <linux-kernel@...r.kernel.org>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Arjan van de Ven <arjan@...ux.intel.com>,
container cgroup <containers@...ts.linux-foundation.org>,
Li Zefan <lizf@...fujitsu.com>, akpm@...ux-foundation.org,
rdunlap@...otime.net, Cedric Le Goater <clg@...t.ibm.com>,
Linux PM mailing list <linux-pm@...ts.linux-foundation.org>
Subject: Re: [PATCH 1/1, v9] cgroup/freezer: add per freezer duty ratio
control
On Fri, 25 Feb 2011 09:53:59 -0800
Paul Menage <menage@...gle.com> wrote:
> On Thu, Feb 24, 2011 at 3:45 PM, jacob pan
> <jacob.jun.pan@...ux.intel.com> wrote:
> > I played with v3 and v4 of the CFS bandwidth patch. When the cpu
> > cgroup exceeds its cfs_quota, it has the same effect as this
> > patch in terms of freeze/thaw at a given period and allowed runtime.
> > But when the cgroup's cpu usage is below cfs_quota, it is not
> > throttled, so it cannot reduce wakeups.
>
> How about a userspace daemon that periodically flips the CPU quota for
> the cgroup between zero and the group's runnable level? Wouldn't that
> achieve what you need pretty easily without having to introduce
> additional complexity and threads into the kernel?
I think it should work, but with a little more overhead than doing
the same in the kernel. It would also need one periodic timer per
cgroup, which means two extra timer wakeups for each time slice the
group is allowed to run.
Thanks for the great suggestion. I will experiment with it.
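For reference, something along these lines is what I have in mind: a
rough sketch only, assuming the cpu.cfs_quota_us interface from the
CFS bandwidth patches and a made-up cgroup path. The interface may not
accept a literal zero quota, so a small minimum quota stands in for the
"frozen" half of the duty cycle, and the run/period numbers below are
placeholders.

/*
 * Userspace "quota flipper" sketch: alternate the cgroup's CFS quota
 * between an allowed runtime and a near-zero value to approximate the
 * freezer duty ratio from userspace.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CGROUP_PATH   "/sys/fs/cgroup/cpu/mygroup"  /* hypothetical group */
#define RUN_QUOTA_US  50000   /* allowed runtime per period */
#define MIN_QUOTA_US  1000    /* stand-in for "frozen"; 0 may be rejected */
#define PERIOD_US     100000  /* duty-cycle period */

static int write_quota(long quota_us)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/cpu.cfs_quota_us", CGROUP_PATH);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%ld", quota_us);
	fclose(f);
	return 0;
}

int main(void)
{
	for (;;) {
		/* "thawed" part of the cycle: let the group run */
		if (write_quota(RUN_QUOTA_US) < 0)
			perror("write_quota");
		usleep(RUN_QUOTA_US);

		/* "frozen" part: clamp the quota for the rest of the period */
		if (write_quota(MIN_QUOTA_US) < 0)
			perror("write_quota");
		usleep(PERIOD_US - RUN_QUOTA_US);
	}
	return 0;
}

This is the extra per-cgroup periodic wakeup I mentioned above; the
daemon itself wakes twice per period on top of whatever the group does.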