Message-ID: <20131014133638.GA26319@gmail.com>
Date: Mon, 14 Oct 2013 15:36:38 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Juri Lelli <juri.lelli@...il.com>, tglx@...utronix.de,
mingo@...hat.com, rostedt@...dmis.org, oleg@...hat.com,
fweisbec@...il.com, darren@...art.com, johan.eker@...csson.com,
p.faure@...tech.ch, linux-kernel@...r.kernel.org,
claudio@...dence.eu.com, michael@...rulasolutions.com,
fchecconi@...il.com, tommaso.cucinotta@...up.it,
nicola.manica@...i.unitn.it, luca.abeni@...tn.it,
dhaval.giani@...il.com, hgu1972@...il.com,
paulmck@...ux.vnet.ibm.com, raistlin@...ux.it,
insop.song@...il.com, liming.wang@...driver.com, jkacur@...hat.com,
harald.gustafsson@...csson.com, vincent.guittot@...aro.org,
bruce.ashfield@...driver.com
Subject: Re: [PATCH 00/14] sched: SCHED_DEADLINE v8

* Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, Oct 14, 2013 at 02:38:55PM +0200, Ingo Molnar wrote:
>
> > > [...] the only 'issue' I have is the cgroup abi muck. We clearly
> > > need a bit more discussion on what/how we want things there but
> > > there are no easy answers :/ So I'd say lets try this and see where
> > > we'll find ourselves.
> >
> > I'd suggest we leave out the cgroup ABI muck from the first round of
> > upstream merge - do it in a second round, that will give it more
> > attention.
>
> I'm afraid that'd give rise to some very weird situations for people
> using cgroups :/

Why? One solution would be to simply not offer bandwidth management
initially, and use some sane default instead.

Yes, this wouldn't offer "true" deadline scheduling yet, but it would
allow us to move most of the code upstream without any ABI changes
initially (other than adding the SCHED_DEADLINE policy and such).

Thanks,
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/