Message-ID: <20170215113215.GU6500@twins.programming.kicks-ass.net>
Date: Wed, 15 Feb 2017 12:32:15 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Juri Lelli <juri.lelli@....com>
Cc: Luca Abeni <luca.abeni@...tannapisa.it>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Daniel Bristot de Oliveira <bristot@...hat.com>,
Tommaso Cucinotta <tommaso.cucinotta@...up.it>,
Mike Galbraith <efault@....de>,
Romulo Silva de Oliveira <romulo.deoliveira@...c.br>
Subject: Re: [PATCH 3/2] sched/deadline: Use deadline instead of period when
calculating overflow
On Wed, Feb 15, 2017 at 10:29:19AM +0000, Juri Lelli wrote:
> that we then decided not to propose since (note that these are just my
> memories of the discussion, so everything is up for further discussion,
> also in light of the problem highlighted by Daniel)
>
> - SCHED_DEADLINE, as the documentation says, does AC using utilization
> - it is however true that a sufficient (but not necessary) test on UP for
> D_i != P_i cases is the one in my patch above
> - we have agreed in the past that the kernel should only check that we
> don't cause "overload" in the system (which is still the case if we
> consider utilizations), not "hard schedulability"
> - also because on SMP systems "sum(WCET_i / min{D_i, P_i}) <= M"
> doesn't guarantee much more than the test based on P_i only (there
> don't seem to be many/any papers around considering the D_i != P_i
> case on SMP actually)
> - basically the patch above would only matter for the UP/partitioned
> cases
>
> Thoughts?
I think that this makes sense. Keep the kernel AC such that the working
set is 'recoverable'; I think that was the word Tommaso used last time.
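
To make the difference concrete, here is a rough standalone sketch of
the two per-CPU checks being discussed. This is purely illustrative,
not the in-kernel code; the struct and function names below are made up:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: per-task parameters in nanoseconds. */
struct dl_task {
	uint64_t runtime;	/* WCET_i */
	uint64_t deadline;	/* D_i */
	uint64_t period;	/* P_i */
};

/* Current AC: admit if sum(WCET_i / P_i) <= 1 on this CPU. */
static bool admit_by_utilization(const struct dl_task *t, int n)
{
	double sum = 0.0;

	for (int i = 0; i < n; i++)
		sum += (double)t[i].runtime / (double)t[i].period;

	return sum <= 1.0;
}

/*
 * Test from the patch above: admit if sum(WCET_i / min(D_i, P_i)) <= 1.
 * Sufficient but not necessary on UP when D_i != P_i.
 */
static bool admit_by_density(const struct dl_task *t, int n)
{
	double sum = 0.0;

	for (int i = 0; i < n; i++) {
		uint64_t d = t[i].deadline < t[i].period ?
			     t[i].deadline : t[i].period;
		sum += (double)t[i].runtime / (double)d;
	}

	return sum <= 1.0;
}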
I've been meaning to play with Tommaso's suggested AC for arbitrary
affinities but haven't managed to find time yet. My biggest attraction
to that is that it would allow de-coupling it from the root_domain
thingy and side-stepping the problems we currently have with that.