Message-ID: <3d3306e4-3a78-5322-df69-7665cf01cc43@arm.com>
Date: Thu, 5 Sep 2019 15:48:56 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Patrick Bellasi <patrick.bellasi@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Subhra Mazumdar <subhra.mazumdar@...cle.com>,
linux-kernel@...r.kernel.org, mingo@...hat.com, tglx@...utronix.de,
steven.sistare@...cle.com, dhaval.giani@...cle.com,
daniel.lezcano@...aro.org, vincent.guittot@...aro.org,
viresh.kumar@...aro.org, tim.c.chen@...ux.intel.com,
mgorman@...hsingularity.net, parth@...ux.ibm.com
Subject: Re: [RFC PATCH 1/9] sched,cgroup: Add interface for latency-nice
On 05/09/2019 14:07, Patrick Bellasi wrote:
> Yep, I think so far we are all converging towards the idea of using a
> signed range. Regarding the range itself, yes: 1024 looks very
> oversized, but +-20 is still something which leaves room for a bit of
> flexibility, and it also better matches the idea that we don't want to
> "enumerate behaviours" but just expose a knob. To map certain "biases"
> we could benefit from a slightly larger range.
>
>> AFAICT this RFC only looks at wakeups, but I guess latency-nice can be
>
> For the wakeup path there is also the TurboSched proposal by Parth:
>
> Message-ID: <20190725070857.6639-1-parth@...ux.ibm.com>
> https://lore.kernel.org/lkml/20190725070857.6639-1-parth@linux.ibm.com/
>
> which we should keep in mind.
>
>> applied elsewhere (e.g. load-balance, something like task_hot() and its
>> use of sysctl_sched_migration_cost).
>
> For LB, can you come up with a better description of which usages you
> think could benefit from a "per task" or "per task-group" latency niceness?
>
task_hot() "ratelimits" migrations of tasks that were running up until
very recently (hence "cache hot"), but the knob is system wide. It might
make sense to exploit a per-task attribute to tune this (e.g. go wild if
not latency sensitive, otherwise stay away for longer).
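
To make that a bit more concrete, a very rough sketch (this assumes a
p->latency_nice field in the [-20, 19] range discussed above, which of
course doesn't exist yet):

  /*
   * Hypothetical helper: scale the system-wide migration cost by the
   * task's latency nice. Latency-sensitive tasks (negative values) get
   * a bigger threshold and stay "cache hot" for longer; latency-tolerant
   * tasks (positive values) can be migrated almost immediately.
   */
  static inline u64 task_migration_cost(struct task_struct *p)
  {
          s64 cost = sysctl_sched_migration_cost;

          /* -20 doubles the threshold, +19 shrinks it to ~5% of it */
          cost -= (cost * p->latency_nice) / 20;

          return cost;
  }

and then task_hot() would compare against that instead of the raw
sysctl:

          delta = rq_clock_task(env->src_rq) - p->se.exec_start;

          return delta < (s64)task_migration_cost(p);
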
We could perhaps even apply it to active load balance to similarly stay
away from latency-sensitive tasks. Right now this is gated by a
sched_domain-wide attribute (nr_balance_failed). We could tweak this to
require more (fewer) failed attempts before interrupting latency-(in)sensitive
tasks.
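
Again just a sketch: in need_active_balance(), instead of the fixed
"nr_balance_failed > cache_nice_tries + 2" check, something like the
below (same hypothetical p->latency_nice, and glossing over locking/RCU
around the remote rq's ->curr):

          /*
           * 'curr' is the task that would be preempted by the active
           * balance. Require more failed attempts before kicking it if
           * it is latency sensitive, fewer if it is latency tolerant.
           */
          struct task_struct *curr = env->src_rq->curr;
          int tries = env->sd->cache_nice_tries + 2;

          if (curr->latency_nice < 0)
                  tries += -curr->latency_nice / 4; /* up to 5 extra tries */
          else
                  tries -= curr->latency_nice / 8;  /* up to 2 fewer tries */

          return unlikely(env->sd->nr_balance_failed > tries);
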
I'm sure we can come up with even more creative ways to pour even more
heuristics in there ;)
> Best,
> Patrick
>