Message-ID: <20180820122728.GM2960@e110439-lin>
Date: Mon, 20 Aug 2018 13:27:28 +0100
From: Patrick Bellasi <patrick.bellasi@....com>
To: Dietmar Eggemann <dietmar.eggemann@....com>
Cc: linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Paul Turner <pjt@...gle.com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [PATCH v3 12/14] sched/core: uclamp: add system default clamps
On 20-Aug 12:18, Dietmar Eggemann wrote:
> On 08/06/2018 06:39 PM, Patrick Bellasi wrote:
> >Clamp values cannot be tuned at the root cgroup level. Moreover, because
> >of the delegation model requirements and how the propagation of parent
> >clamps works, if we want to enable subgroups to set a non-null
> >util.min, we need to be able to configure the root group util.min to
> >allow the maximum utilization (SCHED_CAPACITY_SCALE = 1024).
>
> Why 1024 (100%)? Would any non-zero value work here?
Something less than 100% will clamp the subgroups' util.min to that value.
If we want to allow subgroups the full span, the root group should not
enforce boundaries... hence its util.min should be set to 100%.
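To make it concrete, this is just a sketch of the propagation rule (the
helper name here is made up, it does not match the actual patch):

	/*
	 * A group's effective util.min can never exceed what its
	 * parent allows: the requested value is capped by the
	 * parent's effective one.
	 */
	static unsigned int effective_util_min(unsigned int requested,
					       unsigned int parent_effective)
	{
		return min(requested, parent_effective);
	}

With the root group at util.min=1024 (SCHED_CAPACITY_SCALE) a subgroup
asking for util.min=40 gets 40; with the root at util.min=20 the same
subgroup would be capped to 20.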
> [...]
>
> >@@ -1269,6 +1296,75 @@ static inline void uclamp_group_get(struct task_struct *p,
> > uclamp_group_put(clamp_id, prev_group_id);
> > }
> >+int sched_uclamp_handler(struct ctl_table *table, int write,
> >+ void __user *buffer, size_t *lenp,
> >+ loff_t *ppos)
> >+{
> >+ int group_id[UCLAMP_CNT] = { UCLAMP_NOT_VALID };
> >+ struct uclamp_se *uc_se;
> >+ int old_min, old_max;
> >+ int result;
> >+
> >+ mutex_lock(&uclamp_mutex);
> >+
> >+ old_min = sysctl_sched_uclamp_util_min;
> >+ old_max = sysctl_sched_uclamp_util_max;
> >+
> >+ result = proc_dointvec(table, write, buffer, lenp, ppos);
> >+ if (result)
> >+ goto undo;
> >+ if (!write)
> >+ goto done;
> >+
> >+ if (sysctl_sched_uclamp_util_min > sysctl_sched_uclamp_util_max)
> >+ goto undo;
> >+ if (sysctl_sched_uclamp_util_max > 1024)
> >+ goto undo;
> >+
> >+ /* Find a valid group_id for each required clamp value */
> >+ if (old_min != sysctl_sched_uclamp_util_min) {
> >+ result = uclamp_group_find(UCLAMP_MIN, sysctl_sched_uclamp_util_min);
> >+ if (result == -ENOSPC) {
> >+ pr_err("Cannot allocate more than %d UTIL_MIN clamp groups\n",
> >+ CONFIG_UCLAMP_GROUPS_COUNT);
> >+ goto undo;
> >+ }
> >+ group_id[UCLAMP_MIN] = result;
> >+ }
> >+ if (old_max != sysctl_sched_uclamp_util_max) {
> >+ result = uclamp_group_find(UCLAMP_MAX, sysctl_sched_uclamp_util_max);
> >+ if (result == -ENOSPC) {
> >+ pr_err("Cannot allocate more than %d UTIL_MAX clamp groups\n",
> >+ CONFIG_UCLAMP_GROUPS_COUNT);
> >+ goto undo;
> >+ }
> >+ group_id[UCLAMP_MAX] = result;
> >+ }
> >+
> >+ /* Update each required clamp group */
> >+ if (old_min != sysctl_sched_uclamp_util_min) {
> >+ uc_se = &uclamp_default[UCLAMP_MIN];
> >+ uclamp_group_get(NULL, UCLAMP_MIN, group_id[UCLAMP_MIN],
> >+ uc_se, sysctl_sched_uclamp_util_min);
> >+ }
> >+ if (old_max != sysctl_sched_uclamp_util_max) {
> >+ uc_se = &uclamp_default[UCLAMP_MAX];
> >+ uclamp_group_get(NULL, UCLAMP_MAX, group_id[UCLAMP_MAX],
> >+ uc_se, sysctl_sched_uclamp_util_max);
> >+ }
> >+
> >+ if (result) {
> >+undo:
> >+ sysctl_sched_uclamp_util_min = old_min;
> >+ sysctl_sched_uclamp_util_max = old_max;
> >+ }
>
> This looks strange! In case uclamp_group_find() returns free_group_id
> instead of -ENOSPC, the sysctl min/max values are reset?
>
> I was under the assumption that I could specify:
>
> sysctl_sched_uclamp_util_min = 40 (for boosting)
> sysctl_sched_uclamp_util_max = 80 (for clamping)
>
> with an empty cpu controller hierarchy and then those values become the
> .effective values of (a first level) task group?
You're right, I forgot to reset result to 0 once we have passed the two
uclamp_group_find() calls. Will fix in v4.
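Something along these lines, i.e. resetting result once both lookups
have succeeded (just a sketch, the actual v4 code could differ):

	/* Find a valid group_id for each required clamp value */
	if (old_min != sysctl_sched_uclamp_util_min) {
		...
		group_id[UCLAMP_MIN] = result;
	}
	if (old_max != sysctl_sched_uclamp_util_max) {
		...
		group_id[UCLAMP_MAX] = result;
	}

	/*
	 * uclamp_group_find() returns a valid (non negative) group_id
	 * on success: reset result so that the final "if (result)"
	 * check does not mistake it for an error code and roll back
	 * the new sysctl values.
	 */
	result = 0;

	/* Update each required clamp group */
	...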
Cheers, Patrick
--
#include <best/regards.h>
Patrick Bellasi