Message-ID: <20200107101055.GX2844@hirez.programming.kicks-ass.net>
Date: Tue, 7 Jan 2020 11:10:55 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Qais Yousef <qais.yousef@....com>
Cc: Ingo Molnar <mingo@...nel.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Tejun Heo <tj@...nel.org>, surenb@...gle.com,
Patrick Bellasi <patrick.bellasi@...bug.net>,
Doug Smythies <dsmythies@...us.net>,
Juri Lelli <juri.lelli@...hat.com>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] sched/uclamp: Fix a bug in propagating uclamp value in
new cgroups
On Tue, Dec 24, 2019 at 11:54:04AM +0000, Qais Yousef wrote:
> When a new cgroup is created, the effective uclamp value wasn't updated
> with a call to cpu_util_update_eff(), which looks at the hierarchy and
> updates it to the most restrictive values.
>
> Fix it by ensuring cpu_util_update_eff() is called when a new cgroup
> becomes online.
>
> Without this change, the newly created cgroup uses the default
> root_task_group uclamp values, which is 1024 for both uclamp_{min, max}.
> This causes the rq to be clamped to max, hence causing the system to run
> at max frequency.
>
> The problem was observed on Ubuntu server and was reproduced on Debian
> and Buildroot rootfs.
>
> By default, Ubuntu and Debian create a cpu controller cgroup hierarchy
> and add all tasks to it - which creates enough noise to keep the rq
> uclamp value at max most of the time. Imitating this behavior makes the
> problem visible in Buildroot too, which otherwise looks fine since it's
> a minimal userspace.
>
> Reported-by: Doug Smythies <dsmythies@...us.net>
> Tested-by: Doug Smythies <dsmythies@...us.net>
> Fixes: 0b60ba2dd342 ("sched/uclamp: Propagate parent clamps")
> Link: https://lore.kernel.org/lkml/000701d5b965$361b6c60$a2524520$@net/
> Signed-off-by: Qais Yousef <qais.yousef@....com>
Thanks!
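
For reference, a minimal sketch of the kind of change the commit message
describes: hooking the effective-value propagation into a css_online
callback for the cpu controller, so a newly created group picks up its
parent's clamps instead of the 1024/1024 root_task_group defaults. The
callback name and placement are assumptions here; the actual diff isn't
quoted above.

static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
{
#ifdef CONFIG_UCLAMP_TASK_GROUP
	/*
	 * Walk the hierarchy and recompute the effective uclamp values
	 * for the new group, so it inherits the most restrictive clamps
	 * from its ancestors rather than the root defaults.
	 */
	cpu_util_update_eff(css);
#endif

	return 0;
}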