Message-ID: <d8a3d533-84df-bfc7-96ae-790141be4926@infradead.org>
Date: Mon, 6 Aug 2018 09:50:20 -0700
From: Randy Dunlap <rdunlap@...radead.org>
To: Patrick Bellasi <patrick.bellasi@....com>,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org
Cc: Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Tejun Heo <tj@...nel.org>,
"Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
Viresh Kumar <viresh.kumar@...aro.org>,
Vincent Guittot <vincent.guittot@...aro.org>,
Paul Turner <pjt@...gle.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Morten Rasmussen <morten.rasmussen@....com>,
Juri Lelli <juri.lelli@...hat.com>,
Todd Kjos <tkjos@...gle.com>,
Joel Fernandes <joelaf@...gle.com>,
Steve Muckle <smuckle@...gle.com>,
Suren Baghdasaryan <surenb@...gle.com>
Subject: Re: [PATCH v3 01/14] sched/core: uclamp: extend sched_setattr to
support utilization clamping

Hi,

On 08/06/2018 09:39 AM, Patrick Bellasi wrote:
> diff --git a/init/Kconfig b/init/Kconfig
> index 041f3a022122..1d45a6877d6f 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -583,6 +583,25 @@ config HAVE_UNSTABLE_SCHED_CLOCK
> config GENERIC_SCHED_CLOCK
> bool
>
> +menu "Scheduler features"
> +
> +config UCLAMP_TASK
> + bool "Enable utilization clamping for RT/FAIR tasks"
> + depends on CPU_FREQ_GOV_SCHEDUTIL
> + default false
"false" isn't a valid Kconfig value, so this should be

	default n

but just omit the line completely, since "n" is already the default
for a bool symbol.
> + help
> + This feature enables the scheduler to track the clamped utilization
> + of each CPU based on RUNNABLE tasks currently scheduled on that CPU.
> +
> + When this option is enabled, the user can specify a min and max CPU
> + bandwidth which is allowed for a task.
> + The max bandwidth allows to clamp the maximum frequency a task can
> + use, while the min bandwidth allows to define a minimum frequency a
> + task will always use.
Please clean up the indentation above to use one tab + 2 spaces on all
lines of the help text (see the cleaned-up entry sketched below).
> +
> + If in doubt, say N.
> +
> +endmenu
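
Putting both of those together, the whole entry would then look
something like this (just an untested sketch, with the "default" line
dropped and the help text indented one tab + 2 spaces):

config UCLAMP_TASK
	bool "Enable utilization clamping for RT/FAIR tasks"
	depends on CPU_FREQ_GOV_SCHEDUTIL
	help
	  This feature enables the scheduler to track the clamped utilization
	  of each CPU based on RUNNABLE tasks currently scheduled on that CPU.

	  When this option is enabled, the user can specify a min and max CPU
	  bandwidth which is allowed for a task.
	  The max bandwidth allows to clamp the maximum frequency a task can
	  use, while the min bandwidth allows to define a minimum frequency a
	  task will always use.

	  If in doubt, say N.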
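
For anyone wanting to poke at this from userspace: since the series
extends sched_setattr(), usage should look roughly like the sketch
below. Caveat: the struct layout, flag values, and field names
(sched_util_min/sched_util_max, SCHED_FLAG_UTIL_CLAMP_*) are my
assumption of where the ABI ends up and may not match this v3 posting
exactly. There is no glibc wrapper, so it goes through syscall(2):

/* Untested sketch: set per-task utilization clamps on self. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_NORMAL
#define SCHED_NORMAL	0
#endif

/* Assumed flag values; the real ones come from the uapi headers. */
#ifndef SCHED_FLAG_UTIL_CLAMP
#define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
#define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
#define SCHED_FLAG_UTIL_CLAMP	(SCHED_FLAG_UTIL_CLAMP_MIN | \
				 SCHED_FLAG_UTIL_CLAMP_MAX)
#endif

/* Assumed extended sched_attr: the two util fields are what this
 * series adds at the end of the existing uapi struct.
 */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;	/* added: 0..1024 capacity scale */
	uint32_t sched_util_max;	/* added: 0..1024 capacity scale */
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy = SCHED_NORMAL;
	attr.sched_flags = SCHED_FLAG_UTIL_CLAMP;
	attr.sched_util_min = 128;	/* never below ~12.5% capacity */
	attr.sched_util_max = 512;	/* never request more than 50% */

	/* pid 0 == calling task; final 0 == no extra flags */
	return syscall(__NR_sched_setattr, 0, &attr, 0);
}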
thanks,
--
~Randy