Message-Id: <20231201161652.1241695-2-vincent.guittot@linaro.org>
Date: Fri, 1 Dec 2023 17:16:51 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
	dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
	mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com,
	corbet@....net, alexs@...nel.org, siyanteng@...ngson.cn,
	qyousef@...alina.io, linux-kernel@...r.kernel.org,
	linux-doc@...r.kernel.org
Cc: lukasz.luba@....com, hongyan.xia2@....com, yizhou.tang@...pee.com,
	Vincent Guittot <vincent.guittot@...aro.org>
Subject: [PATCH v2 1/2] sched/fair: Remove SCHED_FEAT(UTIL_EST_FASTUP, true)

sched_feat(UTIL_EST_FASTUP) has been added to easily disable the feature
in order to check for possibly related regressions. After 3 years, it has
never been used and no regression has been reported. Let's remove it and
make fast increase a permanent behavior.

Signed-off-by: Vincent Guittot <vincent.guittot@...aro.org>
Reviewed-and-tested-by: Lukasz Luba <lukasz.luba@....com>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
Reviewed-by: Hongyan Xia <hongyan.xia2@....com>
Reviewed-by: Tang Yizhou <yizhou.tang@...pee.com>
---
 Documentation/scheduler/schedutil.rst                    | 7 +++----
 Documentation/translations/zh_CN/scheduler/schedutil.rst | 7 +++----
 kernel/sched/fair.c                                      | 8 +++-----
 kernel/sched/features.h                                  | 1 -
 4 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/Documentation/scheduler/schedutil.rst b/Documentation/scheduler/schedutil.rst
index 32c7d69fc86c..803fba8fc714 100644
--- a/Documentation/scheduler/schedutil.rst
+++ b/Documentation/scheduler/schedutil.rst
@@ -90,8 +90,8 @@ For more detail see:
 
 - Documentation/scheduler/sched-capacity.rst:"1. CPU Capacity + 2. Task utilization"
 
-UTIL_EST / UTIL_EST_FASTUP
-==========================
+UTIL_EST
+========
 
 Because periodic tasks have their averages decayed while they sleep, even
 though when running their expected utilization will be the same, they suffer a
@@ -99,8 +99,7 @@ though when running their expected utilization will be the same, they suffer a
 
 To alleviate this (a default enabled option) UTIL_EST drives an Infinite
 Impulse Response (IIR) EWMA with the 'running' value on dequeue -- when it is
-highest. A further default enabled option UTIL_EST_FASTUP modifies the IIR
-filter to instantly increase and only decay on decrease.
+highest. UTIL_EST filters to instantly increase and only decay on decrease.
 
 A further runqueue wide sum (of runnable tasks) is maintained of:
 
diff --git a/Documentation/translations/zh_CN/scheduler/schedutil.rst b/Documentation/translations/zh_CN/scheduler/schedutil.rst
index d1ea68007520..7c8d87f21c42 100644
--- a/Documentation/translations/zh_CN/scheduler/schedutil.rst
+++ b/Documentation/translations/zh_CN/scheduler/schedutil.rst
@@ -89,16 +89,15 @@ r_cpu被定义为当前CPU的最高性能水平与系统中任何其它CPU的最
 
 - Documentation/translations/zh_CN/scheduler/sched-capacity.rst:"1. CPU Capacity + 2. Task utilization"
 
-UTIL_EST / UTIL_EST_FASTUP
-==========================
+UTIL_EST
+========
 
 由于周期性任务的平均数在睡眠时会衰减,而在运行时其预期利用率会和睡眠前相同,
 因此它们在再次运行后会面临(DVFS)的上涨。
 
 为了缓解这个问题,(一个默认使能的编译选项)UTIL_EST驱动一个无限脉冲响应
 (Infinite Impulse Response,IIR)的EWMA,“运行”值在出队时是最高的。
-另一个默认使能的编译选项UTIL_EST_FASTUP修改了IIR滤波器,使其允许立即增加,
-仅在利用率下降时衰减。
+UTIL_EST滤波使其在遇到更高值时立刻增加,而遇到低值时会缓慢衰减。
 
 进一步,运行队列的(可运行任务的)利用率之和由下式计算:
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bcea3d55d95d..e94d65da8d66 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4870,11 +4870,9 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
 	 * to smooth utilization decreases.
 	 */
 	ue.enqueued = task_util(p);
-	if (sched_feat(UTIL_EST_FASTUP)) {
-		if (ue.ewma < ue.enqueued) {
-			ue.ewma = ue.enqueued;
-			goto done;
-		}
+	if (ue.ewma < ue.enqueued) {
+		ue.ewma = ue.enqueued;
+		goto done;
 	}
 
 	/*
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index a3ddf84de430..143f55df890b 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -83,7 +83,6 @@ SCHED_FEAT(WA_BIAS, true)
  * UtilEstimation. Use estimated CPU utilization.
  */
 SCHED_FEAT(UTIL_EST, true)
-SCHED_FEAT(UTIL_EST_FASTUP, true)
 
 SCHED_FEAT(LATENCY_WARN, false)
 
-- 
2.34.1
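
As a quick illustration of the behavior this patch makes unconditional, the
sketch below mimics the fast-increase / slow-decay estimate described in the
schedutil.rst hunk: the estimate jumps straight to a higher observed
utilization and only decays through an IIR/EWMA step when the observation is
lower. This is a standalone, simplified sketch, not the kernel code; the real
logic lives in util_est_update() in kernel/sched/fair.c, and the 1/4 weight
used here is an assumed example coefficient, not necessarily the kernel's
exact constant.

/* Simplified, userspace sketch of UTIL_EST's fast-increase / slow-decay
 * update. Types, names and the EWMA weight are illustrative assumptions.
 */
#include <stdio.h>

#define EWMA_WEIGHT_SHIFT	2	/* assumed: new sample weighted 1/4 */

static unsigned long util_est_sketch(unsigned long ewma, unsigned long dequeued)
{
	if (ewma < dequeued)	/* fast up: adopt the higher value at once */
		return dequeued;

	/* slow down: decay towards the lower observed utilization */
	return ewma - ((ewma - dequeued) >> EWMA_WEIGHT_SHIFT);
}

int main(void)
{
	unsigned long est = 100;

	est = util_est_sketch(est, 400);	/* busy period: jumps to 400 */
	printf("after increase: %lu\n", est);

	est = util_est_sketch(est, 100);	/* lighter period: 400 - (300 >> 2) = 325 */
	printf("after decrease: %lu\n", est);

	return 0;
}

With UTIL_EST_FASTUP gone, the fair.c hunk above takes this fast-up path
unconditionally whenever the freshly observed utilization exceeds the EWMA.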