Open Source and information security mailing list archives
 
Message-Id: <1604632923-4243-1-git-send-email-xuewen.yan@unisoc.com>
Date:   Fri,  6 Nov 2020 11:22:03 +0800
From:   Xuewen Yan <xuewen.yan94@...il.com>
To:     mingo@...hat.com, peterz@...radead.org, juri.lelli@...hat.com,
        vincent.guittot@...aro.org
Cc:     dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        mgorman@...e.de, bristot@...hat.com, linux-kernel@...r.kernel.org,
        xuewen.yan@...soc.com, xuewyan@...mail.com
Subject: [PATCH v3] sched: revise the initial value of util_avg

According to the original code logic:

                cfs_rq->avg.util_avg
sa->util_avg  = -------------------- * se->load.weight
                cfs_rq->avg.load_avg

but for fair_sched_class on a 64-bit platform:

se->load.weight = 1024 * sched_prio_to_weight[prio];

                 cfs_rq->avg.util_avg
so even though  --------------------  is extremely small, the product
                 cfs_rq->avg.load_avg
still exceeds cap, and the clamp condition "sa->util_avg > cap" is
almost always triggered. This is unfair to tasks with a smaller nice
value.

Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
---
changes since V2:

 kernel/sched/fair.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e3..079760b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -794,7 +794,11 @@ void post_init_entity_util_avg(struct task_struct *p)

        if (cap > 0) {
                if (cfs_rq->avg.util_avg != 0) {
-                       sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
+                       if (p->sched_class == &fair_sched_class)
+                               sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
+                       else
+                               sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
+
                        sa->util_avg /= (cfs_rq->avg.load_avg + 1);

                        if (sa->util_avg > cap)
*
---
comment from Vincent Guittot <vincent.guittot@...aro.org>:
>
> According to the original code logic:
>                 cfs_rq->avg.util_avg
> sa->util_avg  = -------------------- * se->load.weight
>                 cfs_rq->avg.load_avg

this should have been scale_load_down(se->load.weight) from the beginning

> but for fair_sched_class:
> se->load.weight = 1024 * sched_prio_to_weight[prio];

This is only true on a 64-bit platform; otherwise scale_load() and
scale_load_down() are nops

>         cfs_rq->avg.util_avg
> so the  -------------------- must be extremely small, the
>         cfs_rq->avg.load_avg
> judgment condition "sa->util_avg < cap" could be established.
> It's not fair for those tasks who has smaller nice value.
>
> Signed-off-by: Xuewen Yan <xuewen.yan@...soc.com>
> ---
>  kernel/sched/fair.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e3..079760b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -794,7 +794,11 @@ void post_init_entity_util_avg(struct task_struct *p)
>
>         if (cap > 0) {
>                 if (cfs_rq->avg.util_avg != 0) {

We should now use cpu_util(), which takes the other classes into
account, instead of cfs_rq->avg.util_avg

> -                       sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
> +                       if (p->sched_class == &fair_sched_class)
> +                               sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
> +                       else
> +                               sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;

Why does this else keep using se->load.weight?

Either we use sa->util_avg = cfs_rq->avg.util_avg * se_weight(se);
for all classes,

or we want a different init value for the other classes. But in that
case se->load.weight is meaningless and we should simply set them to 0,
although we could probably compute a value based on bandwidth for the
deadline class.

---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e3..c6186cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -794,7 +794,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
-			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
+			sa->util_avg  = cfs_rq->avg.util_avg * se_weight(se);
 			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
 
 			if (sa->util_avg > cap)
-- 
1.9.1
