Message-ID: <20150617190402.GD1244@intel.com>
Date:	Thu, 18 Jun 2015 03:04:02 +0800
From:	Yuyang Du <yuyang.du@...el.com>
To:	Boqun Feng <boqun.feng@...il.com>
Cc:	mingo@...nel.org, peterz@...radead.org,
	linux-kernel@...r.kernel.org, pjt@...gle.com, bsegall@...gle.com,
	morten.rasmussen@....com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, len.brown@...el.com,
	rafael.j.wysocki@...el.com, fengguang.wu@...el.com,
	srikar@...ux.vnet.ibm.com
Subject: Re: [Resend PATCH v8 0/4] sched: Rewrite runnable load and
 utilization average tracking

On Wed, Jun 17, 2015 at 09:06:17PM +0800, Boqun Feng wrote:
> 
> > So the problem is:
> > 
> > 1) The tasks in the workload have too small a weight (only 79), because
> >    they share a task group.
> > 
> > 2) Some "high" weight tasks, even though runnable for only a short time,
> >    contribute a "big" amount to the cfs_rq's load_avg.
> 
> Thank you for your analysis.
> 
> Some updates:
> 
> I created a task group /g and set /g/cpu.shares to 13312 (1024 * 13),
> and then ran `stress --cpu 12` and `dbench 1` simultaneously in that
> group. The situation is much better: only one CPU is not fully loaded,
> and its utilization stays around 85%.
> 

Hi,

That is good. Alternatively, you can disable autogroup, "renice" the autogroup,
or exec dbench from another shell, etc.
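
For reference, here is the rough share arithmetic behind the "79" above and
the cpu.shares = 13312 choice, as a standalone sketch (the 1024 autogroup
default and NICE_0_LOAD == 1024 are assumptions for illustration, not taken
from the patchset):

/* share_math.c - illustrative only; the constants are assumptions */
#include <stdio.h>

int main(void)
{
	const unsigned long nice_0_load = 1024;	/* assumed NICE_0_LOAD */
	const unsigned long nr_tasks = 13;	/* 12 stress + 1 dbench */

	/* Default autogroup: 1024 shares split among 13 runnable tasks. */
	printf("per-task weight under autogroup: ~%lu\n", 1024UL / nr_tasks);

	/* Boqun's group: cpu.shares = 13312 = 13 * 1024. */
	printf("per-task weight with cpu.shares=13312: ~%lu\n",
	       13 * nice_0_load / nr_tasks);

	return 0;
}

Roughly 1024 / 13 ~= 79 per task under the default autogroup, versus ~1024
(nice-0 weight) per task once the group gets 13 * 1024 shares.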

Thank you for the tests. This may not be intuitive, but the results actually
show that:

1) The patchset improves task group share management and finally accomplishes
   what it is supposed to in terms of fair share.

2) The seamlessly combined runnable + blocked load_avg improves the share of
   tasks that are sometimes runnable and sometimes blocked, by preserving the
   blocked load in the average. Fairness is achieved because dbench has the
   same weight as the 12 stress tasks, so the performance of dbench (buried
   among CPU-hogging tasks) is improved.

Peter?

In addition, to correct the odd util_avg value, the following patch should work.
I am sending it here before I send another version.

Thanks,
Yuyang

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8fd7b9..2b0907c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -687,7 +687,7 @@ void init_entity_runnable_average(struct sched_entity *se)
 	sa->load_avg = scale_load_down(se->load.weight);
 	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
 	sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
-	sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
+	sa->util_sum = LOAD_AVG_MAX;
 	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
 }
 #else
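
For what it's worth, here is a minimal standalone sketch of the arithmetic the
one-liner above corrects. It assumes the relation
util_avg ~= (util_sum << SCHED_LOAD_SHIFT) / LOAD_AVG_MAX used by the decay
path and LOAD_AVG_MAX = 47742; both are stated as assumptions here, not quoted
from the patch:

/* util_init_check.c - illustrative only */
#include <stdio.h>

#define SCHED_LOAD_SHIFT	10
#define SCHED_LOAD_SCALE	(1ULL << SCHED_LOAD_SHIFT)	/* 1024 */
#define LOAD_AVG_MAX		47742ULL	/* assumed maximal decayed sum */

int main(void)
{
	unsigned long long util_avg = SCHED_LOAD_SCALE;

	/* Old init: util_sum = util_avg * LOAD_AVG_MAX, ~1024x too large. */
	unsigned long long old_sum = util_avg * LOAD_AVG_MAX;
	printf("old: util_sum=%llu -> implied util_avg=%llu\n",
	       old_sum, (old_sum << SCHED_LOAD_SHIFT) / LOAD_AVG_MAX);

	/* New init: util_sum = LOAD_AVG_MAX, consistent with util_avg == 1024. */
	unsigned long long new_sum = LOAD_AVG_MAX;
	printf("new: util_sum=%llu -> implied util_avg=%llu\n",
	       new_sum, (new_sum << SCHED_LOAD_SHIFT) / LOAD_AVG_MAX);

	return 0;
}

The old initialization implied a util_avg of about SCHED_LOAD_SCALE squared
(hence the odd value), while the new one is consistent with a newly forked
task starting at util_avg == SCHED_LOAD_SCALE.
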
--