Message-ID: <BLU437-SMTP26FF0C83CB1EA7DECF2FAC80A50@phx.gbl>
Date:	Thu, 18 Jun 2015 14:31:00 +0800
From:	Wanpeng Li <wanpeng.li@...mail.com>
To:	Yuyang Du <yuyang.du@...el.com>, Boqun Feng <boqun.feng@...il.com>
CC:	mingo@...nel.org, peterz@...radead.org,
	linux-kernel@...r.kernel.org, pjt@...gle.com, bsegall@...gle.com,
	morten.rasmussen@....com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, len.brown@...el.com,
	rafael.j.wysocki@...el.com, fengguang.wu@...el.com,
	srikar@...ux.vnet.ibm.com
Subject: Re: [Resend PATCH v8 0/4] sched: Rewrite runnable load and utilization
 average tracking


On 6/17/15 11:11 AM, Yuyang Du wrote:
> Hi,
>
> The sched_debug output is informative; let's first analyze it.
>
> The workload is 12 CPU-hogging tasks (always runnable) and 1 dbench
> task doing fs ops (70% runnable), all running at the same time.
>
> Actually, these 13 tasks are in a task group /autogroup-9617, which
> has weight 1024.
>
> So each of the 13 tasks can contribute at most an average of 79
> (= 1024/13) to the group entity's load_avg:
>
> cfs_rq[0]:/autogroup-9617
> .se->load.weight               : 2
> .se->avg.load_avg              : 0
>
> cfs_rq[1]:/autogroup-9617
> .se->load.weight               : 80
> .se->avg.load_avg              : 79
>
> cfs_rq[2]:/autogroup-9617
> .se->load.weight               : 79
> .se->avg.load_avg              : 78
>
> cfs_rq[3]:/autogroup-9617
> .se->load.weight               : 80
> .se->avg.load_avg              : 81
>
> cfs_rq[4]:/autogroup-9617
> .se->load.weight               : 80
> .se->avg.load_avg              : 79
>
> cfs_rq[5]:/autogroup-9617
> .se->load.weight               : 79
> .se->avg.load_avg              : 77
>
> cfs_rq[6]:/autogroup-9617
> .se->load.weight               : 159
> .se->avg.load_avg              : 156
>
> cfs_rq[7]:/autogroup-9617
> .se->load.weight               : 64  (dbench)
> .se->avg.load_avg              : 50

How did you figure out that this one is dbench?

Regards,
Wanpeng Li

>
> cfs_rq[8]:/autogroup-9617
> .se->load.weight               : 80
> .se->avg.load_avg              : 78
>
> cfs_rq[9]:/autogroup-9617
> .se->load.weight               : 159
> .se->avg.load_avg              : 156
>
> cfs_rq[10]:/autogroup-9617
> .se->load.weight               : 80
> .se->avg.load_avg              : 78
>
> cfs_rq[11]:/autogroup-9617
> .se->load.weight               : 79
> .se->avg.load_avg              : 78
>
> So the runnable load avg accrued in the task group structure looks
> very good.
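
As a back-of-envelope check of the 1024/13 figure above: a group se's
weight on a given CPU is roughly the group's shares scaled by the
fraction of the group's load sitting on that CPU. A minimal sketch of
that arithmetic (illustrative names only, not kernel code):

#include <stdio.h>

/*
 * Illustrative only: a group se's per-CPU weight is approximately
 * tg_shares * cpu_load / total_group_load.  With 13 equal nice-0
 * tasks, a CPU holding one task gets ~1024/13 = 78 and a CPU holding
 * two gets ~157, matching the ~79/~159 values in the dump above.
 */
static long group_se_weight(long tg_shares, long cpu_load, long total_load)
{
	return total_load ? tg_shares * cpu_load / total_load : 0;
}

int main(void)
{
	long nice0 = 1024;            /* per-task nice-0 weight */
	long total = 13 * nice0;      /* 13 runnable tasks      */

	printf("one task on cpu:  %ld\n", group_se_weight(1024, nice0, total));
	printf("two tasks on cpu: %ld\n", group_se_weight(1024, 2 * nice0, total));
	return 0;
}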
>
> However, why is cpu0 so underloaded?
>
> The top-level cfs_rq's load_avg on each CPU is:
>
> cfs_rq[0]: 754
> cfs_rq[1]: 81
> cfs_rq[2]: 85
> cfs_rq[3]: 80
> cfs_rq[4]: 142
> cfs_rq[5]: 86
> cfs_rq[6]: 159
> cfs_rq[7]: 264
> cfs_rq[8]: 79
> cfs_rq[9]: 156
> cfs_rq[10]: 78
> cfs_rq[11]: 79
>
> We see cfs_rq[0]'s load_avg is 754 even though it is underloaded.
>
> So the problem is:
>
> 1) The tasks in the workload have too small a weight (only 79),
>     because they share a task group.
>
> 2) Probably some "high"-weight task that is runnable for only a short
>     time contributes a "big" amount to the cfs_rq's load_avg.
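
Point 2) follows from how the per-entity load tracking decays history:
each ~1ms period is weighted by y^age, with y chosen so that y^32 =
1/2, so the most recent periods dominate the sum. A rough
floating-point model of that series (the kernel uses fixed-point
tables; the function below is only an illustration):

#include <stdio.h>
#include <math.h>

/* A period that ended n periods ago is weighted by y^n, y^32 = 1/2. */
static double load_avg(double weight, int ran, int blocked)
{
	double y = pow(0.5, 1.0 / 32.0);   /* ~0.97857 */
	double max = 1.0 / (1.0 - y);      /* always-runnable limit */
	double sum = 0.0;

	/* runnable for 'ran' periods, then blocked for 'blocked' periods */
	for (int n = blocked; n < blocked + ran; n++)
		sum += pow(y, n);
	return weight * sum / max;
}

int main(void)
{
	/* a weight-1024 entity runnable for just the last 50ms: ~677 */
	printf("1024-weight, ran 50ms: %.0f\n", load_avg(1024, 50, 0));
	/* one of the 79-weight hogs, effectively always runnable: ~79 */
	printf("79-weight, always ran: %.0f\n", load_avg(79, 4000, 0));
	return 0;
}

So a single "high"-weight entity that ran only briefly can contribute
several hundred to a cfs_rq's load_avg, dwarfing the 79-weight hogs,
which is one plausible source of the 754 on cfs_rq[0].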
>
> The patchset does what it set out to do:
>
> 1) very precise tracking of the task group's load avg, from the group
>     to its children tasks and from the children tasks back to the group.
>
> 2) the combined runnable + blocked load_avg is in effect, so the
>     blocked avg makes its impact felt.
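
On point 2): with blocked time decaying instead of vanishing, a
periodic task settles near duty_cycle * weight, which is roughly
consistent with the dbench entity above (50/64 ~= 0.78 for a ~70%
runnable task). A toy simulation under the same y^32 = 1/2 decay,
again purely illustrative:

#include <stdio.h>
#include <math.h>

/*
 * Toy steady state of the decay for a periodic task: decay by y each
 * 1ms period, accrue only while runnable.  A 7ms-on / 3ms-off cycle
 * settles around 0.7 of the always-runnable limit (it oscillates a
 * little with the phase of the cycle; ~0.68 at the phase sampled here).
 */
int main(void)
{
	double y = pow(0.5, 1.0 / 32.0);
	double max = 1.0 / (1.0 - y);
	double sum = 0.0;

	for (int t = 0; t < 10000; t++)
		sum = sum * y + ((t % 10) < 7 ? 1.0 : 0.0);

	printf("duty-cycle load fraction: %.2f\n", sum / max);
	return 0;
}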
>
> I will try to figure out what produces cfs_rq[0]'s 754 load_avg, but
> I also think the tasks' weights are so small that it is very easy for
> them to end up fairly "imbalanced" ....
>
> Peter, Ben, and others?
>
> In addition, the util_avg is sometimes insanely big; I think I have
> already found the problem.
>
> Thanks,
> Yuyang
>
> On Wed, Jun 17, 2015 at 01:15:01PM +0800, Boqun Feng wrote:
>> On Wed, Jun 17, 2015 at 11:06:50AM +0800, Boqun Feng wrote:
>>> Hi Yuyang,
>>>
>>> I've run the following test on tip/master, both without and with your
>>> patchset:
>>>
>>> On a 12-core system (Intel(R) Xeon(R) CPU X5690 @ 3.47GHz)
>>> run stress --cpu 12
>>> run dbench 1
>> Sorry, I forgot to say that `stress --cpu 12` and `dbench 1` run
>> simultaneously. Thanks to Yuyang for reminding me of that.
>>
>> Regards,
>> Boqun
