Message-ID: <37E52D09333DE2469A03574C88DBF40F024EBE2F@pdsmsx414.ccr.corp.intel.com>
Date:	Mon, 18 Aug 2008 16:45:36 +0800
From:	"Zhang, Yanmin" <yanmin.zhang@...el.com>
To:	"Ingo Molnar" <mingo@...e.hu>
Cc:	<a.p.zijlstra@...llo.nl>,
	"Linux Kernel Mailing List" <linux-kernel@...r.kernel.org>
Subject: RE: scale sysctl_sched_shares_ratelimit with nr_cpus

>>-----Original Message-----
>>From: Ingo Molnar [mailto:mingo@...e.hu]
>>Sent: Monday, August 18, 2008 4:42 PM
>>To: Zhang, Yanmin
>>Cc: a.p.zijlstra@...llo.nl; Linux Kernel Mailing List
>>Subject: Re: scale sysctl_sched_shares_ratelimit with nr_cpus
>>
>>
>>* Zhang, Yanmin <yanmin.zhang@...el.com> wrote:
>>
>>> >>Does a scheduler trace show anything about why that drop happens?
>>> >>Do something like this to trace the scheduler:
>>> >>
>>> >>assuming debugfs is mounted under /debug and CONFIG_SCHED_TRACER=y:
>>> >>
>>> >>  echo 1 > /debug/tracing/tracing_cpumask
>>> >>  echo sched_switch > /debug/tracing/current_tracer
>>> >>  cat /debug/tracing/trace_pipe > trace.txt
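[YM] (For anyone reproducing this, a minimal sketch of the full session;
it assumes debugfs is not mounted yet and that the workload runs on cpu 0:)

    # mount debugfs if it is not already mounted
    mount -t debugfs nodev /debug
    # restrict tracing to cpu 0 (cpumask 0x1)
    echo 1 > /debug/tracing/tracing_cpumask
    # select the sched_switch tracer
    echo sched_switch > /debug/tracing/current_tracer
    # capture events while the benchmark runs; stop with Ctrl-C
    cat /debug/tracing/trace_pipe > trace.txt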
>>> [YM] Thanks for your good pointer. I collected the data and didn't
>>> find anything abnormal except the pid of the waker.
>>>
>>>     Receiver-197-13665 [00]  1369.966423:  13665:120:R   + 13607:120:S
>>>     Receiver-197-13665 [00]  1369.966440:  13665:120:R   + 13611:120:S
>>>     Receiver-197-13665 [00]  1369.966458:  13665:120:R   + 13615:120:S
>>>     Receiver-197-13665 [00]  1369.966463:  13665:120:R   + 13619:120:S
>>>     Receiver-197-13665 [00]  1369.966466:  13665:120:R   + 13623:120:S
>>>     Receiver-197-13665 [00]  1369.966469:  13665:120:R   + 13627:120:S
>>>     Receiver-197-13665 [00]  1369.966475:  13665:120:R   + 13631:120:S
>>>     Receiver-197-13665 [00]  1369.966480:  13665:120:R   + 13635:120:S
>>>     Receiver-197-13665 [00]  1369.966485:  13665:120:R   + 13639:120:S
>>>     Receiver-197-13665 [00]  1369.966495:  13665:120:R   + 13643:120:S
>>>     Receiver-197-13665 [00]  1369.966507:  13871:120:R   + 13647:120:S
>>> In the last line above, the waker pid is 13871 while the current pid
>>> is 13665. I found lots of such mismatched entries (see the awk sketch
>>> after this quoted block).
>>>
>>>     Receiver-197-13665 [00]  1369.966513:  13465:120:R   + 13651:120:S
>>>     Receiver-197-13665 [00]  1369.966516:  13665:120:R   + 13655:120:S
>>>     Receiver-197-13665 [00]  1369.966521:  13665:120:R   + 13659:120:S
>>>     Receiver-197-13665 [00]  1369.966530:  13665:120:R   + 13667:120:S
>>>     Receiver-197-13665 [00]  1369.966544:  13883:120:R   + 13663:120:S
>>>     Receiver-197-13665 [00]  1369.966549:  13665:120:R ==> 13667:120:R
>>>       Sender-140-13667 [00]  1369.966573:  13351:120:R   + 13668:120:S
>>>       Sender-140-13667 [00]  1369.966578:  13667:120:R ==> 13659:120:R
>>>
>>>
>>> BTW, I analyzed the schedstat data and found that wake_affine and
>>> load_balance_newidle seem abnormal: 2.6.27-rc has more task pulls. I
>>> set CONFIG_GROUP_SCHED=n for the above testing.
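[YM] A quick way to flag the waker/current mismatches in a trace like the
one above (a sketch; it assumes the pid is the last "-"-separated field of
the task name, as in this sched_switch output):

    # print wakeup events whose waker pid differs from the current task's pid
    awk '/\+/ {
        n = split($1, c, "-"); cur = c[n]   # pid embedded in the comm field
        split($4, w, ":")                   # waker as pid:prio:state
        if (w[1] != cur) print
    }' trace.txt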
>>
>>hm, does this mean there's too much idle time during the test run,
>>because we don't load-balance aggressively enough?
[YM] With 2.6.26, cpu idle is about 6%; with 2.6.27-rc, idle is about
0~1%. It seems VolanoMark prefers some idle time. I diffed the scheduler
source code and couldn't find why load balancing pulls more tasks
successfully in 2.6.27-rc.
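One way to quantify the extra pulls is to snapshot /proc/schedstat around
a benchmark run on each kernel and compare the deltas (a rough sketch; the
benchmark command is a placeholder):

    cat /proc/schedstat > before.txt
    ./run-volanomark.sh        # placeholder for the actual benchmark run
    cat /proc/schedstat > after.txt
    # the per-domain load_balance and ttwu_* counters are among what changes
    diff before.txt after.txt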
