Message-ID: <50FD005C.8040402@linux.vnet.ibm.com>
Date:	Mon, 21 Jan 2013 16:46:20 +0800
From:	Michael Wang <wangyun@...ux.vnet.ibm.com>
To:	Mike Galbraith <bitbucket@...ine.de>
CC:	linux-kernel@...r.kernel.org, mingo@...hat.com,
	peterz@...radead.org, mingo@...nel.org, a.p.zijlstra@...llo.nl
Subject: Re: [RFC PATCH 0/2] sched: simplify the select_task_rq_fair()

On 01/21/2013 04:26 PM, Mike Galbraith wrote:
> On Mon, 2013-01-21 at 15:34 +0800, Michael Wang wrote: 
>> On 01/21/2013 02:42 PM, Mike Galbraith wrote:
>>> On Mon, 2013-01-21 at 13:07 +0800, Michael Wang wrote:
>>>
>>>> That seems like the default one; could you please show me the numbers in
>>>> your datapoints file?
>>>
>>> Yup, I do not touch the workfile.  Datapoints is what you see in the
>>> tabulated result...
>>>
>>> 1
>>> 1
>>> 1
>>> 5
>>> 5
>>> 5
>>> 10
>>> 10
>>> 10
>>> ...
>>>
>>> so it does three consecutive runs at each load level.  I quiesce the
>>> box, set the governor to performance, "echo 250 32000 32 4096 >
>>> /proc/sys/kernel/sem", then run ./multitask -nl -f and point it
>>> at ./datapoints.
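(An illustrative sketch of that sequence, assuming aim7's multitask binary
and the datapoints file sit in the current directory; the governor loop and
the datapoints generator below are my assumptions, not literally Mike's
commands:)

	# pin the CPU frequency so runs are comparable
	for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
		echo performance > $g
	done

	# SysV semaphore limits: SEMMSL SEMMNS SEMOPM SEMMNI
	echo 250 32000 32 4096 > /proc/sys/kernel/sem

	# three consecutive runs at each load level
	for n in 1 5 10 20 40 80 160 320 640 1280 2560; do
		echo $n; echo $n; echo $n
	done > datapoints

	./multitask -nl -f	# point it at ./datapoints when prompted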
>>
>> I have changed the "/proc/sys/kernel/sem" to:
>>
>> 2000    2048000 256     1024
>>
>> and ran a few rounds; it seems I can't reproduce this issue on my 12-CPU
>> x86 server:
>>
>> Tasks  prev jobs/min  post jobs/min
>>     1         508.39         506.69
>>     5        2792.63        2792.63
>>    10        5454.55        5449.64
>>    20       10262.49       10271.19
>>    40       18089.55       18184.55
>>    80       28995.22       28960.57
>>   160       41365.19       41613.73
>>   320       53099.67       52767.35
>>   640       61308.88       61483.83
>>  1280       66707.95       66484.96
>>  2560       69736.58       69350.02
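(A hypothetical one-liner to make the comparison explicit, assuming the rows
above are saved, with the header stripped and the ">>" quoting removed, as
results.txt:)

	awk '{ printf "%5d tasks: %+6.2f%%\n", $1, 100*($3-$2)/$2 }' results.txt

Every delta comes out within about ±0.7%, consistent with the "almost
nothing changed" reading that follows.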
>>
>> Almost nothing changed... I'd like to find another machine and run the
>> test again later.
> 
> Hm.  Those numbers look odd.  OK, I've got 8 more cores, but your heavy-
> load throughput is low.  When I look at the low-end numbers, it seems your
> cores are more macho than my 2.27 GHz EX cores, so it should have been a
> lot closer.  Oh wait, you said "12 cpu"... so one 6-core package + HT?  This
> box is 2 NUMA nodes (was 4), i.e. 2 (was 4) 10-core packages + HT.

It's one package with 12 CPUs, and only a single physical socket:

Intel(R) Xeon(R) CPU           X5690  @ 3.47GHz

So does that mean the issue is related to machines with multiple NUMA
nodes?
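(One way to check the topology on both boxes, as a sketch; lscpu's exact
field names vary a bit by version:)

	lscpu | grep -E 'Socket|Core|Thread|NUMA'	# sockets, cores/socket, threads/core, NUMA nodes
	numactl --hardware				# per-node CPU and memory layout

That would also show whether the 12 CPUs here are 12 physical cores or, per
Mike's guess, 6 cores with HT.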

Regards,
Michael Wang

> 
> -Mike
> 

