Date:	Sat, 5 Nov 2011 09:13:01 +0100
From:	Paweł Sikora <pluto@...k.net>
To:	linux-kernel@...r.kernel.org
Cc:	mingo@...e.hu, peterz@...radead.org, arekm@...-linux.org,
	nai.xia@...il.com
Subject:	High [migration/X] CPU time

Hi,

I'm testing a dual-Opteron machine (2x8 cores + 2x32 GB RAM), 10 days so far,
under quite heavy CPU processing and basic I/O usage (storing/accessing data
over NFS and a local striped RAID). During these tests I've recorded something
interesting about the migration/X kernel threads:

[root@hal ~]# uname -a                          
Linux hal 3.0.8-vs2.3.1-dirty #6 SMP Tue Oct 25 10:07:50 CEST 2011 x86_64 AMD_Opteron(tm)_Processor_6128 PLD Linux

[root@hal ~]# uptime                            
 08:19:40 up 10 days, 19:53,  4 users,  load average: 13.21, 13.11, 13.14

[root@hal ~]# ps aux|grep migra                       
root         6  0.0  0.0      0     0 ?        S    Oct25   0:00 [migration/0]
root         8 97.1  0.0      0     0 ?        S    Oct25 15151:59 [migration/1]
root        13 33.3  0.0      0     0 ?        S    Oct25 5202:15 [migration/2]
root        17 98.1  0.0      0     0 ?        S    Oct25 15309:01 [migration/3]
root        21 66.5  0.0      0     0 ?        S    Oct25 10370:14 [migration/4]
root        25 62.1  0.0      0     0 ?        S    Oct25 9698:11 [migration/5]
root        29 65.9  0.0      0     0 ?        S    Oct25 10283:22 [migration/6]
root        33 58.9  0.0      0     0 ?        S    Oct25 9190:28 [migration/7]
root        37  0.0  0.0      0     0 ?        S    Oct25   0:00 [migration/8]
root        41 91.9  0.0      0     0 ?        S    Oct25 14338:30 [migration/9]
root        45 27.5  0.0      0     0 ?        S    Oct25 4290:00 [migration/10]
root        49 64.6  0.0      0     0 ?        S    Oct25 10081:38 [migration/11]
root        53 98.9  0.0      0     0 ?        S    Oct25 15435:34 [migration/12]
root        57 65.8  0.0      0     0 ?        S    Oct25 10272:57 [migration/13]
root        61 65.6  0.0      0     0 ?        S    Oct25 10232:29 [migration/14]
root        65 66.7  0.0      0     0 ?        S    Oct25 10403:09 [migration/15]
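
(The TIME column above is cumulative cpu time; to see how fast it is still
growing, a small sampling loop over /proc/<pid>/stat works too. This is only a
rough sketch: it assumes the usual stat layout (utime+stime in fields 14 and
15, in clock ticks) and USER_HZ=100 (check `getconf CLK_TCK`), and the 60 s
interval is arbitrary:

#!/bin/sh
# snapshot utime+stime (clock ticks) for every migration/N kernel thread
snap() {
    awk '$2 ~ /^\(migration\// { print $2, $14 + $15 }' /proc/[0-9]*/stat 2>/dev/null | sort
}
snap > /tmp/mig.0
sleep 60
snap > /tmp/mig.1
# delta ticks / (60 s * 100 ticks/s) * 100 = percent cpu over the interval
join /tmp/mig.0 /tmp/mig.1 | awk '{ printf "%-16s %6.1f%% cpu\n", $1, ($3 - $2) / 60 }'

The percentages it prints should roughly match the %CPU column from ps above.)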

[root@hal ~]# ps aux|grep irq
root         3  0.0  0.0      0     0 ?        S    Oct25   1:18 [ksoftirqd/0]
root        10  0.0  0.0      0     0 ?        S    Oct25   1:07 [ksoftirqd/1]
root        15  0.0  0.0      0     0 ?        S    Oct25   1:09 [ksoftirqd/2]
root        19  0.0  0.0      0     0 ?        S    Oct25   1:19 [ksoftirqd/3]
root        23  0.0  0.0      0     0 ?        S    Oct25   1:30 [ksoftirqd/4]
root        27  0.0  0.0      0     0 ?        S    Oct25   1:15 [ksoftirqd/5]
root        31  0.0  0.0      0     0 ?        S    Oct25   1:18 [ksoftirqd/6]
root        35  0.0  0.0      0     0 ?        S    Oct25   1:23 [ksoftirqd/7]
root        39  0.0  0.0      0     0 ?        S    Oct25   1:44 [ksoftirqd/8]
root        43  0.0  0.0      0     0 ?        S    Oct25   1:23 [ksoftirqd/9]
root        47  0.0  0.0      0     0 ?        S    Oct25   1:20 [ksoftirqd/10]
root        51  0.0  0.0      0     0 ?        S    Oct25   1:18 [ksoftirqd/11]
root        55  0.0  0.0      0     0 ?        S    Oct25   1:20 [ksoftirqd/12]
root        59  0.0  0.0      0     0 ?        S    Oct25   1:06 [ksoftirqd/13]
root        63  0.0  0.0      0     0 ?        S    Oct25   1:06 [ksoftirqd/14]
root        67  0.0  0.0      0     0 ?        S    Oct25   1:10 [ksoftirqd/15]
root       961  0.0  0.0      0     0 ?        S    Oct25   0:00 [irq/40-AMD-Vi]
root      2824  0.0  0.0   9628   436 ?        Ss   Oct25   2:51 irqbalance

As you can see, 14 of the migration threads [1..7, 9..15] have eaten between 71
and 257 cpu-hours each, while the other two [0, 8] sleep well. I'm not sure what
causes this much migration activity in only 10 days of uptime. Is it a bug in
the scheduler, a kernel misconfiguration, or an ugly side effect of the
irqbalance daemon?
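
(For clarity, the cpu-hours are just the ps TIME column, which is cumulative
minutes:seconds for these threads, divided by 60. A throwaway awk over the same
output, purely as a sketch:

[root@hal ~]# ps aux | awk '$11 ~ /^\[migration\// { split($10, t, ":"); printf "%-16s %6.1f h\n", $11, t[1] / 60 }'

e.g. 4290 min / 60 = 71.5 h for migration/10, and 15435 min / 60 = 257.3 h for
migration/12.)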

Thanks in advance for any hints.

BR,
Paweł.


Attachments:
  proc.cpuinfo.txt     (text/plain, 14366 bytes)
  proc.interrupts.txt  (text/plain, 9699 bytes)
  proc.config.txt      (text/plain, 131044 bytes)
