Date:	Mon, 08 Sep 2008 08:36:26 -0600
From:	Robert Hancock <hancockr@...w.ca>
To:	Ulrich Windl <ulrich.windl@...uni-regensburg.de>
CC:	linux-kernel@...r.kernel.org
Subject: Re: Q: (2.6.16 & ext3) bad SMP load balancing when writing to ext3 on slow device

Ulrich Windl wrote:
> On 6 Sep 2008 at 12:15, Robert Hancock wrote:
> 
>> Ulrich Windl wrote:
>>> Hi,
>>>
>>> while copying large remote files to a USB memory stick formatted with ext3 using 
>>> scp, I noticed a stall in write speed. Looking at the system with top I saw:
>>> top - 09:25:25 up 55 days, 23:49,  2 users,  load average: 11.09, 7.41, 4.43
>>> Tasks: 128 total,   1 running, 127 sleeping,   0 stopped,   0 zombie
>>> Cpu0  :  7.6%us,  0.3%sy,  0.0%ni,  0.0%id, 90.4%wa,  0.3%hi,  1.3%si,  0.0%st
>>> Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,100.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>> Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,100.0%wa,  0.0%hi,  0.0%si,  0.0%st
>>> Cpu3  :  0.0%us,  1.7%sy,  0.0%ni,  0.0%id, 98.3%wa,  0.0%hi,  0.0%si,  0.0%st
>>> Mem:   1028044k total,  1017956k used,    10088k free,    34784k buffers
>>> Swap:  2097140k total,      616k used,  2096524k free,   733100k cached
>>>
>>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>> 11284 root      18   0 29168 1960 1504 D    2  0.2   0:11.81 scp
>>>   137 root      15   0     0    0    0 D    0  0.0  14:16.59 pdflush
>>> 10865 root      15   0     0    0    0 D    0  0.0   0:00.50 kjournald
>>> 11355 root      15   0     0    0    0 D    0  0.0   0:00.09 pdflush
>>> 11396 root      15   0     0    0    0 D    0  0.0   0:00.12 pdflush
>>> 11397 root      15   0     0    0    0 D    0  0.0   0:00.06 pdflush
>>> 12007 root      15   0     0    0    0 D    0  0.0   0:00.02 pdflush
>>> 12070 root      16   0 23976 2376 1744 R    0  0.2   0:00.28 top
>>> 12294 root      15   0     0    0    0 D    0  0.0   0:00.00 pdflush
>>> 12295 root      15   0     0    0    0 D    0  0.0   0:00.02 pdflush
>>> 12296 root      15   0     0    0    0 D    0  0.0   0:00.02 pdflush
>>> 27490 root      10  -5     0    0    0 D    0  0.0   0:02.93 usb-storage
>>>
>>> First, it's impressive that a single copy job can raise the load to above 10, and 
>>> the next thing is that writing to a slow device can make 4 CPUs (actually two with 
>>> hyperthreading) busy. The pdflush daemons are expected to bring dirty blocks onto 
>>> the device, I guess. Does it make any sense to make four CPUs busy with doing so?
>> They're not busy. IO wait means they have nothing to do other than wait 
>> for IO to complete. It's a bit surprising that you get so many pdflush 
>> threads started up, however.
> 
> Robert,
> 
> back to the question: Assuming the I/O is limited by the controller, communication 
> channel and device, does it ever make any sense to start additional I/O daemons 
> for a device that is already handled by a daemon and doesn't have an alternate 
> communication channel (to make more dirty blocks go onto the device)? (Assuming no 
> daemon serves more than one device).

I suspect this behavior may already have been changed; you may want to 
try a newer kernel and see.
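
For what it's worth, the iowait point is easy to check directly: the
aggregate "cpu" line in /proc/stat carries an iowait field (see
proc(5), field order user nice system idle iowait irq softirq), and
sampling it over an interval shows those CPUs are idle-waiting, not
executing anything. A minimal sketch; the 5-second interval and the
output format are just illustrative:

#include <stdio.h>
#include <unistd.h>

/*
 * Sample the aggregate "cpu" line of /proc/stat twice and report how
 * the interval was spent.  A CPU accounted to iowait is idle with I/O
 * outstanding - it is not executing anything.
 */
static int read_cpu(unsigned long long *busy, unsigned long long *idle,
		    unsigned long long *iowait)
{
	unsigned long long usr, nic, sys, idl, iow, irq, sirq;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return -1;
	if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
		   &usr, &nic, &sys, &idl, &iow, &irq, &sirq) != 7) {
		fclose(f);
		return -1;
	}
	fclose(f);
	*busy = usr + nic + sys + irq + sirq;
	*idle = idl;
	*iowait = iow;
	return 0;
}

int main(void)
{
	unsigned long long b1, i1, w1, b2, i2, w2, total;

	if (read_cpu(&b1, &i1, &w1))
		return 1;
	sleep(5);			/* sampling interval, arbitrary */
	if (read_cpu(&b2, &i2, &w2))
		return 1;
	total = (b2 - b1) + (i2 - i1) + (w2 - w1);
	if (total)
		printf("busy %.1f%%  idle %.1f%%  iowait %.1f%%\n",
		       100.0 * (b2 - b1) / total,
		       100.0 * (i2 - i1) / total,
		       100.0 * (w2 - w1) / total);
	return 0;
}

On the workload above this would report the CPUs as almost entirely
iowait, matching the top output.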
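
As for a single copy job driving the load above 10: Linux's load
average counts tasks in uninterruptible sleep (state D) alongside
runnable ones, so every process blocked on the same slow device adds
to it. A rough sketch that lists them by scanning /proc; the parsing
is illustrative only (a ')' in a process name would confuse it):

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

/*
 * Count tasks in uninterruptible sleep (state 'D').  The load average
 * includes these even though they consume no CPU, which is how a pile
 * of processes blocked on one slow device inflates the load while the
 * CPUs sit idle.
 */
int main(void)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;
	int count = 0;

	if (!proc)
		return 1;
	while ((de = readdir(proc)) != NULL) {
		char path[64], comm[64], state;
		FILE *f;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;		/* not a PID directory */
		snprintf(path, sizeof(path), "/proc/%s/stat", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;		/* task exited meanwhile */
		/* /proc/<pid>/stat: pid (comm) state ... */
		if (fscanf(f, "%*d (%63[^)]) %c", comm, &state) == 2 &&
		    state == 'D') {
			printf("%5s %s\n", de->d_name, comm);
			count++;
		}
		fclose(f);
	}
	closedir(proc);
	printf("%d task(s) in uninterruptible sleep\n", count);
	return 0;
}

Run against the snapshot above it would list scp, kjournald,
usb-storage and the eight pdflush threads: eleven D-state tasks,
matching the 1-minute load average of 11.09.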
