Message-ID: <46B747C7.9040409@gmx.net>
Date: Mon, 06 Aug 2007 18:09:43 +0200
From: Dimitrios Apostolou <jimis@....net>
To: Andrew Morton <akpm@...ux-foundation.org>
CC: linux-kernel@...r.kernel.org
Subject: Re: high system cpu load during intense disk i/o
Andrew Morton wrote:
> On Sun, 5 Aug 2007 19:03:12 +0300 Dimitrios Apostolou <jimis@....net> wrote:
>
>> was my report so complicated?
>
> We're bad.
>
> Seems that your context switch rate when running two instances of
> badblocks against two different disks went batshit insane. It doesn't
> happen here.
Hello again,
I ran some more tests and found that the problem occurs only when
the I/O is writing to disk. Indeed, when I run the two badblocks
instances without the -w switch, read-only that is, the oprofile output
seems normal (two_discs_read.txt). So does the vmstat output (both
invocations are sketched further below):
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 4  2      0  24136  87124  95660    0    0 28288     0  449  724 92  8  0  0
 4  2      0  24076  87136  95648    0    0 28160    12  446  749 91  9  0  0
 4  2      0  24016  87144  95664    0    0 28096    88  444  790 89 11  0  0
 4  2      0  24016  87144  95664    0    0 28288     0  444  705 88 12  0  0
 4  2      0  24016  87144  95660    0    0 28288     0  448  737 95  5  0  0
As you can see, the context switch rate is higher now, but the system
CPU load is much lower than in two_discs_bad.txt.
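For reference, the two runs being compared look roughly like this (the
device names below are assumptions, not necessarily the disks I used):

    # read-write mode (-w is destructive!) -- the case where system
    # CPU time goes through the roof:
    badblocks -w /dev/hda &
    badblocks -w /dev/hdc &

    # read-only mode -- the case that behaves normally:
    badblocks /dev/hda &
    badblocks /dev/hdc &

    # meanwhile, sample system activity once per second:
    vmstat 1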
However, the cron jobs still seem to have a hard time finishing, even
though they now appear to consume about 90% of the CPU time. Could
someone please explain to me a few things that seem vital to
understanding the situation? Firstly, what is that "processor" line in
the oprofile output that has no symbols, and why does *it* take all the
CPU rather than other important processes? Finally, what do the kernel
symbols "__switch_to" and "schedule" represent?
Thanks in advance,
Dimitris
[Attachment: "two_discs_read.txt" (text/plain, 12261 bytes)]