Message-ID: <4F75E46E.2000503@msgid.tls.msk.ru>
Date: Fri, 30 Mar 2012 20:50:54 +0400
From: Michael Tokarev <mjt@....msk.ru>
To: Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: dramatic I/O slowdown after upgrading 2.6.32->3.0
Hello.
I'm observing a dramatic slowdown on several hosts after upgrading
from 2.6.32.y to 3.0.x i686 kernels (both from kernel.org, and in
both cases the version used is close to the latest in its series).
On 2.6.32 everything is fast. On 3.0 the same operations that used
to complete instantly take ages.
For example, among the differences actually observed: the munin-graph
process on 2.6.32 completes in a few seconds, writing to an ext4 /var
filesystem. On 3.0, the same process takes about a minute and keeps
all 5 hard drives (md raid5) 99% busy the whole time.
apt-get upgrade (on Debian/Ubuntu) first reads the current package
status database. This takes about 3 seconds on a freshly booted
2.6.32, and about 40 seconds on a freshly booted 3.0, again keeping
all 5 HDDs 99% busy (according to iostat).
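
For reference, timings like the above can be reproduced with something
along these lines (assuming sysstat's iostat; -s makes apt-get only
simulate the upgrade, so just the status database gets read):

  $ time apt-get -s upgrade >/dev/null   # reads the package database
  $ iostat -x 1                          # watch %util of the drives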
Only the kernel is different; everything else is exactly the same.
I can reboot into 2.6.32 again after running 3.0, and the system
is fast again.
The machine is relatively old: an IBM xSeries 345 server with a
2.66GHz Xeon (stepping 9) CPU, a Broadcom chipset, an LSI Logic /
Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI controller
and 5x74GB pSCSI drives. But that is obviously no reason for it to
run _this_ slow... ;)
There's another machine here, with an AMD BE-2400 CPU, nVidia MCP55
chipset, AHA-3940U2x pSCSI controller and a set of 74GB HDDs. It
shows similar symptoms after upgrading from 2.6.32 to 3.0 -- all
I/O becomes very slow, with all HDDs staying busy for long periods.
What's the best way to debug this issue?
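
If it helps, I can collect more data on both kernels -- e.g. a block
trace along these lines (assuming blktrace/blkparse are available on
these boxes):

  # trace one of the raid5 member disks for 30 seconds
  $ blktrace -d /dev/sda -w 30 -o trace
  $ blkparse -i trace | less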
Thank you!
/mjt