Message-ID: <20070806204853.6a693c4b@the-village.bc.nu>
Date: Mon, 6 Aug 2007 20:48:53 +0100
From: Alan Cox <alan@...rguk.ukuu.org.uk>
To: Dimitrios Apostolou <jimis@....net>
Cc: RafaĆ Bilski <rafalbilski@...eria.pl>,
linux-kernel@...r.kernel.org
Subject: Re: high system cpu load during intense disk i/o
> > In your oprofile output I find "acpi_pm_read" particularly
> > interesting. Unlike the other VIA chipsets I know, yours doesn't use
> > VLink to connect the northbridge to the southbridge. Instead, the PCI
> > bus connects the two. As you probably know, the maximal PCI
> > throughput is 133MB/s. In theory; in practice probably less.
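(For reference, the 133 figure is just the conventional PCI bus
arithmetic: a 32-bit bus at 33MHz moves 4 bytes per clock, so
33.3 million transfers/s x 4 bytes ~ 133MB/s of theoretical burst
bandwidth, and that is shared by every device on the bus.)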
acpi_pm_read is capable of disappearing into SMM traps, which will make
it look very slow.
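For context, the read path itself is tiny; a minimal sketch, modelled on
drivers/clocksource/acpi_pm.c (names taken from that file), is:

#include <linux/types.h>
#include <asm/io.h>

#define ACPI_PM_MASK 0xffffffUL	/* the PM timer counter is 24 bits wide */

extern u32 pmtmr_ioport;	/* timer port, found in the ACPI FADT at boot */

static inline u32 read_pmtmr(void)
{
	/* a single port read, masked to the timer's 24 valid bits */
	return inl(pmtmr_ioport) & ACPI_PM_MASK;
}

The inl() is exactly where a chipset/BIOS can trap into SMM, so a read
that looks trivial in C may stall the CPU for a long, invisible while -
which is why the samples pile up on acpi_pm_read in the oprofile output.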
> about 15MB/s for both disks. When reading I get about 30MB/s, again
> from both disks. The other disk, the small one, is mostly idle, except
> for writing little bits and bytes now and then. Since the problem
> occurs when writing, I think 15MB/s is too little to saturate the PCI
> bus.
It's about right for some of the older VIA chipsets, but if you are
seeing a speed loss then we need to know precisely which kernel version
the speed dropped at. There could be an I/O scheduling issue that your
system shows up, or some kind of PCI bus contention when both disks are
active at once.
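One way to separate the two (a hypothetical little test program, not
anything from this thread) is to time raw sequential reads from each
disk, first alone and then with both running at once:

/* diskread.c - crude sequential read benchmark for a block device.
 * Build and run (device names are only examples):
 *   cc -O2 -o diskread diskread.c
 *   ./diskread /dev/hda & ./diskread /dev/hdc & wait
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define CHUNK (1024 * 1024)	/* 1MiB per read() */
#define TOTAL (256L * CHUNK)	/* stop after 256MiB */

int main(int argc, char **argv)
{
	struct timeval t0, t1;
	char *buf = malloc(CHUNK);
	long done = 0;
	double secs;
	int fd;

	if (argc != 2 || !buf) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	gettimeofday(&t0, NULL);
	while (done < TOTAL) {
		ssize_t n = read(fd, buf, CHUNK);
		if (n <= 0)
			break;	/* EOF or error: stop timing */
		done += n;
	}
	gettimeofday(&t1, NULL);
	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%s: %.1f MB/s\n", argv[1], done / secs / 1e6);
	close(fd);
	free(buf);
	return 0;
}

If each disk manages ~30MB/s on its own but the pair together collapses
well below the sum, bus or chipset contention is the likely culprit; if
the same test is slower on a newer kernel, that points at I/O scheduling
instead. (Do echo 3 > /proc/sys/vm/drop_caches between runs so the page
cache doesn't flatter repeat reads.)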
> I have been ignoring these performance regressions because there were
> no stability problems until now. So could it be that I'm reaching the
> 20MB/s driver limit and some requests take too long to be served?
Nope.
Alan