Message-ID: <46B7BF67.8010506@gmx.net>
Date:	Tue, 07 Aug 2007 02:40:07 +0200
From:	Dimitrios Apostolou <jimis@....net>
To:	Alan Cox <alan@...rguk.ukuu.org.uk>
CC:	Rafał Bilski <rafalbilski@...eria.pl>,
	linux-kernel@...r.kernel.org
Subject: Re: high system cpu load during intense disk i/o

Hi Alan,

Alan Cox wrote:
>>> In your oprofile output I find "acpi_pm_read" particularly interesting.
>>> Unlike the other VIA chipsets I know, yours doesn't use VLink to
>>> connect the northbridge to the southbridge. Instead a PCI bus connects
>>> the two. As you probably know, the maximum PCI throughput is 133MiB/s.
>>> In theory. In practice probably less.
> 
> acpi_pm_read is capable of disappearing into SMM traps, which will make
> it look very slow.

What is an SMM trap? I googled a bit but didn't get it...
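
In the meantime, here is a rough way I could measure how expensive each
clock read actually is on this box. This is only a sketch, and it assumes
gettimeofday() is serviced by the acpi_pm clocksource here:

/* clockcost.c - rough estimate of the per-call cost of gettimeofday().
 * Just a sketch; assumes the kernel services it through the current
 * clocksource (acpi_pm on this box, if I read Alan right). */
#include <stdio.h>
#include <sys/time.h>

#define CALLS 1000000

int main(void)
{
	struct timeval start, end, tmp;
	long i, elapsed_us;

	gettimeofday(&start, NULL);
	for (i = 0; i < CALLS; i++)
		gettimeofday(&tmp, NULL);
	gettimeofday(&end, NULL);

	elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
		   + (end.tv_usec - start.tv_usec);
	printf("%d calls in %ld us (%.2f us/call)\n",
	       CALLS, elapsed_us, (double)elapsed_us / CALLS);
	return 0;
}

If that reports several microseconds per call instead of a fraction of
one, it would at least confirm that the clock reads themselves are slow.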

> 
>> about 15MB/s for both disks. When reading I get about 30MB/s, again from
>> both disks. The other disk, the small one, is mostly idle, except for
>> writing small amounts now and then. Since the problem occurs when
>> writing, 15MB/s is, I think, far too little to saturate the PCI bus.
> 
> It's about right for some of the older VIA chipsets, but if you are seeing
> speed loss then we need to know precisely which kernels the speed dropped
> at. Could be there is an I/O scheduling issue that your system shows up, or
> some kind of PCI bus contention when both disks are active at once.
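
Regarding the I/O scheduler: I can check which elevator each disk is using
through sysfs. A minimal sketch (the device names hda/hdb are my
assumption, they may differ on other setups):

/* sched_check.c - print the elevator in use for each disk (sketch;
 * the device names below are assumptions, adjust as needed). */
#include <stdio.h>

static void show_scheduler(const char *dev)
{
	char path[128], line[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return;
	}
	if (fgets(line, sizeof(line), f))
		printf("%s: %s", dev, line);	/* active one is in brackets */
	fclose(f);
}

int main(void)
{
	show_scheduler("hda");
	show_scheduler("hdb");
	return 0;
}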

I am sure the throughput kept diminishing little by little across many 2.6
releases, rather than dropping sharply at one specific version.
Unfortunately I cannot back up my claim with measurements from older
kernels right now, since the system is hard to boot with them (new udev,
new glibc). However, I promise I'll test in the future (probably using
old liveCDs) and come back with proof.
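
When I do get the older kernels booting, I plan to run the same simple
measurement on each of them so the numbers are comparable. A rough sketch
of what I have in mind (the file path and size are arbitrary choices):

/* writebench.c - time a sequential write + fsync and print MB/s.
 * A rough sketch for comparing kernels; path and size are arbitrary. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define CHUNK (1024 * 1024)	/* 1 MiB per write() */
#define TOTAL_MB 256

int main(void)
{
	static char buf[CHUNK];
	struct timeval start, end;
	double secs;
	int fd, i;

	memset(buf, 0xaa, sizeof(buf));
	fd = open("/tmp/writebench.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	gettimeofday(&start, NULL);
	for (i = 0; i < TOTAL_MB; i++) {
		if (write(fd, buf, CHUNK) != CHUNK) {
			perror("write");
			return 1;
		}
	}
	fsync(fd);	/* make sure the data actually reaches the disk */
	gettimeofday(&end, NULL);
	close(fd);
	unlink("/tmp/writebench.tmp");

	secs = (end.tv_sec - start.tv_sec)
	     + (end.tv_usec - start.tv_usec) / 1e6;
	printf("%d MiB in %.2f s = %.1f MB/s\n",
	       TOTAL_MB, secs, TOTAL_MB / secs);
	return 0;
}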

> 
>> I have been ignoring these performance regressions because there were
>> no stability problems until now. So could it be that I'm reaching the
>> 20MB/s driver limit and some requests take too long to be served?
> 
> Nope.

The reason I'm talking about a "software driver limit" is that I am sure
of a few facts:
- The disks themselves can reach very high speeds (60 MB/s on other
systems with udma5).
- The chipset on this specific motherboard can reach much higher numbers,
as was measured with old kernels.
- There are no cable problems (the cables have been changed) and no
strange dmesg output.

So what is left? Probably only the corresponding kernel module.
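
To rule out the obvious, I can at least confirm that DMA is still enabled
on the drive, using the HDIO_GET_DMA ioctl that hdparm -d also queries.
Just a sketch; /dev/hda is an assumption:

/* dmacheck.c - ask the IDE driver whether (U)DMA is enabled on a drive.
 * Sketch only; the default device name /dev/hda is an assumption. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/hda";
	unsigned long dma = 0;
	int fd;

	fd = open(dev, O_RDONLY | O_NONBLOCK);
	if (fd < 0) {
		perror(dev);
		return 1;
	}
	if (ioctl(fd, HDIO_GET_DMA, &dma) < 0) {
		perror("HDIO_GET_DMA");
		return 1;
	}
	printf("%s: using_dma = %lu\n", dev, dma);
	close(fd);
	return 0;
}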


Thanks,
Dimitris

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
