Message-Id: <200708071615.44452.jimis@gmx.net>
Date:	Tue, 7 Aug 2007 16:15:43 +0300
From:	Dimitrios Apostolou <jimis@....net>
To:	Alan Cox <alan@...rguk.ukuu.org.uk>
Cc:	Rafał Bilski <rafalbilski@...eria.pl>,
	linux-kernel@...r.kernel.org
Subject: Re: high system cpu load during intense disk i/o

On Tuesday 07 August 2007 03:37:08 Alan Cox wrote:
> > > acpi_pm_read is capable of disappearing into  SMM traps which will make
> > > it look very slow.
> >
> > what is an SMM trap? I googled a bit but didn't get it...
>
> One of the less documented bits of the PC architecture. It is possible to
> arrange that the CPU jumps into a special mode when triggered by some
> specific external event. Originally this was used for stuff like APM and
> power management but some laptops use it for stuff like faking the
> keyboard interface and the Geode uses it for tons of stuff.
>
> As SMM mode is basically invisible to the OS what oprofile and friends
> see isn't what really occurs. So you see
>
> 	pci write -> some address
>
> you don't then see
>
> 	SMM
> 	CPU saves processor state
> 	Lots of code runs (eg i2c polling the battery)
> 	code executes RSM
>
> 	Back to the OS
>
> and the next visible profile point. This can make an I/O operation look
> really slow even if it isn't the I/O which is slow.

I always thought x86 was becoming a really dirty architecture; now I think 
it's even uglier than I imagined. :-p Thank you for the thorough explanation. 
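For reference, one can check from userspace which clocksource the kernel is 
actually using, and switch away from the SMM-prone acpi_pm read if another 
source is available. A minimal sketch, assuming the standard 2.6 sysfs layout 
(the paths are the usual ones; adjust if your kernel differs):

```shell
# Standard sysfs node for the active clocksource on 2.6 kernels
cs=/sys/devices/system/clocksource/clocksource0

if [ -r "$cs/current_clocksource" ]; then
    cat "$cs/current_clocksource"      # e.g. acpi_pm
    cat "$cs/available_clocksource"    # e.g. tsc acpi_pm jiffies
    # Switching to the TSC (if listed) avoids the slow acpi_pm read:
    # echo tsc > "$cs/current_clocksource"
else
    echo "no clocksource sysfs node on this kernel"
fi
```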

>
> > the reason I'm talking about a "software driver limit" is because I am
> > sure about some facts:
> > - The disks can reach very high speeds (60 MB/s on other systems with
> > udma5)
>
> Is UDMA5 being selected firstly ?

What the kernel selects by default is udma4 (66 MB/s). I tried forcing udma5 
(100 MB/s) with hdparm even though I think my chipset doesn't support it, and 
indeed there was a difference: after repeated tests, udma4 gives 20 MB/s and 
udma5 gives 22 MB/s. Mostly I'm surprised that I could set this option at all.
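In case it helps anyone reproduce this, a sketch of the hdparm commands 
involved; /dev/hda is an assumption from this thread, so substitute your own 
disk, and note that forcing a mode the chipset or cable can't handle is risky:

```shell
dev=/dev/hda   # assumed old-IDE device name; adjust for your system

if command -v hdparm >/dev/null 2>&1 && [ -b "$dev" ]; then
    hdparm -i "$dev"    # the "UDMA modes" line shows supported/selected modes
    hdparm -tT "$dev"   # buffered and cached read timings
    # Force UDMA5 (needs chipset support and an 80-wire cable):
    # hdparm -X udma5 "$dev"
else
    echo "hdparm or $dev not available on this machine"
fi
```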

>
> > So what is left? Probably only the corresponding kernel module.
>
> Unlikely to be the disk driver as that really hasn't changed tuning for a
> very long time. I/O scheduler interactions are however very possible.

I'm now trying the new libata driver to see what happens... 
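While testing, it may also be worth ruling out the I/O scheduler that Alan 
mentions. A sketch, assuming the standard 2.6 sysfs node for the "hda" disk 
from this thread (adjust the device name as needed):

```shell
# Per-disk scheduler node on 2.6 kernels; "hda" is this thread's disk
q=/sys/block/hda/queue/scheduler

if [ -r "$q" ]; then
    cat "$q"                  # the active scheduler is shown in [brackets]
    # Schedulers can be switched at runtime, e.g.:
    # echo deadline > "$q"
else
    echo "no scheduler node for this disk here"
fi
```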


Thanks, 
Dimitris
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
