Message-ID: <46B748DE.1060108@interia.pl>
Date:	Mon, 06 Aug 2007 18:14:22 +0200
From:	Rafał Bilski <rafalbilski@...eria.pl>
To:	Dimitrios Apostolou <jimis@....net>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: high system cpu load during intense disk i/o

> Hello and thanks for your reply. 
Hello again, 
> The cron job that runs every 10 min on my system is mpop (a 
> fetchmail-like program) and another that runs every 5 min is mrtg. Both 
> normally finish within 1-2 seconds. 
> 
> The fact that these simple cron jobs never finish is certainly because of 
> the high system CPU load. If you look at the two_discs_bad.txt which I attached 
> to my original message, you'll see that *vmlinux*, and specifically the 
> *scheduler*, takes up most of the time. 
> 
> And the fact that this happens only when running two I/O processes, while 
> with only one everything is absolutely snappy (not at all slow, see 
> one_disc.txt), makes me sure that this is a kernel bug. I'd be happy to help 
> but I need some guidance to pinpoint the problem. 
In your oprofile output I find "acpi_pm_read" particularly interesting. Unlike 
the other VIA chipsets I know, yours doesn't use VLink to connect the northbridge 
to the southbridge. Instead, a plain PCI bus connects the two. As you probably 
know, the maximum PCI throughput is 133MiB/s - in theory; in practice probably less.
The ACPI registers are located on the southbridge. This probably means that the 
processor needs access to the PCI bus in order to read the ACPI timer register.
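For illustration, here is a minimal userspace sketch of what acpi_pm_read() 
boils down to: a single inl() from the PM timer I/O port on the southbridge. 
The port number 0x4008 below is only a placeholder (the real address comes from 
the ACPI FADT and is chipset-specific), and it needs root on x86:

/* pmtmr_read.c - read the ACPI PM timer twice from userspace.
 * ASSUMPTION: PMTMR_PORT is a placeholder; take the real address from the
 * FADT (PM_TMR_BLK). Build with: gcc -O2 -o pmtmr_read pmtmr_read.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>            /* inl(), ioperm() - x86 only */

#define PMTMR_PORT   0x4008    /* placeholder, chipset-specific */
#define ACPI_PM_MASK 0xFFFFFF  /* the timer is 24 bits wide on most chipsets */

int main(void)
{
	if (ioperm(PMTMR_PORT, 4, 1)) {   /* needs root */
		perror("ioperm");
		return EXIT_FAILURE;
	}

	/* Each inl() is one I/O-port transaction that crosses the PCI bus to
	 * the southbridge, so it queues behind any DMA the disks generate. */
	unsigned int t1 = inl(PMTMR_PORT) & ACPI_PM_MASK;
	unsigned int t2 = inl(PMTMR_PORT) & ACPI_PM_MASK;

	/* The PM timer ticks at 3.579545 MHz, i.e. ~0.28 us per tick. */
	printf("pmtmr: %u -> %u (delta %u ticks)\n",
	       t1, t2, (t2 - t1) & ACPI_PM_MASK);
	return EXIT_SUCCESS;
}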
Now some math. A 20GiB disk can probably send data at a 20MiB/s rate, and a 200GiB 
disk at about 40MiB/s, so 20 + 2*40 = 100MiB/s. I think this could explain why a 
simple inl() call takes so much time and why your system isn't very responsive.
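To put that next to the bus ceiling, a trivial sketch (the disk rates are the 
same guesses as above, 133MiB/s is the theoretical 32-bit/33MHz PCI maximum):

/* bus_math.c - the back-of-the-envelope calculation spelled out. */
#include <stdio.h>

int main(void)
{
	const double pci_max   = 133.0; /* MiB/s, theoretical PCI ceiling */
	const double small_hdd = 20.0;  /* MiB/s, guess for the 20GiB disk */
	const double large_hdd = 40.0;  /* MiB/s, guess for a 200GiB disk */

	double total = small_hdd + 2.0 * large_hdd;   /* 20 + 2*40 = 100 */
	printf("%.0f MiB/s of disk traffic = %.0f%% of the PCI bus\n",
	       total, 100.0 * total / pci_max);       /* about 75% */
	return 0;
}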
> Thanks, 
> Dimitris
Let me know if you find my theory amazing or amusing.
Rafał


