Date: Fri, 07 Dec 2007 17:29:44 +0100
From: Holger Wolf <wolf@...ux.vnet.ibm.com>
To: Arjan van de Ven <arjan@...radead.org>
CC: Holger.Wolf@...ibm.com, mingo@...e.hu, schwidefsky@...ibm.com, linux-kernel@...r.kernel.org
Subject: Re: Scheduler behaviour

Arjan van de Ven wrote:
> On Wed, 05 Dec 2007 21:15:30 +0100
> Holger Wolf <wolf@...ux.vnet.ibm.com> wrote:
>
>> We discovered a performance degradation with dbench when using kernel
>> 2.6.23 compared to kernel 2.6.22.
>>
>> In our case we booted Linux in an IBM System z9 LPAR with 256 MB of
>> RAM and 4 CPUs. The system uses a striped LV with 16 disks on a
>> storage server connected via 8 4-Gbit links. dbench was run on that
>> system, performing I/O operations on the striped LV; runs were done
>> with 1 to 62 processes. Measurements with a 2.6.22 kernel were
>> compared to measurements with a 2.6.23 kernel. We saw a throughput
>> degradation from 7.2 to 23.4.
>
> this is good news!
> dbench rewards unfair behavior... so higher dbench usually means a
> worse kernel ;)

Tests with 2.6.22 including CFS show the same results. This means the
pressure on the page cache is much higher when all processes run in
parallel. We see this behavior as well with iozone when writing to many
disks with many threads and just 256 MB of memory. So the scheduler
schedules as it should - fair.

regards
Holger
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
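
[Editor's note: the measurement loop described in the quoted report (dbench
runs with 1 to 62 client processes on the striped LV, comparing throughput
across kernels) can be driven by a small script along these lines. This is
a minimal sketch, not the reporters' actual harness: it assumes dbench is
on the PATH, that the LV is mounted at the hypothetical path /mnt/dbench,
and a 60-second runtime per step, which the post does not state.]

import re
import subprocess

MOUNTPOINT = "/mnt/dbench"  # hypothetical mount point of the striped LV
RUNTIME = 60                # seconds per run; an assumption, not from the post

def run_dbench(nprocs):
    # Run dbench with the given client count; it prints a summary line
    # like "Throughput 123.45 MB/sec ..." which we parse for the result.
    out = subprocess.run(
        ["dbench", "-D", MOUNTPOINT, "-t", str(RUNTIME), str(nprocs)],
        capture_output=True, text=True, check=True,
    ).stdout
    m = re.search(r"Throughput\s+([\d.]+)\s+MB/sec", out)
    return float(m.group(1)) if m else None

# Sweep 1..62 processes, as in the report; run once per kernel under test
# and diff the two columns to get the per-step degradation.
for n in range(1, 63):
    print(f"{n}\t{run_dbench(n)}")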