Message-ID: <46273251.4000403@bull.net>
Date:	Thu, 19 Apr 2007 11:11:45 +0200
From:	Valerie Clement <valerie.clement@...l.net>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-ext4@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: Performance degradation with FFSB between 2.6.20 and 2.6.21-rc7

Andrew Morton wrote:
> It could be due to I/O scheduler changes.  Which one are you using?  CFQ?
> 
> Or it could be that there has been some changed behaviour at the VFS/pagecache
> layer: the VFS might be submitting little hunks of lots of files, rather than
> large hunks of few files.
> 
> Or it could be a block-layer thing: perhaps some driver change has caused
> us to be placing less data into the queue.  Which device driver is that machine
> using?
> 
> Being a simple soul, the first thing I'll try when I get near a test box
> will be
> 
> for i in $(seq 1 16)
> do
> 	time dd if=/dev/zero of=$i bs=1M count=1024 &
> done
> 

I first tried the test with dd; the results are similar to those of the
FFSB tests, about 15 percent degradation between 2.6.20.7 and 2.6.21-rc7.
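
For reference, this is roughly how the dd run can be scripted so the
aggregate throughput of the two kernels can be compared. The mount point
/mnt/test and the 16 writers are only examples, following Andrew's
snippet above:

    #!/bin/sh
    # Write 16 x 1 GiB files in parallel on the test filesystem and
    # report the elapsed wall-clock time for the whole run.
    # Aggregate throughput is then roughly 16384 MB / elapsed seconds.
    cd /mnt/test || exit 1
    sync
    start=$(date +%s)
    for i in $(seq 1 16)
    do
            dd if=/dev/zero of=file.$i bs=1M count=1024 &
    done
    wait      # wait for all dd writers to finish
    sync      # make sure the data actually reaches the disk
    end=$(date +%s)
    echo "elapsed: $((end - start)) seconds"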

I'm using the CFQ I/O scheduler. I switched it to the "deadline" one and
the problem goes away: I get similar throughput values with the 2.6.20.7
and 2.6.21-rc7 kernels.
So can we conclude that the regression is due to the CFQ scheduler?
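
For reference, the scheduler can be switched at run time through sysfs,
without rebooting (sda below is only an example; it depends on the disk
used for the test):

    # show the available schedulers, the active one is in brackets
    cat /sys/block/sda/queue/scheduler

    # switch this device from cfq to deadline
    echo deadline > /sys/block/sda/queue/scheduler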

I also checked the device driver in use; its revision number is the same
in 2.6.20 and 2.6.21.
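
One way to compare the driver revision under the two kernels is modinfo;
the module name below is only an example and depends on the disk
controller in the test box:

    # print the version fields of the module, if it declares them
    modinfo sata_nv | grep -i version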

   Valérie
