Message-ID: <20091103115505.3d6e096d@smartjog.com>
Date:	Tue, 3 Nov 2009 11:55:05 +0100
From:	Laurent CORBES <laurent.corbes@...rtjog.com>
To:	"NeilBrown" <neilb@...e.de>
Cc:	"device-mapper development" <dm-devel@...hat.com>,
	akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
	linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [dm-devel] Re: Ext3 sequential read performance drop 2.6.29 ->
  2.6.30,2.6.31,...

Hi all,

> >> > Hi all,
> >> >
> >> > While benchmarking some systems I discovered a big sequential read
> >> > performance drop using ext3 on ~big files. The drop seems to be
> >> > introduced in 2.6.30. I'm testing with 2.6.28.6 -> 2.6.29.6 ->
> >> > 2.6.30.4 -> 2.6.31.3.
> >>
> >> Seems that large performance regressions aren't of interest to this
> >> list :(

Or +200MB/s is enough for a lot of people :)

> > Not sure which list you mean, but dm-devel is for dm, not md.  We're also
> > seeing similarly massive performance drops with md and ext3/xfs as
> > already reported on the list.  Someone tracked it down to writeback
> > changes as usual, but there it got stuck.
> 
> I'm still looking - running some basic tests on 4 filesystems over
> half a dozen recent kernels to see what has been happening.
> 
> I have a suspicion that there are multiple problems.
> In particular, XFS has a strange degradation which was papered over
> by commit c8a4051c3731b.
> I'm beginning to wonder if it was caused by commit 17bc6c30cf6bf
> but I haven't actually tested that yet.

What is really strange is that in all the tests I did, the raw md performance
never dropped: only a few MB/s of difference between kernels (~2%). This may be
related to the way the upper filesystem writes data to the md layer.

I'll run the tests on raw disks to see if there are problems there as well. I
can also test with other RAID levels. Is there any tuning/debugging I can do
for you? I can also set up remote access to this system if needed.
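For reference, a minimal sketch of the kind of comparison described above
(reading the raw md device vs. a file on the filesystem built on top of it).
The device path /dev/md0 and the sizes here are assumptions for illustration,
not taken from the thread; on a real system you would use a multi-GB file and
drop the page cache between runs:

```shell
#!/bin/sh
# Sketch: compare sequential-read throughput at the block layer vs. through
# the filesystem. If the raw-device numbers stay flat across kernels while
# the file read drops, the regression is above the md layer.

# Stand-in for a large file on the ext3 mount (assumed; use real data sizes).
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=1M count=64 2>/dev/null

# On the real system, drop caches between runs (as root):
#   echo 3 > /proc/sys/vm/drop_caches
# and read the raw device directly, e.g.:
#   dd if=/dev/md0 of=/dev/null bs=1M count=4096

# Sequential read through the filesystem; dd's last line reports throughput.
dd if="$FILE" of=/dev/null bs=1M 2>&1 | tail -1

rm -f "$FILE"
```

Running both variants across the kernels under test gives a direct
per-layer comparison rather than a single end-to-end number.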

Thanks.
-- 
Laurent Corbes - laurent.corbes@...rtjog.com
SmartJog SAS | Phone: +33 1 5868 6225 | Fax: +33 1 5868 6255 | www.smartjog.com
27 Blvd Hippolyte Marquès, 94200 Ivry-sur-Seine, France
A TDF Group company
