Date:	Tue, 14 Jul 2009 11:51:18 -0400
From:	Theodore Tso <tytso@....edu>
To:	Adrian Hunter <adrian.hunter@...ia.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Andreas Dilger <adilger@....com>,
	Stephen Tweedie <sct@...hat.com>,
	Artem Bityutskiy <artem.bityutskiy@...ia.com>,
	linux-ext4@...r.kernel.org
Subject: Re: [PATCH 0/2] ext3 HACKs

On Tue, Jul 14, 2009 at 05:02:53PM +0300, Adrian Hunter wrote:
> Hi
>
> We are using linux 2.6.28 and we have a situation where ext3
> can take 30-60 seconds to mount.
>
> The cause is that the underlying device has extremely poor random
> write speed (several orders of magnitude slower than its sequential
> write speed), and journal recovery can involve many small random
> writes.
>
> To alleviate this situation somewhat, I have two moderately ugly
> hacks:
>
> HACK 1: ext3: mount fast even when recovering
> HACK 2: do I/O read requests while ext3 journal recovers
>
> HACK 1 uses an I/O barrier in place of waiting for the recovery I/O
> to be flushed.
>
> HACK 2 crudely throws I/O read requests to the front of the dispatch
> queue until the I/O barrier from HACK 1 is reached.

Have you actually benchmarked these patches, ideally with a fixed
filesystem image so that the two runs require exactly the same number
of blocks to recover?  We implement ordered I/O in terms of doing a
flush, so it would be surprising to see a significant difference in
times.

Also, it would be useful to do a blktrace before and after your
patches, again with a fixed filesystem image so the experiment can be
carefully controlled.

Regards,

						- Ted
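[Editor's note: the controlled benchmark Ted asks for could be sketched roughly as follows. This is a hypothetical script, not from the thread: restore the same dirty ext3 image before each run so every mount has to recover an identical set of journal blocks, capture a blktrace of the recovery I/O, and time the mount. The image path, device name, and mount point are made-up placeholders; it needs root and a scratch device that is safe to overwrite.]

```shell
#!/bin/sh
# Hypothetical sketch of the fixed-image recovery benchmark suggested
# in the thread.  Not a definitive procedure.

run_recovery_bench() {
    IMG=$1; DEV=$2; MNT=$3
    if [ -z "$IMG" ] || [ -z "$DEV" ] || [ -z "$MNT" ]; then
        echo "usage: run_recovery_bench <saved-image> <scratch-device> <mount-point>"
        return 1
    fi

    # Re-seed the scratch device from the saved dirty image, so each
    # run (patched and unpatched kernel) recovers exactly the same
    # number of journal blocks.
    dd if="$IMG" of="$DEV" bs=1M conv=fsync

    # Record the block I/O pattern during recovery; inspect it later
    # with blkparse to compare write patterns before and after the
    # patches.
    blktrace -d "$DEV" -o recovery-trace &
    trace_pid=$!

    # With a dirty journal, the mount time includes journal recovery.
    time mount -t ext3 "$DEV" "$MNT"

    kill "$trace_pid"
    umount "$MNT"
}

# Run only when invoked with arguments, e.g.:
#   run_recovery_bench dirty-ext3.img /dev/mmcblk0p3 /mnt/test
[ $# -ge 3 ] && run_recovery_bench "$@"
```

A saved dirty image can be made once by writing to the filesystem and cutting power (or using a crash-injection harness) before unmount, then copying the device contents off with dd.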