Message-ID: <13684.1175965831@turing-police.cc.vt.edu>
Date:	Sat, 07 Apr 2007 13:10:31 -0400
From:	Valdis.Kletnieks@...edu
To:	Krzysztof Halasa <khc@...waw.pl>
Cc:	johnrobertbanks@...tmail.fm, Jan Harkes <jaharkes@...cmu.edu>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: Reiser4. BEST FILESYSTEM EVER.

On Sat, 07 Apr 2007 16:11:46 +0200, Krzysztof Halasa said:

> > Think about it,... read speeds that are some FOUR times the physical
> > disk read rate,... impossible without the use of compression (or
> > something similar).
> 
> It's really impossible with compression alone unless you're writing
> only zeros or the like. I don't know what bonnie uses for testing,
> but real-life data doesn't compress 4 times. Two times, sometimes,

All depends on your data.  From a recent "compress the old logs" job on
our syslog server:

/logs/lennier.cc.vt.edu/2007/03/maillog-2007-0308:       85.4% -- replaced with /logs/lennier.cc.vt.edu/2007/03/maillog-2007-0308.gz

And it wasn't a tiny file either - it's a busy mail server, and the logs run
to several hundred megabytes a day.  Syslogs *often* compress by 90% or more,
which works out to 10X compression.
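
If anyone wants to check their own logs, here's a minimal sketch (plain
Python, nothing syslog-specific; the output format is just mimicking what
gzip -v prints, and the path is whatever file you feed it):

    #!/usr/bin/env python
    # Quick sketch: gzip a file and report the space saved, like the
    # log line above.  Nothing here is specific to our setup.
    import gzip
    import os
    import shutil
    import sys

    src = sys.argv[1]
    dst = src + ".gz"

    with open(src, "rb") as f_in:
        with gzip.open(dst, "wb") as f_out:
            shutil.copyfileobj(f_in, f_out)  # stream, so big logs don't eat RAM

    before = os.path.getsize(src)
    after = os.path.getsize(dst)
    print("%s: %.1f%% -- replaced with %s"
          % (src, 100.0 * (1.0 - after / float(before)), dst))

Run it on one of your own log files and see how close to 90% you get.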

> but then it will typically be slower than disk access (I mean reads,
> as writes will be much slower).

Actually, as far back as 1998 or so, I was able to document 20% *speedups*
on an AIX system that supported compressed file systems - and that was back
when a 133MHz PowerPC 604e was a *fast* machine.  Since then, CPUs have gotten
faster at a faster rate than disks have, which only increases the speedup.

The basic theory is that unless you're sitting close to 100% CPU, it is *faster*
to burn some CPU compressing a 4K chunk of data down to 2K and then move 2K to
the disk drive than it is to move the full 4K (and likewise to decompress on
the way back).  It's particularly noticeable for larger files - if applying the
compression takes less time than moving the 2M of data it saves you, you win.
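
For the curious, a minimal sketch of that arithmetic (plain Python; the
60 MB/s disk figure is an assumption you should replace with what your
drive actually sustains, and zlib level 6 is just standing in for whatever
the filesystem really uses):

    #!/usr/bin/env python
    # Back-of-envelope check of the "burn CPU to move fewer bytes"
    # argument: sample a real file, time zlib on it, and compare the
    # cost of compress-then-move against moving the raw bytes.
    import sys
    import time
    import zlib

    DISK_MB_S = 60.0                    # ASSUMED sustained disk throughput

    with open(sys.argv[1], "rb") as f:
        data = f.read(8 * 1024 * 1024)  # sample up to 8 MB

    t0 = time.time()
    packed = zlib.compress(data, 6)
    elapsed = time.time() - t0

    mb = len(data) / 1e6
    cpu_mb_s = mb / elapsed             # compression throughput, MB/s
    ratio = float(len(data)) / len(packed)

    t_raw = 1.0 / DISK_MB_S                              # move 1 MB raw
    t_comp = 1.0 / cpu_mb_s + 1.0 / (DISK_MB_S * ratio)  # compress, move 1/ratio MB
    print("compress at %.0f MB/s, ratio %.1f:1" % (cpu_mb_s, ratio))
    print("per MB: raw move %.1f ms, compress+move %.1f ms"
          % (t_raw * 1e3, t_comp * 1e3))

If the compress-then-move number comes out smaller for your data and your
assumed disk speed, the tradeoff wins - which is exactly the claim above.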

