Message-ID: <20070408163131.GA6296@stusta.de>
Date:	Sun, 8 Apr 2007 18:31:31 +0200
From:	Adrian Bunk <bunk@...sta.de>
To:	Valdis.Kletnieks@...edu
Cc:	Krzysztof Halasa <khc@...waw.pl>, johnrobertbanks@...tmail.fm,
	Jan Harkes <jaharkes@...cmu.edu>, linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org
Subject: Re: Reiser4. BEST FILESYSTEM EVER.

On Sat, Apr 07, 2007 at 01:10:31PM -0400, Valdis.Kletnieks@...edu wrote:
> On Sat, 07 Apr 2007 16:11:46 +0200, Krzysztof Halasa said:
> 
> > > Think about it... read speeds that are some FOUR times the physical
> > > disk read rate... impossible without the use of compression (or
> > > something similar).
> > 
> > It's really impossible with compression alone unless you're writing
> > only zeros or the like. I don't know what bonnie uses for testing,
> > but real-life data doesn't compress 4 times. Two times, sometimes,
> 
> All depends on your data.  From a recent "compress the old logs" job on
> our syslog server:
> 
> /logs/lennier.cc.vt.edu/2007/03/maillog-2007-0308:       85.4% -- replaced with /logs/lennier.cc.vt.edu/2007/03/maillog-2007-0308.gz
> 
> And it wasn't a tiny file either - it's a busy mailserver, and the logs
> run to several hundred megabytes a day.  Syslogs *often* compress 90% or
> more, meaning 10X compression.
> 
> > but then it will typically be slower than disk access (I mean reads,
> > as writes will be much slower).
> 
> Actually, as far back as 1998 or so, I was able to document 20% *speedups*
> on an AIX system that supported compressed file systems - and that was from
> when a 133MHz PowerPC 604e was a *fast* machine.  Since then, CPUs have
> gotten faster at a faster rate than disks have, further increasing the
> speedup.
> 
> The basic theory is that unless you're sitting close to 100% CPU, it is
> *faster* to burn some CPU to compress a 4K chunk of data down to 2K and
> move 2K to the disk drive than it is to move 4K.  It's particularly
> noticeable for larger files - if compressing lets you skip moving 2M of
> data, and the compression takes less time than moving that 2M would, you
> win.

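The trade-off described above can be written down directly: compressing
wins whenever the compressor is faster than disk_bandwidth / (1 - ratio),
where ratio is compressed size over original size.  A minimal sketch of
that arithmetic in plain C (the 50 MB/s disk, 2:1 ratio, and 120 MB/s
compressor are illustrative assumptions, not measurements):

#include <stdio.h>

int main(void)
{
	double disk_mb_s     = 50.0;  /* assumed sequential disk bandwidth */
	double ratio         = 0.5;   /* compressed size / original size   */
	double compress_mb_s = 120.0; /* assumed compression throughput    */
	double size_mb       = 2.0;   /* the 2M example from above         */

	/* write raw, vs. compress first and then write less data */
	double t_raw  = size_mb / disk_mb_s;
	double t_comp = size_mb / compress_mb_s + size_mb * ratio / disk_mb_s;

	printf("raw: %.0f ms, compressed: %.0f ms\n",
	       t_raw * 1000.0, t_comp * 1000.0);
	printf("break-even compressor speed: %.0f MB/s\n",
	       disk_mb_s / (1.0 - ratio));
	return 0;
}

With these numbers the compressed path wins (about 37 ms vs. 40 ms for the
2M of data), and the break-even speed is 100 MB/s; as CPUs outpace disks,
that bar only gets easier to clear.
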
Counterpoints:
- not only have CPUs become faster, RAM has become faster too.
  A kernel tree after an allyesconfig build is about 1 GB, which is
  less than half the RAM in my desktop computer.
  If all disk accesses are asynchronous writes without any pressure
  to complete quickly (the data just sits in the page cache either
  way), compression can't improve performance.
- today, much of the bulkier data, like MP3s or movies, is already
  compressed.
- for cases like logfiles or databases, application-specific
  compression should give the best results (see the sketch below).
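
To make the logfile point concrete, here is a minimal user-space sketch
using zlib to measure how repetitive syslog-like text compresses; the
log line format and the 1 MB sample size are assumptions for
illustration:

#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void)
{
	/* build ~1 MB of syslog-like text, repetitive in the way the
	 * maillog quoted above would be; the line format is made up */
	static char src[1 << 20];
	size_t used = 0;
	int i;

	for (i = 0; used + 128 < sizeof(src); i++)
		used += sprintf(src + used,
				"Mar  8 12:%02d:%02d lennier sendmail[%d]: "
				"stat=Sent, delay=00:00:0%d\n",
				i / 60 % 60, i % 60, 1000 + i % 97, i % 10);

	uLongf dlen = compressBound(used);
	Bytef *dst = malloc(dlen);

	if (!dst || compress2(dst, &dlen, (Bytef *)src, used, 6) != Z_OK)
		return 1;

	printf("%lu -> %lu bytes (%.1f%% saved, %.1fx)\n",
	       (unsigned long)used, (unsigned long)dlen,
	       100.0 * (1.0 - (double)dlen / used),
	       (double)used / dlen);
	free(dst);
	return 0;
}

For scale, the 85.4% saving quoted above works out to roughly 7:1
(1 / (1 - 0.854) is about 6.8), consistent with the "syslogs often
compress 90%" observation.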

There might be special cases where compressed filesystems make sense,
but my impression is that filesystem compression is neither important
nor well suited for today's average systems.

cu
Adrian

-- 

       "Is there not promise of rain?" Ling Tan asked suddenly out
        of the darkness. There had been need of rain for many days.
       "Only a promise," Lao Er said.
                                       Pearl S. Buck - Dragon Seed

