Date:	Fri, 02 Nov 2007 23:08:23 -0400
From:	Valdis.Kletnieks@...edu
To:	lsorense@...lub.uwaterloo.ca (Lennart Sorensen)
Cc:	Pavel Machek <pavel@....cz>, Rik van Riel <riel@...riel.com>,
	linux-kernel@...r.kernel.org
Subject: Re: WANTED: kernel projects for CS students

On Mon, 29 Oct 2007 17:44:52 EDT, Lennart Sorensen said:
> On Mon, Oct 29, 2007 at 09:19:11PM +0100, Pavel Machek wrote:

Sorry for the late reply, it's been a zoo of a week here... ;)

> If it doesn't it seems the compression feature is going to be rather
> unpredictable and my optimization would be perfectly within spec and
> make it pretty much useless.
> 
> > Yes. Typically for all zeros. It will be similar for
> > highly-compressible data (pictures, timetables, ....)

And most executables.  There's a reason why my vmlinux files are 11M and my
vmlinuz files are 2M. :)

> That would make it tricky to say if you should ever skip compression due
> to cpu load.  There is a chance cpu load would be better off by doing
> the compression.

IBM's AIX supported filesystem compression on JFS years ago.  I was able to
get up to 30% throughput increases by converting the /usr filesystem to
compressed - because even a 33MHz Power chipset could read in 5 512-byte
blocks and decompress them back to the original 4K faster than the disk could
read in 8 512-byte blocks.  Oh, and it worked for compression on r/w workloads
as well - that was one of the ways to get an RS6K model 250 (which was a
PowerPC 601 chipset, a dead heat with a Mac 6600 - same chipset, same clock)
to handle a million e-mail msgs/day - even /var/spool/mqueue worked better.
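
(Back-of-envelope version of why that wins, ignoring seeks: if reading one
512-byte sector costs t_sector and decompressing a block costs t_decompress,
the compressed path takes 5*t_sector + t_decompress versus 8*t_sector for the
raw one, so it comes out ahead whenever t_decompress < 3*t_sector - and even
that 33MHz box cleared the bar.)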

Given that today there's an even *bigger* disparity between CPU speed and disk
speed, I'd be surprised if it doesn't help today too.  As a first try, you
might consider compressing each 4K filesystem block in place, and only writing
as many sectors as the compressed version takes (with the obvious fix for the
pathological "grows with compression" case: just write it uncompressed).
Probably even more wins can be found if you find a way to store the compressed
chunks in a way that minimizes seeks, but that's a filesystem design issue and
probably too large a project.  (It's easy to do it the stupid way - just store
the whole file compressed - the tough part is doing that without making
lseek() *too* painful.  Trying to figure out where in a .gz file byte 65536
lives... ouch. ;)
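
To make the bookkeeping concrete, here's a rough user-space sketch of that
first-try scheme - plain zlib, a made-up pack_block() helper, nothing
kernel-ready - that compresses a 4K block and only claims as many 512-byte
sectors as the result needs, falling back to storing the raw block when
compression doesn't save at least one full sector:

/* Rough user-space sketch (zlib, not kernel code) of per-4K-block
 * compression: store a block compressed only if that saves at least
 * one 512-byte sector, otherwise store it raw.  pack_block() and the
 * layout are made up for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define BLOCK_SIZE   4096
#define SECTOR_SIZE  512

/* Fills 'out' (needs BLOCK_SIZE plus a little slack for zlib overhead),
 * sets *compressed, and returns how many sectors this block occupies. */
static int pack_block(const unsigned char *block, unsigned char *out,
                      size_t out_cap, int *compressed)
{
    uLongf clen = out_cap;

    if (compress2(out, &clen, block, BLOCK_SIZE, Z_DEFAULT_COMPRESSION) == Z_OK
        && clen <= BLOCK_SIZE - SECTOR_SIZE) {
        /* Saves at least one full sector: keep the compressed form. */
        *compressed = 1;
        return (int)((clen + SECTOR_SIZE - 1) / SECTOR_SIZE);
    }

    /* "Grows with compression" (or doesn't shrink enough):
     * just write the original 8 sectors untouched. */
    memcpy(out, block, BLOCK_SIZE);
    *compressed = 0;
    return BLOCK_SIZE / SECTOR_SIZE;
}

int main(void)
{
    unsigned char block[BLOCK_SIZE];
    unsigned char out[BLOCK_SIZE + 64];     /* compressBound(4096) < 4160 */
    int is_compressed, sectors;

    memset(block, 0, sizeof block);         /* all zeros: best case */
    sectors = pack_block(block, out, sizeof out, &is_compressed);
    printf("zero block:  %d sector(s), compressed=%d\n", sectors, is_compressed);

    srand(42);                              /* random bytes: worst case */
    for (size_t i = 0; i < sizeof block; i++)
        block[i] = (unsigned char)(rand() & 0xff);
    sectors = pack_block(block, out, sizeof out, &is_compressed);
    printf("noisy block: %d sector(s), compressed=%d\n", sectors, is_compressed);

    return 0;
}

(Build with -lz; in the kernel you'd go through the kernel's own zlib instead,
but the sector arithmetic is the same idea.  The "<= BLOCK_SIZE - SECTOR_SIZE"
test is the interesting knob: anything that doesn't save a whole sector isn't
worth the bookkeeping.)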

