Date:	Tue, 28 Feb 2012 11:34:24 -0600
From:	Eric Sandeen <sandeen@...hat.com>
To:	ext4 development <linux-ext4@...r.kernel.org>,
	Lukáš Czerner <lczerner@...hat.com>
Subject: mkfs.ext4 vs. e2fsck discard oddities

I've been testing Lukas' last 2 patches for e2fsck discard, and noticed something a little odd.

If I make a 512M file, loopback mount it, and mkfs.ext4 it with discard, it uses about 17M at that point.
If I then run fsstress on it with a known seed, then run e2fsck -E discard on it, it uses about 52M.

If I repeat the above test telling mkfs.ext4 NOT to discard, I'm left with about 94M after the discarding e2fsck.
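(The "uses about NM" figures above come from du on the sparse backing file; its allocated-block count shrinks when discards reach it as hole punches. A quick sketch of apparent vs. allocated size, with a hypothetical file name:)

```shell
# Apparent (logical) size vs. blocks actually allocated on disk.
truncate -s 512M sparse.img              # 512M logical size, no data blocks yet
apparent=$(stat -c '%s' sparse.img)      # logical size in bytes
allocated=$(( $(stat -c '%b' sparse.img) * $(stat -c '%B' sparse.img) ))
echo "apparent=$apparent allocated=$allocated"
rm -f sparse.img
```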

So it seems that perhaps e2fsck is not discarding everything that it could; after a discarding fsck, we should be left with the same (minimal) number of blocks "in use" regardless of whether mkfs discarded, no?

I guess that's better than discarding _more_ than it should though.  ;)

(I suppose it is possible that this is the underlying filesystem being selective about which discards it accepts, but it behaves the same way on ext4 and xfs backing filesystems)
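(One way to rule that out might be to probe the backing filesystem directly with util-linux fallocate, since the loop driver should turn discards into hole punches on the file; a rough sketch, hypothetical file name:)

```shell
# Probe whether the backing filesystem accepts hole punches at all.
truncate -s 1M probe.img
dd if=/dev/zero of=probe.img bs=4k count=1 conv=notrunc 2>/dev/null
if fallocate --punch-hole --offset 0 --length 4096 probe.img 2>/dev/null; then
    echo "punch-hole supported"
else
    echo "punch-hole not supported"
fi
rm -f probe.img
```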

-Eric

FWIW, sequence of events here, tested with and without "-K" on mkfs.ext4:

dd if=/dev/zero of=fsfile bs=1M count=512
losetup /dev/loop0 fsfile
mkfs.ext4 -F /dev/loop0 &>/dev/null
mount /dev/loop0 mnt/
/root/git/xfstests/ltp/fsstress -s 1 -d mnt/ -n 2000 -p 4
umount mnt/
e2fsck/e2fsck.static -fy -E discard /dev/loop0 > fsck1.out || exit
du -hc fsfile
losetup -d /dev/loop0

