Message-id: <1239045758.7486.80.camel@localhost>
Date:	Mon, 06 Apr 2009 20:22:38 +0100
From:	"Ricardo M. Correia" <Ricardo.M.Correia@....COM>
To:	Andreas Dilger <adilger@....COM>
Cc:	Eric Sandeen <sandeen@...hat.com>, "Theodore Ts'o" <tytso@....edu>,
	linux-ext4@...r.kernel.org, Karel Zak <kzak@...hat.com>
Subject: Re: [PATCH e2fsprogs] Add ZFS detection to libblkid

Hi,

On Sat, 2009-04-04 at 15:25 -0600, Andreas Dilger wrote:
> I _suppose_ there is no hard requirement that the ub_magic is present in
> the first überblock slot at 128kB, but that does make it harder to find.
> In theory we would need to add 256 magic value checks, which seems
> unreasonable.  Ricardo, do you know why the zfs.img.bz2 has bad überblocks
> for the first 4 slots?

Your supposition is correct - there's no requirement that the first
uberblock that gets written to the uberblock array has to be in the
first slot.

The reason that this image has bad uberblocks in the first 4 slots is
that, in the current ZFS implementation, when you create a ZFS pool, the
first uberblock that gets written to disk has txg number 4, and the slot
that gets chosen for each uberblock is "txg_nr % nr_of_uberblock_slots".

So in fact, it's not that the first 4 uberblocks are bad, it's just that
the first 4 slots don't have any uberblocks in them yet.
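To illustrate the slot selection described above, here is a minimal sketch. It assumes the common layout of 128 one-kilobyte slots per 128 KB uberblock array; the function name is hypothetical, not an actual ZFS identifier.

```python
# Hypothetical illustration of ZFS uberblock slot selection:
# slot = txg_nr % nr_of_uberblock_slots.  The constant 128 assumes the
# usual layout of 128 one-kilobyte slots per 128 KB uberblock array.
NR_UBERBLOCK_SLOTS = 128

def uberblock_slot(txg):
    """Slot index chosen for a given transaction group (txg) number."""
    return txg % NR_UBERBLOCK_SLOTS

# The first uberblock written at pool creation has txg 4, so it lands in
# slot 4, leaving slots 0-3 empty.
print([uberblock_slot(txg) for txg in range(4, 8)])  # [4, 5, 6, 7]
```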

However, even though currently it's txg nr 4 that gets written first,
this is an implementation-specific detail that we cannot (or should not)
rely upon.

So I think you're (mostly) right - in theory, a correct implementation
would have to search all the uberblock slots in all 4 labels (2 at the
beginning of the partition and 2 at the end), for a total of 512 magic
offsets. But this is not easy to do with libblkid, because it only
looks for magic values at hard-coded offsets (as opposed to letting a
probe routine search for a filesystem itself, which could use a simple
"for" loop).
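For the record, the exhaustive search sketched above would look roughly like this. This is only an illustration under assumed layout constants (labels of 256 KB each, two at the start and two at the end of the device, the uberblock array 128 KB into each label, 128 slots of 1 KB, and the 0x00bab10c uberblock magic in either byte order); the function name is mine, not libblkid's.

```python
import struct

# Assumed ZFS on-disk layout constants (illustrative, not authoritative):
LABEL_SIZE = 256 * 1024          # each label is 256 KB
UB_ARRAY_OFFSET = 128 * 1024     # uberblock array starts 128 KB into a label
UB_SLOT_SIZE = 1024              # 128 slots of 1 KB each
NR_SLOTS = 128
UB_MAGIC = 0x00bab10c            # uberblock magic

def find_uberblock(path):
    """Scan all 4 labels (2 at the start, 2 at the end of the device)
    for any slot carrying the uberblock magic - 512 offsets in total."""
    with open(path, 'rb') as dev:
        dev.seek(0, 2)
        size = dev.tell()
        label_offsets = [0, LABEL_SIZE,
                         size - 2 * LABEL_SIZE, size - LABEL_SIZE]
        for label in label_offsets:
            for slot in range(NR_SLOTS):
                dev.seek(label + UB_ARRAY_OFFSET + slot * UB_SLOT_SIZE)
                raw = dev.read(8)
                if len(raw) < 8:
                    continue
                # the magic may be stored in either byte order
                for fmt in ('<Q', '>Q'):
                    if struct.unpack(fmt, raw)[0] == UB_MAGIC:
                        return label, slot
    return None
```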

This is why I decided to change your patch to look for VDEV_BOOT_MAGIC,
which I assumed was always present at the same place, but apparently
that is not the case.

Eric, do you know how this ZFS pool/filesystem was created?
Specifically, which Solaris/OpenSolaris version/build, or maybe zfs-fuse
version? Details about the partitioning scheme in use and whether this
is a root pool would also help a lot.

BTW, I also agree that it would be useful for ext3's mkfs to zero out
the first and last 512 KB of the partition, to get rid of the ZFS labels
and magic values. However, if mkfs detects these magic values, it would
be quite useful for it to refuse to format the partition, forcing the
user to specify a "--force" flag (like "zpool create" does), or at
least to ask for confirmation (if mkfs is being used interactively), to
avoid accidental data destruction.
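The clean-up step suggested above could be sketched as follows. This is purely illustrative (a real mkfs would prompt or require a force flag first, as argued above); the function name is hypothetical.

```python
# Minimal sketch of the wipe described above: zero the first and last
# 512 KB of a partition to destroy the ZFS labels and magic values.
# Illustrative only - a real tool would ask for confirmation first.
WIPE_SIZE = 512 * 1024

def wipe_zfs_labels(path):
    zeros = b'\0' * WIPE_SIZE
    with open(path, 'r+b') as dev:
        dev.seek(0, 2)
        size = dev.tell()
        dev.seek(0)
        dev.write(zeros)              # first 512 KB (labels 0 and 1)
        dev.seek(size - WIPE_SIZE)
        dev.write(zeros)              # last 512 KB (labels 2 and 3)
```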

If this is not done, then maybe leaving the ZFS labels intact could be
better, so that the user has a chance to recover (some or most of) their
data in case they made a mistake.

Cheers,
Ricardo


