Date:	Wed, 20 Aug 2008 16:29:59 +0300
From:	Sami Liedes <sliedes@...hut.fi>
To:	Jan Kara <jack@...e.cz>
Cc:	Andreas Dilger <adilger@....com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	bugme-daemon@...zilla.kernel.org, linux-ext4@...r.kernel.org
Subject: Re: [Bugme-new] [Bug 11266] New: unable to handle kernel paging
	request in ext2_free_blocks

On Wed, Aug 20, 2008 at 12:25:33PM +0200, Jan Kara wrote:
>   OK, thanks. Then we must somehow corrupt group descriptor block during
> the operation. Because I'm pretty sure it *is* corrupted - the oops
> is: unable to handle kernel paging request at c7e95ffc. If we look into
> registers, we see ECX has c7e96000 (which is probably bh->b_data). In
> the second oops it's exactly the same - ECX has c11e4000, the oops is at
> address c11e3ffc. So in both cases it is ECX-4. So somehow we managed to
> pass negative offset into ext2_test_bit(). But as Andreas pointed out,
> when we load descriptors into memory, we check that both bitmaps and
> inode table is in ext2_check_descriptors()... The other possibility
> would be that we managed to corrupts s_first_data_block in the
> superblock. Anyway, both possibilities don't look very likely. I'll try
> to reproduce the problem and maybe get more insight... How large is your
> filesystem BTW?

My FS is 10 MiB, and I've tried to make its contents diverse: it has
a copy of my /dev and a small partial copy of /usr/share/doc.
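
An image like that can be put together with a few commands. A minimal
sketch, assuming mke2fs is available and the loop mount is run as
root; the image name and the /usr/share/doc subtree copied are
illustrative, not the exact ones I used:

```shell
# Sketch: build a small, content-diverse ext2 image for the tests.
# Assumes mke2fs is installed and "mount -o loop" is run as root.
make_test_image() {
    img=${1:-fsdebug-hdc-ext2}
    dd if=/dev/zero of="$img" bs=1M count=10      # 10 MiB image
    mke2fs -F "$img"                              # format as ext2
    mkdir -p mnt
    mount -o loop "$img" mnt
    cp -a /dev/. mnt/dev/                         # a copy of /dev
    mkdir -p mnt/usr/share/doc
    cp -a /usr/share/doc/coreutils mnt/usr/share/doc/  # partial /usr/share/doc
    umount mnt
}
```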

I put the pristine (non-corrupted) filesystem at

   http://www.hut.fi/~sliedes/fsdebug-hdc-ext2.bz2

(520k compressed).

I've been thinking I should write a script to prepare the root
filesystem for the tests, but haven't got that far yet. Basically
(unless I've forgotten a step) I use debootstrap to bootstrap a
minimal Debian system, create some needed device nodes in it (hd[abc]
and ttyS0 at least), set the hostname to fstest, configure getty to
listen on ttyS0, copy the test script to /root/runtest (the script's
first parameter is the seed) and install some Debian packages (zzuf
and timeout at least).
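
Such a preparation script might look roughly like this. It's an
untested sketch of the steps above: the Debian release name, the
inittab line for getty, and the chroot install step are assumptions
filled in from memory, not the exact commands I ran:

```shell
# Sketch: prepare a minimal Debian root filesystem for the qemu runs.
# Assumes debootstrap is installed and $1 is the target directory.
prepare_rootfs() {
    ROOT=${1:?usage: prepare_rootfs <target-dir>}
    debootstrap --arch i386 etch "$ROOT"       # minimal Debian system (release assumed)
    # Classic IDE and serial device numbers:
    mknod "$ROOT/dev/hda" b 3 0
    mknod "$ROOT/dev/hdb" b 3 64
    mknod "$ROOT/dev/hdc" b 22 0
    mknod "$ROOT/dev/ttyS0" c 4 64
    echo fstest > "$ROOT/etc/hostname"
    # Make getty listen on ttyS0 so we can log in over the serial console
    # (the exact inittab line is an assumption).
    echo 'T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100' >> "$ROOT/etc/inittab"
    cp runtest "$ROOT/root/runtest"            # the script's first parameter is the seed
    chroot "$ROOT" apt-get install -y zzuf timeout
}
```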

Then I make four copies of the images and run four qemus in parallel,
since I have four CPUs, varying the first parameter (the initial
seed) of the runtest script: e.g. 0, 10M, 20M, 30M.
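
A hedged sketch of that launch step. The per-instance directory
layout is an assumption, and the qemu invocation just restates the
command given later in this mail; the seeds are spaced 10M apart as
above:

```shell
# Sketch: one fuzzing instance per CPU, each with private copies of
# the disk images and its own initial seed (0, 10M, 20M, 30M).
seed_for() {
    # Instance number -> initial seed for /root/runtest in the guest.
    if [ "$1" -eq 0 ]; then echo 0; else echo "$(( $1 * 10 ))M"; fi
}

launch_all() {
    for i in 0 1 2 3; do
        mkdir -p "instance$i"
        cp hda hdb hdc "instance$i/"          # private image copies
        ( cd "instance$i" &&
          qemu -kernel ../bzImage \
               -append 'root=/dev/hda console=ttyS0,115200n8' \
               -hda hda -hdb hdb -hdc hdc -nographic -serial pty ) &
    done
    wait
}
```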

I guess the approach might be useful for those who write the code too
(or people closer to them than me), since I've already found a fair
number of bugs with it in a fairly short period of time (#10871,
#10882, #10976, #11250, #11253, #11266 for ext[23] bugs, also one ext4
bug I hit when an ext3 fs was detected as ext4; search bugzilla for my
email to see the rest of the bugs).

The current root filesystem is 144M compressed (yes, there's a lot of
stuff in it that's irrelevant to the tests); I could upload it
somewhere if that helps. After that, running the tests is a matter of
running something like

   qemu -kernel bzImage -append 'root=/dev/hda console=ttyS0,115200n8' \
       -hda hda -hdb hdb -hdc hdc -nographic -serial pty

then attaching a screen session to the allocated pty, logging in as
root and running ./runtest $seed.
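
With -serial pty, qemu announces the allocated pty on startup with a
line like "char device redirected to /dev/pts/N". A small helper to
pull that path out for screen; note the message format is from memory
and may differ between qemu versions:

```shell
# Sketch: extract the pty path from qemu's startup output so a screen
# session can be attached to it without reading it off by hand.
parse_pty() {
    # Reads qemu's startup output on stdin, prints the first pty path.
    sed -n 's/.*char device redirected to \(\/dev\/pts\/[0-9]*\).*/\1/p' | head -n1
}

# Typical use (illustrative):
#   qemu ... -serial pty 2>&1 | tee qemu.log &
#   screen "$(parse_pty < qemu.log)"
```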

Also, the tests are not as comprehensive as I'd like. As an example,
some years ago I stress-tested reiser4 (when it was already
considered "ready") with pretty mundane operations (without
corrupting the fs) and it held up, but I got it to break badly on
three separate occasions, in separate ways, just by normally using
Debian's aptitude - the breakage was in flock(), and the current
tests don't exercise flock(). Other things to test would be at least
hard links and fifos...

The level of automation isn't quite what I'd like either; optimally
there would be a single script that takes the kernel image,
filesystem type and number of parallel instances as arguments and
runs the tests.
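
Such a driver could be shaped like this. Only a sketch: the argument
handling is real, but the inner launch step just restates the qemu
command above and the run directory naming is made up:

```shell
# Sketch: the single driver script wished for above. Takes the kernel
# image, filesystem type and number of parallel instances.
runtests() {
    if [ "$#" -ne 3 ]; then
        echo "usage: runtests <bzImage> <fstype> <instances>" >&2
        return 1
    fi
    kernel=$1; fstype=$2; n=$3
    i=0
    while [ "$i" -lt "$n" ]; do
        mkdir -p "run-$fstype-$i"             # one working dir per instance
        # Each instance would copy its images here and boot qemu, e.g.:
        #   qemu -kernel "$kernel" \
        #        -append 'root=/dev/hda console=ttyS0,115200n8' \
        #        -hda hda -hdb hdb -hdc hdc -nographic -serial pty
        i=$((i + 1))
    done
    wait                                      # wait for any backgrounded qemus
}
```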

	Sami