Date:	Thu, 16 Aug 2012 18:26:29 -0400
From:	Theodore Ts'o <tytso@....edu>
To:	Maciej Żenczykowski <maze@...gle.com>
Cc:	Fengguang Wu <fengguang.wu@...el.com>,
	Marti Raudsepp <marti@...fo.org>,
	Kernel hackers <linux-kernel@...r.kernel.org>,
	ext4 hackers <linux-ext4@...r.kernel.org>
Subject: Re: NULL pointer dereference in ext4_ext_remove_space on 3.5.1

On Thu, Aug 16, 2012 at 02:40:53PM -0700, Maciej Żenczykowski wrote:
> 
> This happened twice to me while moving data off of a ~1TB ext4 partition.
> The data portion was on a stripe raid across 2 ~500GB drives, the
> journal was on a relatively large partition (500MB?) on an SSD.
> (crypto and lvm were also involved).
> ...
> Perhaps just untarring a bunch of kernels onto an empty partition,
> filling it up, then deleting those kernels should be sufficient to
> repro this (untried).

Thanks, that's really helpful.   I can say that using a 4MB journal and
running fsstress is _not_ enough to trigger the problem.

Looking more closely at what might be needed to trigger the bug: 'i'
is left uninitialized when err is set to -EAGAIN, which happens when
ext4_ext_truncate_extend_restart() is unable to extend the journal
transaction.  That also means we need to be deleting a file large
enough that its blocks span multiple block groups (which is why we
need to extend the transaction, so we can modify more bitmap blocks),
at the point when there is no more room left in the journal, so we
have to close the current transaction and retry with a new journal
handle in a new transaction.
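
To make the shape of that concrete, here is a deliberately simplified
sketch of the pattern; the names are made up for illustration and this
is not the actual fs/ext4/extents.c code:

#include <errno.h>
#include <stddef.h>

struct level { void *hdr; };    /* stand-in for one ext4_ext_path level */

/* Stand-in for ext4_ext_truncate_extend_restart(): pretend the running
 * transaction is out of credits and cannot be extended. */
static int extend_restart(void)
{
        return -EAGAIN;
}

int remove_space(struct level *path, int depth, int many_groups)
{
        int i;                  /* cursor into path[], note: never initialized */
        int err = 0;

        /* A big enough file spans many block groups, so many bitmap
         * blocks get dirtied and we have to ask the journal for more
         * credits partway through the delete. */
        if (many_groups) {
                err = extend_restart();
                if (err == -EAGAIN)
                        goto cleanup;   /* 'i' never assigned on this branch */
        }

        for (i = depth; i >= 0; i--) {
                /* ... walk the extent tree and free blocks ... */
        }

cleanup:
        /* Reached via -EAGAIN, 'i' is garbage; indexing path[] with it
         * is how we end up chasing a NULL header and oopsing. */
        while (i >= 0 && path[i].hdr != NULL)
                i--;
        return err;
}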

So that implies that untarring a bunch of kernels probably won't be
sufficient, since the files will be too small.  What we will probably
need to do is fill a large file system with lots of large files, use a
small journal, and then try an rm -rf.
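
Something along these lines ought to do it (an untested sketch; the
file count, file size, and the small-journal mkfs step below are
arbitrary choices, not anything we have tried):

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define NFILES    512
#define FILESIZE  (2LL * 1024 * 1024 * 1024)    /* 2 GiB per file */

/* Run against a mount of a filesystem created with a small journal,
 * e.g. something like "mke2fs -t ext4 -J size=4 <dev>". */
int main(int argc, char **argv)
{
        const char *dir = argc > 1 ? argv[1] : ".";
        char path[4096];
        int n, created = 0;

        /* Phase 1: allocate large files until the filesystem fills up,
         * so each file's blocks span a lot of block groups. */
        for (n = 0; n < NFILES; n++) {
                int fd;

                snprintf(path, sizeof(path), "%s/big.%d", dir, n);
                fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
                if (fd < 0)
                        break;
                created++;
                if (posix_fallocate(fd, 0, FILESIZE) != 0) {
                        close(fd);
                        break;          /* out of space, good enough */
                }
                close(fd);
        }

        /* Phase 2: the "rm -rf" step, unlinking everything, which forces
         * the extent-removal path to keep restarting the transaction. */
        for (n = 0; n < created; n++) {
                snprintf(path, sizeof(path), "%s/big.%d", dir, n);
                unlink(path);
        }
        return 0;
}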

          	    	     	      	     - Ted
