Date:	Sat, 11 Dec 2010 21:16:18 -0600
From:	Jon Nelson <jnelson@...poni.net>
To:	"Ted Ts'o" <tytso@....edu>, Jon Nelson <jnelson@...poni.net>,
	Matt <jackdachef@...il.com>,
	Chris Mason <chris.mason@...cle.com>,
	Andi Kleen <andi@...stfloor.org>,
	Mike Snitzer <snitzer@...hat.com>,
	Milan Broz <mbroz@...hat.com>,
	linux-btrfs <linux-btrfs@...r.kernel.org>,
	dm-devel <dm-devel@...hat.com>,
	Linux Kernel <linux-kernel@...r.kernel.org>,
	htd <htd@...cy-poultry.org>, htejun <htejun@...il.com>,
	linux-ext4 <linux-ext4@...r.kernel.org>
Subject: Re: hunt for 2.6.37 dm-crypt+ext4 corruption? (was: Re: dm-crypt
 barrier support is effective)

On Sat, Dec 11, 2010 at 7:40 PM, Ted Ts'o <tytso@....edu> wrote:
> Yes, indeed.  Is this in the virtualized environment or on real
> hardware at this point?  And how many CPUs do you have configured in
> your virtualized environment, and how much memory?  Is having a
> certain number of CPUs critical for reproducing the problem?  Is
> constricting the amount of memory important?

Originally, I observed the behavior on real, physical hardware.

Since then, I have been able to reproduce it in VirtualBox and
qemu-kvm, with openSUSE 11.3 and Kubuntu. All of the more recent tests
have been with qemu-kvm.

I have one CPU configured in the environment and 512 MB of memory.
I have not done any memory-constriction tests whatsoever.
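
For what it's worth, the guest is launched with roughly the following
(the disk image path here is just a placeholder, not my exact setup):

    qemu-kvm -smp 1 -m 512 -drive file=/path/to/test.img,if=virtio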

> It'll be a lot easier if I can reproduce it locally, which is why I'm
> asking all of these questions.

On Sat, Dec 11, 2010 at 8:34 PM, Ted Ts'o <tytso@....edu> wrote:
> One experiment --- can you try this with the file system mounted with
> data=writeback, and see if the problem reproduces in that journalling
> mode?

That test is running now, first with encryption. I will report if it
shows problems. If it does, I will wait until I have seen the problem
a few times, and then move to a no-encryption test. Typically, I have
to run quite a few more iterations of that test before problems show
up (if they show up at all).
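
For reference, the encrypted run sets up the device and filesystem
along these lines (the device and mount point names are placeholders
for my actual ones):

    cryptsetup luksOpen /dev/vdb testcrypt
    mkfs.ext4 /dev/mapper/testcrypt
    mount -t ext4 -o data=writeback /dev/mapper/testcrypt /mnt/test

The no-encryption variant simply skips the cryptsetup step and mounts
the block device directly.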

> I want to rule out (if possible) journal_submit_inode_data_buffers()
> racing with mpage_da_submit_io().  I don't think that's the issue, but
> I'd prefer to do the experiment to make sure.  So if you can use a
> kernel and system configuration which triggers the problem, and then
> try changing the mount options to include data=writeback, and then
> rerun the test, and let me know if the problem still reproduces, I'd
> be really grateful.
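
Will do. To double-check on my end that the mode actually took effect,
I plan to verify it with something like the following; data=writeback
should show up in the mount line for the test filesystem:

    grep data= /proc/mounts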


-- 
Jon
