Date:	Mon, 21 Nov 2011 11:56:26 -0500
From:	Ted Ts'o <tytso@....edu>
To:	Hugh Dickins <hughd@...gle.com>
Cc:	Allison Henderson <achender@...ux.vnet.ibm.com>,
	Curt Wohlgemuth <curtw@...gle.com>, linux-ext4@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: Bug with "fix partial page writes"

On Sun, Nov 20, 2011 at 12:59:10PM -0800, Hugh Dickins wrote:
> We've seen no response to this, so Cc'ing Ted and linux-kernel,
> and I'll fill in some more detail below.

Hugh,

Thanks for reminding us about this.  Unfortunately bugzilla is still
down, so we'll have to track this via e-mail.

I mentioned this issue on the weekly ext4 call, and though there will
be a delay due to the Thanksgiving break, Allison said she would try
to take a look at this.  Hopefully other folks will as well.

> I did not reproduce either problem above with that.  Instead I found
> that backing out 02fac1297eb3 made fsx on 3.2-rc1 fail in a few minutes.
> But leaving 02fac1297eb3 in, fsx still failed in 20 minutes or an hour.
> On 3.1, fsx failed in a few minutes.  On 3.0, fsx failed in half an hour.
> On 2.6.39, fsx failed in a few minutes.  I had to go back to 2.6.38 for
> fsx to run successfully under memory pressure for more than two hours.
> 
> It looks as if ext4 testing has not been running fsx under memory
> pressure recently.  And although I didn't reproduce my main problems
> that way, it could well be that getting fsx to run reliably again
> under memory pressure will be the way to fix those problems.

Yes, I think we've been relying mostly on xfstests, and not
necessarily under extreme memory pressures.  Out of curiosity, what
sort of configuration were you using when you did the above tests?
(memory, swap, fs block size, etc.)  Was it the same as you did with
your make -j20 kernel stress test?  And were you using any special
fsx options?

I agree that we should add better memory pressure testing to ext4.
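For reference, a minimal sketch of one way such a memory-pressure run
could be set up, using a cgroup memory limit around fsx from xfstests.
The cgroup path, the 64M limit, the mount point, and the fsx flags
below are illustrative assumptions, not the configuration Hugh used:

```shell
# Hypothetical harness: run fsx inside a memory-limited cgroup
# (cgroup v1 memory controller assumed) to force reclaim and
# writeback activity while fsx exercises the file.
mkdir -p /sys/fs/cgroup/memory/fsxtest
echo $((64 * 1024 * 1024)) > /sys/fs/cgroup/memory/fsxtest/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/fsxtest/tasks

# -N: number of operations, -l: maximum file length, -S: random seed.
# Path to the fsx binary and the ext4 mount point are assumptions.
./ltp/fsx -N 100000 -l $((256 * 1024)) -S 1 /mnt/ext4/fsx_testfile
```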

Regards,

						- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
