Date:	Mon, 11 Jul 2016 09:44:54 -0400
From:	Mike Snitzer <snitzer@...hat.com>
To:	Matthias Dahl <ml_linux-kernel@...ary-island.eu>
Cc:	linux-mm@...ck.org, dm-devel@...hat.com,
	linux-kernel@...r.kernel.org
Subject: Re: [4.7.0rc6] Page Allocation Failures with dm-crypt

On Mon, Jul 11 2016 at  9:27am -0400,
Matthias Dahl <ml_linux-kernel@...ary-island.eu> wrote:

> Hello Mike...
> 
> On 2016-07-11 15:18, Mike Snitzer wrote:
> 
> >Something must explain the excessive nature of your leak but
> >it isn't a known issue.
> 
> Since I am currently setting up the new machine, all tests were
> performed with various live CD images (Fedora Rawhide, Gentoo, ...),
> and I saw the exact same behavior everywhere.
> 
> >Have you tried running with kmemleak enabled?
> 
> I would have to check if that is enabled on the live images, but even
> if it is, how would that work? The default scan interval is 10 min. If
> I fire up a dd, the memory is full within two seconds or so... and
> after that, the OOM killer kicks in and all hell breaks loose,
> unfortunately.

You can control when kmemleak scans.  See Documentation/kmemleak.txt

You could manually trigger a scan just after the dd is started.

But I doubt the live CDs have kmemleak compiled into their kernels.
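
If it is there, a rough sequence for an on-demand scan would be
something like the following (untested; assumes the running kernel was
built with CONFIG_DEBUG_KMEMLEAK and that debugfs is available):

  # confirm the live kernel has kmemleak support at all
  grep CONFIG_DEBUG_KMEMLEAK /boot/config-$(uname -r)
  # (or, on live images: zgrep CONFIG_DEBUG_KMEMLEAK /proc/config.gz)

  # debugfs is usually mounted already; if not:
  mount -t debugfs nodev /sys/kernel/debug

  # kick off the dd, then immediately force a scan by hand
  echo scan > /sys/kernel/debug/kmemleak

  # suspected leaks (with stack traces) accumulate here
  cat /sys/kernel/debug/kmemleak

Given that the OOM killer fires within seconds, triggering the scan
from a second shell right after starting the dd is probably the only
way to catch anything before the machine falls over.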

> I don't think this issue is unique to my setup. You could, if I am
> right, easily grab a Fedora Rawhide image and reproduce it there
> yourself. The only unique point here is my RAID10, which is an Intel
> Rapid Storage s/w RAID. I have no clue if that could indeed cause such
> a "bug", and if so, how.

What is your raid10's full stripe size?  Is your dd IO size of 512K
somehow triggering excess read-modify-write (R-M-W) cycles that are
exacerbating the problem?
