Date:	Mon, 15 Feb 2010 10:02:13 -0600
From:	"Chris Friesen" <cfriesen@...tel.com>
To:	balbir@...ux.vnet.ibm.com
CC:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	linux-mm@...ck.org
Subject: Re: tracking memory usage/leak in "inactive" field in /proc/meminfo?

On 02/13/2010 12:29 AM, Balbir Singh wrote:

> OK, I did not find the OOM kill output in dmesg. Is the OOM killer doing
> the right thing? If it kills the process we suspect is leaking memory,
> then it is working correctly :) If the leak is in kernel space, we
> need to examine the changes more closely.

I didn't include the oom killer message because it didn't seem important
in this case.  The oom killer took out the process with by far the
largest memory consumption, but as far as I know that process was not
the source of the leak.

It appears that the leak is in kernel space, given the unexplained pages
that sit on the active/inactive lists but are not accounted for in
buffers/cache/anon/swapcached.
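
For what it's worth, a rough sketch of one way to arrive at that
unexplained figure from /proc/meminfo (illustrative only; it assumes the
standard Active, Inactive, Buffers, Cached, AnonPages and SwapCached
fields, all reported in kB):

# Sketch: estimate memory on the active/inactive LRU lists that is not
# explained by buffers, page cache, anonymous pages or swap cache.
# Assumes the standard /proc/meminfo field names; all values are in kB.

def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key.strip()] = int(rest.split()[0])  # first field is kB
    return info

m = read_meminfo()
lru = m["Active"] + m["Inactive"]
explained = m["Buffers"] + m["Cached"] + m["AnonPages"] + m["SwapCached"]
print("LRU total:   %d kB" % lru)
print("explained:   %d kB" % explained)
print("unexplained: %d kB" % (lru - explained))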

> kernel modifications that we are unaware of make the problem harder to
> debug, since we have no way of knowing if they are the source of the
> problem.

Yes, I realize this.  I'm not expecting miracles, just hoping for some
guidance.


>> Committed_AS	12666508	12745200	7700484
> 
> Committed_AS shows a large change; does the process that gets killed
> use a lot of virtual memory (total_vm)? Please see my first question
> as well. Can you try to set
> 
> vm.overcommit_memory=2
> 
> and run the tests to see if you still get OOM killed.

As mentioned above, the process that was killed did indeed consume a lot
of memory.  I could try running with strict memory accounting, but would
you agree that, given the gradual but constant increase in the
unexplained pages described above, those unexplained pages currently
look like the more likely culprit?
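
If we do go the strict accounting route, here is a minimal sketch of the
kind of check I have in mind (it assumes vm.overcommit_memory=2 has been
set via sysctl, and uses the standard Committed_AS and CommitLimit
fields from /proc/meminfo, both in kB; illustrative only):

import time

# Sketch: with vm.overcommit_memory=2 (strict accounting), allocations
# should start failing once Committed_AS approaches CommitLimit, so the
# interesting number is the remaining headroom.  All values are in kB.

def meminfo_kb(key, path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])
    raise KeyError(key)

mode = open("/proc/sys/vm/overcommit_memory").read().strip()
print("overcommit_memory = %s" % mode)  # expect "2" for strict accounting

while True:
    committed = meminfo_kb("Committed_AS")
    limit = meminfo_kb("CommitLimit")
    print("Committed_AS %d kB / CommitLimit %d kB (headroom %d kB)"
          % (committed, limit, limit - committed))
    time.sleep(60)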

Chris
