Date:	Wed, 27 Apr 2011 00:46:51 -0700 (PDT)
From:	Christian Kujau <lists@...dbynature.de>
To:	Dave Chinner <david@...morbit.com>
cc:	LKML <linux-kernel@...r.kernel.org>, xfs@....sgi.com
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks

On Wed, 27 Apr 2011 at 12:26, Dave Chinner wrote:
> What this shows is that VFS inode cache memory usage increases until
> about the 550 sample mark before the VM starts to reclaim it with
> extreme prejudice. At that point, I'd expect the XFS inode cache to
> then shrink, and it doesn't. I've got no idea why either the

Do you remember any XFS changes past 2.6.38 that could be related to 
something like this?

Bisecting is pretty slow on this machine. Could I somehow try to run 
2.6.39-rc4 but w/o the XFS changes merged after 2.6.38? (Does someone know 
how to do this via git?)
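
Actually, maybe something like this would work (untested, and the
branch name is just my own; the idea is to put the 2.6.38 fs/xfs
tree on top of -rc4 instead of bisecting):

  git checkout -b xfs-2.6.38 v2.6.39-rc4
  # revert only fs/xfs to its 2.6.38 state
  git checkout v2.6.38 -- fs/xfs
  git commit -a -m "fs/xfs back to 2.6.38"

I suppose it won't build if XFS started relying on new core code
since 2.6.38, but if it does it would save a full bisect.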

> Can you check if there are any blocked tasks nearing OOM (i.e. "echo
> w > /proc/sysrq-trigger") so we can see if XFS inode reclaim is
> stuck somewhere?

Will do, tomorrow.
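If I read the sysrq docs right, 'w' dumps the blocked
(uninterruptible) tasks into the kernel log, so I'd capture it
with something like:

  echo w > /proc/sysrq-trigger
  dmesg > sysrq-w.txt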

Should I open a regression bug so we don't lose track of this?

Thanks,
Christian.
-- 
BOFH excuse #425:

stop bit received