Message-ID: <20110502121958.GA2978@dastard>
Date:	Mon, 2 May 2011 22:19:58 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Christian Kujau <lists@...dbynature.de>
Cc:	Markus Trippelsdorf <markus@...ppelsdorf.de>,
	LKML <linux-kernel@...r.kernel.org>, xfs@....sgi.com,
	minchan.kim@...il.com
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks

On Sun, May 01, 2011 at 09:59:35PM -0700, Christian Kujau wrote:
> On Sun, 1 May 2011 at 18:01, Dave Chinner wrote:
> > I really don't know why the xfs inode cache is not being trimmed. I
> > really, really need to know if the XFS inode cache shrinker is
> > getting blocked or not running - do you have those sysrq-w traces
> > when near OOM I asked for a while back?
> 
> I tried to generate those via /proc/sysrq-trigger (I don't have an
> F13/Print Screen key), but the OOM killer kicks in pretty fast - so
> fast that my debug script, which tried to trigger sysrq-w every
> second (see the sketch below), was too late and the machine was
> already dead:
> 
>    http://nerdbynature.de/bits/2.6.39-rc4/oom/
>    * messages-10.txt.gz
>    * slabinfo-10.txt.bz2
> 
> Timeline:
>   - du(1) started at 12:25:16 (and immediately listed
>     as "blocked" task)
>   - the last sysrq-w succeeded at 12:38:05, listing kswapd0
>   - du invoked oom-killer at 12:38:06
> 
> I'll keep trying...
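FWIW, a per-second sysrq-w loop like the one described above can be
as simple as the following sketch (an illustrative guess, not the
actual debug script used):

    # allow all sysrq functions, in case they are restricted
    echo 1 > /proc/sys/kernel/sysrq
    # dump blocked tasks (sysrq-w) once a second
    while true; do echo w > /proc/sysrq-trigger; sleep 1; done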
> 
> > scan only scanned 516 pages. I can't see it freeing many inodes
> > (there's >600,000 of them in memory) based on such a low page scan
> > number.
> 
> Not sure if this is related... this XFS filesystem I'm running du(1)
> on is ~1 TB in size, with 918K allocated inodes, if df(1) is correct:
> 
> # df -hi /mnt/backup/
> Filesystem            Inodes   IUsed   IFree IUse% Mounted on
> /dev/mapper/wdc1         37M    918K     36M    3% /mnt/backup
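Note that df's IUsed column counts inodes allocated on disk; the
in-memory XFS inode cache that the shrinker works on shows up as the
xfs_inode slab, e.g. (the same data as in the slabinfo dump linked
above):

    # in-memory XFS inode objects, active/total counts
    grep xfs_inode /proc/slabinfo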
> 
> > Maybe you should tweak /proc/sys/vm/vfs_cache_pressure to make it
> > reclaim vfs structures more rapidly. It might help
> 
> /proc/sys/vm/vfs_cache_pressure is currently set to '100'. You mean I
> should increase it? To... 150? 200? 1000?

Yes. Try two orders of magnitude as a start, i.e. change it to 10000...
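For example, either of these sets it at runtime:

    # raise reclaim pressure on the dentry/inode caches
    echo 10000 > /proc/sys/vm/vfs_cache_pressure
    # or, equivalently:
    sysctl -w vm.vfs_cache_pressure=10000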

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
