Date:	Wed, 4 May 2011 21:12:11 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Christian Kujau <lists@...dbynature.de>
Cc:	Markus Trippelsdorf <markus@...ppelsdorf.de>,
	LKML <linux-kernel@...r.kernel.org>, xfs@....sgi.com,
	minchan.kim@...il.com
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks

On Wed, May 04, 2011 at 05:36:15PM +1000, Dave Chinner wrote:
> On Tue, May 03, 2011 at 05:46:14PM -0700, Christian Kujau wrote:
> > And another one, please see the files marked with 15- here:
> > 
> >    https://trent.utfs.org/p/bits/2.6.39-rc4/oom/trace/
> > 
> > I tried to have more concise timestamps in each of these files, hope that 
> > helps. Sadly though, trace-cmd reports still segfaults on the tracefile.
> 
> Ok, that will be helpful. Also helpful is that I've (FINALLY!)
> reproduced this myself, and I think I can now reproduce it at will
> on a highmem i686 machine. I'll look into it more later tonight....

And here's a patch for you to try. It fixes the problem on my test
machine.....
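
For anyone who wants to see the failure mode without a highmem box,
here's a minimal userspace sketch of the cursor logic. The names and
structure are hypothetical and heavily simplified from
xfs_reclaim_inodes_ag() - a toy model of the per-AG cursor, not the
kernel code - but it shows why the cursor must be reset when the tag
lookup comes up empty:

```c
#include <assert.h>
#include <stddef.h>

#define NINODES 8

/* Toy model of one AG: an array of "inodes" with a reclaim tag,
 * plus the reclaim cursor that persists across reclaim passes
 * (pag->pag_ici_reclaim_cursor in the real code). */
struct ag {
	int reclaimable[NINODES];	/* 1 = tagged for reclaim */
	size_t cursor;			/* persists between passes */
};

/* One background reclaim pass over the AG. Returns the number of
 * inodes reclaimed. With fix_applied == 0, an empty lookup breaks
 * out without setting done, so the stale cursor is written back
 * and every later pass starts beyond all reclaimable inodes. */
static int reclaim_pass(struct ag *ag, int fix_applied)
{
	int done = 0;
	int reclaimed = 0;
	size_t first_index = ag->cursor;

	while (!done) {
		/* stand-in for the radix tree tag lookup */
		size_t i = first_index;
		while (i < NINODES && !ag->reclaimable[i])
			i++;
		if (i == NINODES) {		/* nr_found == 0 */
			if (fix_applied)
				done = 1;	/* the one-line fix */
			break;
		}
		ag->reclaimable[i] = 0;		/* reclaim the inode */
		reclaimed++;
		first_index = i + 1;
		if (first_index >= NINODES)
			done = 1;
	}
	/* Save the cursor for the next pass; reset it only when
	 * this pass ran to completion. */
	ag->cursor = done ? 0 : first_index;
	return reclaimed;
}
```

Without the fix, the pass that finds nothing leaves the cursor parked
past the end, so inodes tagged later at lower indices are never seen
again; with it, the cursor winds back to the start of the AG.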

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com

xfs: ensure reclaim cursor is reset correctly at end of AG

From: Dave Chinner <dchinner@...hat.com>

On a 32 bit highmem PowerPC machine, the XFS inode cache was growing
without bound and exhausting low memory causing the OOM killer to be
triggered. After some effort, the problem was reproduced on a 32 bit
x86 highmem machine.

The problem is that the per-ag inode reclaim index cursor was not
getting reset to the start of the AG if the radix tree tag lookup
found no more reclaimable inodes. Hence every further reclaim
attempt started at the same index beyond where any reclaimable
inodes lay, and no further background reclaim ever occurred from the
AG.

Without background inode reclaim, the VM-driven cache shrinker
simply cannot keep up with cache growth, and OOM is the result.

While the change that exposed the problem was the conversion of the
inode reclaim to use work queues for background reclaim, it was not
the cause of the bug. The bug was introduced when the cursor code
was added, just waiting for some weird configuration to strike....

Signed-off-by: Dave Chinner <dchinner@...hat.com>
---
 fs/xfs/linux-2.6/xfs_sync.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/fs/xfs/linux-2.6/xfs_sync.c b/fs/xfs/linux-2.6/xfs_sync.c
index 3253572..4e1f23a 100644
--- a/fs/xfs/linux-2.6/xfs_sync.c
+++ b/fs/xfs/linux-2.6/xfs_sync.c
@@ -936,6 +936,7 @@ restart:
 					XFS_LOOKUP_BATCH,
 					XFS_ICI_RECLAIM_TAG);
 			if (!nr_found) {
+				done = 1;
 				rcu_read_unlock();
 				break;
 			}
--
