Message-ID: <20100915122334.3fa7b35f@notabene>
Date:	Wed, 15 Sep 2010 12:23:34 +1000
From:	Neil Brown <neilb@...e.de>
To:	Rik van Riel <riel@...hat.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Wu Fengguang <fengguang.wu@...el.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	linux-kernel@...r.kernel.org,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: Deadlock possibly caused by too_many_isolated.

On Tue, 14 Sep 2010 20:30:18 -0400
Rik van Riel <riel@...hat.com> wrote:

> On 09/14/2010 07:11 PM, Neil Brown wrote:
> 
> > Index: linux-2.6.32-SLE11-SP1/mm/vmscan.c
> > ===================================================================
> > --- linux-2.6.32-SLE11-SP1.orig/mm/vmscan.c	2010-09-15 08:37:32.000000000 +1000
> > +++ linux-2.6.32-SLE11-SP1/mm/vmscan.c	2010-09-15 08:38:57.000000000 +1000
> > @@ -1106,6 +1106,11 @@ static unsigned long shrink_inactive_lis
> >   		/* We are about to die and free our memory. Return now. */
> >   		if (fatal_signal_pending(current))
> >   			return SWAP_CLUSTER_MAX;
> > +		if (!(sc->gfp_mask & __GFP_IO))
> > +			/* Not allowed to do IO, so mustn't wait
> > +			 * on processes that might try to
> > +			 */
> > +			return SWAP_CLUSTER_MAX;
> >   	}
> >
> >   	/*
> 
> Close.  We must also be sure that processes without __GFP_FS
> set in their gfp_mask do not wait on processes that do have
> __GFP_FS set.
> 
> Considering how many times we've run into a bug like this,
> I'm kicking myself for not having thought of it :(
> 

So maybe this?  I've added the test for __GFP_FS, and moved the test to before
the congestion_wait() call, on the basis that we really want to get back up the
stack and try the mempool ASAP.
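
For context, a hedged user-space C sketch of the caller-side loop this
ordering is meant to help; the helpers try_page_alloc(), take_from_reserve()
and wait_for_reserve(), and struct pool, are hypothetical stand-ins, not
kernel APIs.  If direct reclaim fails fast instead of sleeping in
congestion_wait(), control gets back to a loop like this and the request can
be satisfied from the pool's own reserve:

#include <stddef.h>

struct pool { int reserved; };

/* Stand-in for a page allocation that may enter direct reclaim and fail. */
static void *try_page_alloc(unsigned int gfp_mask)
{
	(void)gfp_mask;
	return NULL;
}

/* Stand-in for taking an element previously freed back into the pool. */
static void *take_from_reserve(struct pool *p)
{
	if (p->reserved > 0) {
		p->reserved--;
		return (void *)p;
	}
	return NULL;
}

/* Stand-in for sleeping until another user returns an element to the pool. */
static void wait_for_reserve(struct pool *p)
{
	p->reserved = 1;
}

static void *pool_alloc(struct pool *p, unsigned int gfp_mask)
{
	for (;;) {
		void *obj = try_page_alloc(gfp_mask);	/* may do direct reclaim */
		if (obj)
			return obj;
		obj = take_from_reserve(p);		/* fall back to the reserve */
		if (obj)
			return obj;
		wait_for_reserve(p);			/* wait for a freed element */
	}
}

int main(void)
{
	struct pool p = { .reserved = 1 };
	return pool_alloc(&p, 0) == NULL;
}

The sooner the reclaim path returns for an allocation that cannot do FS/IO,
the sooner a loop like this can stop depending on other reclaimers at all.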

Thanks,
NeilBrown



From: NeilBrown <neilb@...e.de>

mm: Avoid possible deadlock caused by too_many_isolated()


If too_many_isolated() returns true while we are performing direct reclaim,
we can end up waiting for other threads to complete their direct reclaim.
If those threads are allowed to enter the FS or IO paths to free memory, but
this thread is not, then it is possible that those threads will in turn be
waiting on this thread, and so we get a circular deadlock.

So: if too_many_isolated() returns true for an allocation that does not
permit FS or IO, fail shrink_inactive_list() rather than blocking.

Signed-off-by: NeilBrown <neilb@...e.de>

--- linux-2.6.32-SLE11-SP1.orig/mm/vmscan.c	2010-09-15 08:37:32.000000000 +1000
+++ linux-2.6.32-SLE11-SP1/mm/vmscan.c	2010-09-15 12:17:16.000000000 +1000
@@ -1101,6 +1101,12 @@ static unsigned long shrink_inactive_lis
 	int lumpy_reclaim = 0;
 
 	while (unlikely(too_many_isolated(zone, file, sc))) {
+		if ((sc->gfp_mask & GFP_IOFS) != GFP_IOFS)
+			/* Not allowed to do FS or IO, so mustn't wait
+			 * on processes that might try to do either.
+			 */
+			return SWAP_CLUSTER_MAX;
+
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
 
 		/* We are about to die and free our memory. Return now. */
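
A minimal user-space sketch (not kernel code) of how the GFP_IOFS test in the
hunk above behaves for common allocation flavours; the flag values are taken
from the 2.6.32-era __GFP_IO and __GFP_FS definitions, with GFP_IOFS as their
union:

#include <stdio.h>

#define __GFP_IO   0x40u
#define __GFP_FS   0x80u
#define GFP_IOFS   (__GFP_IO | __GFP_FS)

/* Mirrors the new test: bail out of reclaim instead of waiting. */
static int should_bail(unsigned int gfp_mask)
{
	return (gfp_mask & GFP_IOFS) != GFP_IOFS;
}

int main(void)
{
	/* GFP_KERNEL-style: IO and FS allowed, so it may keep waiting. */
	printf("IO|FS -> bail=%d\n", should_bail(__GFP_IO | __GFP_FS));
	/* GFP_NOFS-style: FS not allowed, so return SWAP_CLUSTER_MAX instead. */
	printf("IO    -> bail=%d\n", should_bail(__GFP_IO));
	/* GFP_NOIO-style: neither allowed, so return immediately as well. */
	printf("none  -> bail=%d\n", should_bail(0));
	return 0;
}

Only the first case (both IO and FS permitted) keeps looping through
congestion_wait(); GFP_NOFS- and GFP_NOIO-style allocations return at once.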
