Message-ID: <20100915023735.GA9175@localhost>
Date: Wed, 15 Sep 2010 10:37:35 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Neil Brown <neilb@...e.de>
Cc: Rik van Riel <riel@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: Deadlock possibly caused by too_many_isolated.
On Wed, Sep 15, 2010 at 10:23:34AM +0800, Neil Brown wrote:
> On Tue, 14 Sep 2010 20:30:18 -0400
> Rik van Riel <riel@...hat.com> wrote:
>
> > On 09/14/2010 07:11 PM, Neil Brown wrote:
> >
> > > Index: linux-2.6.32-SLE11-SP1/mm/vmscan.c
> > > ===================================================================
> > > --- linux-2.6.32-SLE11-SP1.orig/mm/vmscan.c 2010-09-15 08:37:32.000000000 +1000
> > > +++ linux-2.6.32-SLE11-SP1/mm/vmscan.c 2010-09-15 08:38:57.000000000 +1000
> > > @@ -1106,6 +1106,11 @@ static unsigned long shrink_inactive_lis
> > >                  /* We are about to die and free our memory. Return now. */
> > >                  if (fatal_signal_pending(current))
> > >                          return SWAP_CLUSTER_MAX;
> > > +                if (!(sc->gfp_mask & __GFP_IO))
> > > +                        /* Not allowed to do IO, so mustn't wait
> > > +                         * on processes that might try to
> > > +                         */
> > > +                        return SWAP_CLUSTER_MAX;
> > >          }
> > >
> > > /*
> >
> > Close. We must also be sure that processes without __GFP_FS
> > set in their gfp_mask do not wait on processes that do have
> > __GFP_FS set.
> >
> > Considering how many times we've run into a bug like this,
> > I'm kicking myself for not having thought of it :(
> >
>
> So maybe this? I've added the test for __GFP_FS, and moved the test before
> the congestion_wait on the basis that we really want to get back up the stack
> and try the mempool ASAP.
The patch may well fail the !__GFP_IO page allocation and then
quickly exhaust the mempool.
Another approach might be to let too_many_isolated() use much higher
thresholds for !__GFP_IO/FS allocations and lower ones for __GFP_IO/FS,
i.e. to allow at least nr2 NOIO/FS tasks to be blocked independent of
the IO/FS ones. Since NOIO vmscans typically complete fast, it would
then be very hard to accumulate enough NOIO processes to actually be
blocked.
    IO/FS tasks          NOIO/FS tasks                 full
    block here            block here                  LRU size
|-----------------|--------------------------|-----------------------|
|       nr1       |           nr2            |
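
For concreteness, a minimal sketch of that idea against the 2.6.32-era
too_many_isolated() might look like the following. Illustrative only:
the ">>= 3" shift (nr1 = inactive/8 for IO/FS tasks vs nr2 = inactive
for NOIO/FS ones) is an arbitrary choice for this sketch, not a tuned
value.

static int too_many_isolated(struct zone *zone, int file,
                             struct scan_control *sc)
{
        unsigned long inactive, isolated;

        /* kswapd must never be throttled here */
        if (current_is_kswapd())
                return 0;

        if (!scanning_global_lru(sc))
                return 0;

        if (file) {
                inactive = zone_page_state(zone, NR_INACTIVE_FILE);
                isolated = zone_page_state(zone, NR_ISOLATED_FILE);
        } else {
                inactive = zone_page_state(zone, NR_INACTIVE_ANON);
                isolated = zone_page_state(zone, NR_ISOLATED_ANON);
        }

        /*
         * IO/FS reclaimers block early, at nr1 = inactive/8 (arbitrary
         * illustrative threshold); NOIO/FS reclaimers only block at
         * nr2 = inactive, so they almost never wait behind IO/FS ones.
         */
        if ((sc->gfp_mask & GFP_IOFS) == GFP_IOFS)
                inactive >>= 3;

        return isolated > inactive;
}

The NOIO/FS headroom (nr2 - nr1) only needs to be large enough that the
short-lived NOIO scans can keep making progress; whatever they isolate
is put back quickly.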
Thanks,
Fengguang
>
> From: NeilBrown <neilb@...e.de>
>
> mm: Avoid possible deadlock caused by too_many_isolated()
>
>
> If too_many_isolated() returns true while performing direct reclaim we can
> end up waiting for other threads to complete their direct reclaim.
> If those threads are allowed to enter the FS or IO to free memory, but
> this thread is not, then it is possible that those threads will be waiting on
> this thread and so we get a circular deadlock.
>
> So: if too_many_isolated() returns true when the allocation did not permit FS
> or IO, fail shrink_inactive_list rather than blocking.
>
> Signed-off-by: NeilBrown <neilb@...e.de>
>
> --- linux-2.6.32-SLE11-SP1.orig/mm/vmscan.c 2010-09-15 08:37:32.000000000 +1000
> +++ linux-2.6.32-SLE11-SP1/mm/vmscan.c 2010-09-15 12:17:16.000000000 +1000
> @@ -1101,6 +1101,12 @@ static unsigned long shrink_inactive_lis
>          int lumpy_reclaim = 0;
>
>          while (unlikely(too_many_isolated(zone, file, sc))) {
> +                if ((sc->gfp_mask & GFP_IOFS) != GFP_IOFS)
> +                        /* Not allowed to do IO, so mustn't wait
> +                         * on processes that might try to
> +                         */
> +                        return SWAP_CLUSTER_MAX;
> +
>                  congestion_wait(BLK_RW_ASYNC, HZ/10);
>
>                  /* We are about to die and free our memory. Return now. */