Message-Id: <201001192137.35232.rjw@sisk.pl>
Date: Tue, 19 Jan 2010 21:37:35 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Oliver Neukum <oliver@...kum.org>
Cc: Maxim Levitsky <maximlevitsky@...il.com>,
linux-pm@...ts.linux-foundation.org,
LKML <linux-kernel@...r.kernel.org>,
"linux-mm" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [RFC][PATCH] PM: Force GFP_NOIO during suspend/resume (was: Re: [linux-pm] Memory allocations in .suspend became very unreliable)
On Tuesday 19 January 2010, Oliver Neukum wrote:
> On Monday, 18 January 2010 21:41:49, Rafael J. Wysocki wrote:
> > On Monday 18 January 2010, Oliver Neukum wrote:
> > > On Sunday, 17 January 2010 14:55:55, Rafael J. Wysocki wrote:
> > > > +void mm_force_noio_allocations(void)
> > > > +{
> > > > + /* Wait for all slowpath allocations using the old mask to complete */
> > > > + down_write(&gfp_allowed_mask_sem);
> > > > + saved_gfp_allowed_mask = gfp_allowed_mask;
> > > > + gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
> > > > + up_write(&gfp_allowed_mask_sem);
> > > > +}
> > >
> > > In addition to this you probably want to exhaust all memory reserves
> > > before you fail a memory allocation.
> >
> > I'm not really sure what you mean.
>
> Forget it, it was foolish. Instead there's a different problem.
> Suppose we are tight on memory. The problem is that we must not
> exhaust all memory. If we are really out of memory, we may be unable
> to satisfy memory allocations in resume().
That doesn't make things any worse than they are already. If we block on
I/O forever during resume, the gross result is pretty much the same.

That said, Maxim reported that in his test case the mm subsystem apparently
attempted to use I/O even though there was plenty of free memory available, and
I'd like to prevent _that_ from happening.
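
For reference, here is a minimal sketch of the complementary restore helper
and of how the restricted mask could be filtered into an allocation path.
The names gfp_allowed_mask, saved_gfp_allowed_mask and gfp_allowed_mask_sem
are taken from the quoted patch; the helper name mm_allow_io_allocations()
and the alloc_pages_restricted() wrapper are illustrative assumptions, not
part of the actual patch.

#include <linux/gfp.h>
#include <linux/rwsem.h>

/* Declarations from the quoted patch (assumed to live in page_alloc.c) */
extern struct rw_semaphore gfp_allowed_mask_sem;
extern gfp_t gfp_allowed_mask, saved_gfp_allowed_mask;

/* Undo mm_force_noio_allocations() once resume has completed */
void mm_allow_io_allocations(void)
{
	down_write(&gfp_allowed_mask_sem);
	gfp_allowed_mask = saved_gfp_allowed_mask;
	up_write(&gfp_allowed_mask_sem);
}

/*
 * Illustrative allocation wrapper: the caller's flags are filtered
 * against gfp_allowed_mask, so __GFP_IO and __GFP_FS are dropped while
 * suspend/resume is in progress and the allocator cannot start new I/O.
 */
struct page *alloc_pages_restricted(gfp_t gfp_mask, unsigned int order)
{
	struct page *page;

	down_read(&gfp_allowed_mask_sem);
	gfp_mask &= gfp_allowed_mask;
	page = alloc_pages(gfp_mask, order);
	up_read(&gfp_allowed_mask_sem);

	return page;
}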
Rafael