Message-Id: <201002212147.07987.rjw@sisk.pl>
Date: Sun, 21 Feb 2010 21:47:07 +0100
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Alan Jenkins <sourcejedi.lkml@...glemail.com>
Cc: Mel Gorman <mel@....ul.ie>, hugh.dickins@...cali.co.uk,
Pavel Machek <pavel@....cz>,
pm list <linux-pm@...ts.linux-foundation.org>,
"linux-kernel" <linux-kernel@...r.kernel.org>,
Kernel Testers List <kernel-testers@...r.kernel.org>
Subject: Re: s2disk hang update
On Friday 19 February 2010, Alan Jenkins wrote:
> On 2/18/10, Rafael J. Wysocki <rjw@...k.pl> wrote:
> > On Thursday 18 February 2010, Alan Jenkins wrote:
> >> On 2/17/10, Rafael J. Wysocki <rjw@...k.pl> wrote:
> >> > On Wednesday 17 February 2010, Alan Jenkins wrote:
> >> >> On 2/16/10, Rafael J. Wysocki <rjw@...k.pl> wrote:
> >> >> > On Tuesday 16 February 2010, Alan Jenkins wrote:
> >> >> >> On 2/16/10, Alan Jenkins <sourcejedi.lkml@...glemail.com> wrote:
> >> >> >> > On 2/15/10, Rafael J. Wysocki <rjw@...k.pl> wrote:
> >> >> >> >> On Tuesday 09 February 2010, Alan Jenkins wrote:
> >> >> >> >>> Perhaps I spoke too soon. I see the same hang if I run too many
> >> >> >> >>> applications. The first hibernation fails with "not enough swap"
> >> >> >> >>> as expected, but the second or third attempt hangs (with the
> >> >> >> >>> same backtrace as before).
> >> >> >> >>>
> >> >> >> >>> The patch definitely helps though. Without the patch, I see a
> >> >> >> >>> hang the first time I try to hibernate with too many
> >> >> >> >>> applications running.
> >> >> >> >>
> >> >> >> >> Well, I have an idea.
> >> >> >> >>
> >> >> >> >> Can you try to apply the appended patch in addition and see if
> >> >> >> >> that helps?
> >> >> >> >>
> >> >> >> >> Rafael
> >> >> >> >
> >> >> >> > It doesn't seem to help.
> >> >> >>
> >> >> >> To be clear: It doesn't stop the hang when I hibernate with too many
> >> >> >> applications.
> >> >> >>
> >> >> >> It does stop the same hang in a different case though.
> >> >> >>
> >> >> >> 1. boot with init=/bin/bash
> >> >> >> 2. run s2disk
> >> >> >> 3. cancel the s2disk
> >> >> >> 4. repeat steps 2&3
> >> >> >>
> >> >> >> With the patch, I can run tens of iterations with no hang.
> >> >> >> Without the patch, it soon hangs (in disable_nonboot_cpus(), as
> >> >> >> always).
> >> >> >>
> >> >> >> That's what happens on 2.6.33-rc7. On 2.6.30, there is no problem.
> >> >> >> On 2.6.31 and 2.6.32 I don't get a hang, but dmesg shows an
> >> >> >> allocation failure after a couple of iterations ("kthreadd: page
> >> >> >> allocation failure. order:1, mode:0xd0"). It looks like it might
> >> >> >> be the same stop_machine thread allocation failure that causes
> >> >> >> the hang.
> >> >> >
> >> >> > Have you tested it alone or on top of the previous one? If you've
> >> >> > tested it alone, please apply the appended one in addition to it
> >> >> > and retest.
> >> >> >
> >> >> > Rafael
> >> >>
> >> >> I did test with both patches applied together -
> >> >>
> >> >> 1. [Update] MM / PM: Force GFP_NOIO during suspend/hibernation
> >> >> and resume
> >> >> 2. "reducing the number of pages that we're going to keep
> >> >> preallocated by 20%"
> >> >
> >> > In that case you can try to reduce the number of preallocated pages
> >> > even more, i.e. change "/ 5" to "/ 2" (for example) in the second
> >> > patch.
> >>
> >> It still hangs if I try to hibernate a couple of times with too many
> >> applications.
> >
> > Hmm. I guess I asked that before, but is this a 32-bit or 64-bit system
> > and how much RAM is there in the box?
> >
> > Rafael
>
> EeePC 701. 32-bit. 512MB RAM. 350MB swap file, on a "first-gen" SSD.
Hmm. I'd try to make free_unnecessary_pages() free all of the preallocated
pages and see what happens.
Rafael