Message-Id: <200710052328.14423.rjw@sisk.pl>
Date: Fri, 5 Oct 2007 23:28:13 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: Pavel Machek <pavel@....cz>
Cc: Stefan Richter <stefanr@...6.in-berlin.de>,
Ben Collins <ben.collins@...ntu.com>,
kernel list <linux-kernel@...r.kernel.org>,
linux1394-devel@...ts.sourceforge.net, Ingo Molnar <mingo@...e.hu>
Subject: Re: regression: firewire causes oops during system suspend
On Friday, 5 October 2007 09:08, Pavel Machek wrote:
> Hi!
>
> > > This commit is certainly OK, as it should merely preserve status quo.
> > > Also note that Pavel wrote in his initial post that the problem became
> > > apparent way after -rc1. Full quote:
> > >
> > > | I noticed empty suspend stopped working around 2.6.23-rc4, and it is
> > > | still present in 2.6.23-rc6. To reproduce
> > > |
> > > | swapoff -a
> > > | echo disk > /sys/power/state
> > > | echo disk > /sys/power/state
> > > |
> > > | Unsetting
> > > |
> > > | CONFIG_IEEE1394=y
> > > |
> > > | solves the problem.
> >
> > Yes.
> >
> > I thought that there might be a later change that exposed a bug in it.
> >
> > Hm. Pavel, can you do
> >
> > # echo test > /sys/power/disk
> > # echo disk > /sys/power/state
> >
> > and see what happens?
>
> root@amd:~# echo test > /sys/power/disk
> root@amd:~# echo disk > /sys/power/state
> root@amd:~# echo disk > /sys/power/state
>
> Produces nothing interesting... I also did a few hibernation/resume
> cycles, and everything seems to work ok. But when I do swapoff and then
> try to hibernate, it fails on the second try. Weird.
Weird indeed, but it means that some state left behind by the first attempt
causes things to break on the second one.
To summarize: after running swapoff, the suspending of devices fails (100% of
the time) during the second attempt to hibernate, unless CONFIG_IEEE1394 is
unset?

What happens for CONFIG_IEEE1394=m?
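
For reference, the whole sequence as one shell sketch (the dmesg step, the
swapon at the end, and the module names for the =m case are my additions
for illustration, not taken from the report above):

    swapoff -a
    echo disk > /sys/power/state    # first attempt, works
    echo disk > /sys/power/state    # second attempt, fails here
    dmesg | tail                    # look for the oops

    # For CONFIG_IEEE1394=m, one could presumably also unload the old
    # stack between the two attempts and retest, e.g.:
    modprobe -r ohci1394 ieee1394
    echo disk > /sys/power/state
    swapon -a                       # restore swap when done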
Greetings,
Rafael