Message-ID: <20110113184626.GA31800@thunk.org>
Date: Thu, 13 Jan 2011 13:46:26 -0500
From: Ted Ts'o <tytso@....edu>
To: Heiko Carstens <heiko.carstens@...ibm.com>
Cc: Sebastian Ott <sebott@...ux.vnet.ibm.com>,
"linux-ext4@...r.kernel.org development" <linux-ext4@...r.kernel.org>,
LKML Kernel <linux-kernel@...r.kernel.org>,
pm list <linux-pm@...ts.linux-foundation.org>
Subject: Re: Oops while going into hibernate
On Thu, Jan 13, 2011 at 02:36:12PM +0100, Heiko Carstens wrote:
>
> Eeeek... this seems to be an architecture-specific bug that is only present
> on s390.
> The dirty bit for user-space pages is stored in the PTEs on all
> architectures but s390. On s390, however, it is stored in the storage key
> that exists per _physical_ page.
> So what we should have done when implementing suspend/resume on s390 is
> save the storage key for each page, write it to the suspend device, and
> upon resume restore the storage key contents for each physical page.
> The code that would do that is missing... Hence _all_ pages of the resumed
> image are dirty after they have been copied to their location.
> *ouch*
>
> Will fix.
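
The missing piece would presumably look something like the minimal
sketch below, using the existing s390 helpers page_get_storage_key()
and page_set_storage_key(); the function names and the flat vmalloc
buffer are illustrative assumptions on my part, not the eventual fix:

#include <linux/bootmem.h>	/* max_pfn */
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <asm/page.h>

/* Hypothetical: one storage-key byte per pfn, saved before the
 * snapshot is written and restored after the image is copied back. */
static unsigned char *skey_buf;

int storage_key_save(void)
{
	unsigned long pfn;

	skey_buf = vmalloc(max_pfn);
	if (!skey_buf)
		return -ENOMEM;
	for (pfn = 0; pfn < max_pfn; pfn++)
		if (pfn_valid(pfn))
			skey_buf[pfn] = page_get_storage_key(pfn << PAGE_SHIFT);
	return 0;
}

void storage_key_restore(void)
{
	unsigned long pfn;

	for (pfn = 0; pfn < max_pfn; pfn++)
		if (pfn_valid(pfn))
			page_set_storage_key(pfn << PAGE_SHIFT, skey_buf[pfn]);
	vfree(skey_buf);
	skey_buf = NULL;
}

Since the vmalloc'ed buffer is itself part of the snapshot image, the
saved keys survive the power-off and are available again at the point
where the restore runs.
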
Glad you found the root cause. If you don't think you can get this
fixed before -rc2 or -rc3, I can fairly quickly add some checks to
ext4 to detect this condition, issue a warning, and then return an
error code from the ->writepages() hook. (That error will then
promptly be ignored by the writeback code, since, hey, what is it
going to do with an error, but that's a discussion for another forum.)
Would that be helpful?
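
Roughly, the kind of check I have in mind is sketched below; the
helper name is made up and the exact condition to test would need
more thought:

#include <linux/buffer_head.h>
#include <linux/mm.h>

/*
 * Purely illustrative: a page the VM claims is dirty, but for which
 * ext4 never dirtied any attached buffers, is a state the filesystem
 * should never see, so report it.
 */
static int ext4_page_spuriously_dirty(struct page *page)
{
	struct buffer_head *head, *bh;

	if (!page_has_buffers(page))
		return 1;	/* dirty page with no buffers at all */
	bh = head = page_buffers(page);
	do {
		if (buffer_dirty(bh))
			return 0;	/* the fs really did dirty this page */
	} while ((bh = bh->b_this_page) != head);
	return 1;	/* page dirty but no dirty buffers: suspicious */
}

The ->writepages() path would call this for each dirty page, WARN_ONCE()
on a hit, and bail out with -EIO.
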
I'm still a bit concerned about the call I found in the hibernate code
that marks the pages' PTEs dirty, but I accept the fact that removing
it doesn't solve the s390 crash. It still seems wrong to me, and
hopefully someone from linux-pm can look at it more closely.
- Ted