Message-Id: <1233620590.18767.138.camel@pasglop>
Date: Tue, 03 Feb 2009 11:23:09 +1100
From: Benjamin Herrenschmidt <benh@...nel.crashing.org>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Jesse Barnes <jesse.barnes@...el.com>,
Andreas Schwab <schwab@...e.de>, Len Brown <lenb@...nel.org>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: PCI PM: Restore standard config registers of all devices early
> But we can't tell no one is holding the mutex in question, AFAICS.
>
> I'm afraid we'd really need a special "no mutexes, no GFP_KERNEL allocations"
> code path for that.
No, the mutex shouldn't be held already; if it is, you're probably
already in deep trouble. I.e., you probably want to enforce that
anyway: it wouldn't be very sane to suspend the machine while ACPI was
in the -middle- of interpreting something.
I.e., you should have something that ensures, before you turn
interrupts off, that nobody else is inside the AML interpreter. You
already know there are no other CPUs at that point, so it's just a
matter of making sure no other process has scheduled away while
holding that mutex; a check along the lines sketched below would do.
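
Something like the following, as a rough sketch only (the names are
made up for illustration and this is not the real ACPICA locking;
"aml_mutex" stands in for whatever lock actually serializes the
interpreter):

#include <linux/mutex.h>

static DEFINE_MUTEX(aml_mutex);

/*
 * Called from the suspend path, after the non-boot CPUs are gone but
 * before interrupts go off.  With a single CPU left, a held mutex can
 * only mean some task scheduled away in the middle of interpreting
 * AML, so fail the suspend instead of deadlocking later.
 */
static int acpi_pm_check_interpreter_idle(void)
{
	if (mutex_is_locked(&aml_mutex))
		return -EBUSY;
	return 0;
}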
The easy way to do that is to take the mutex yourself and then set a
flag so that the interpreter stops trying to take or release it
itself, maybe keyed off the global system state, roughly as in the
sketch below. Then release the mutex on resume.
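
Again purely a sketch, reusing the made-up "aml_mutex" from above
rather than the real ACPICA code paths:

static bool acpi_in_system_suspend;

/* Suspend side: wait out any in-flight AML, then own the lock. */
void acpi_pm_lock_interpreter(void)
{
	mutex_lock(&aml_mutex);
	acpi_in_system_suspend = true;
}

/* Resume side: hand the interpreter back to normal locking. */
void acpi_pm_unlock_interpreter(void)
{
	acpi_in_system_suspend = false;
	mutex_unlock(&aml_mutex);
}

/*
 * In the interpreter's enter/exit paths: during suspend/resume the
 * single remaining context already owns the mutex, so skip the lock
 * rather than self-deadlock.
 */
void aml_interpreter_enter(void)
{
	if (!acpi_in_system_suspend)
		mutex_lock(&aml_mutex);
}

void aml_interpreter_exit(void)
{
	if (!acpi_in_system_suspend)
		mutex_unlock(&aml_mutex);
}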
All of these are issues that exist today. I.e., regardless of that
powermac problem, which is unrelated (see other posts), I think these
things need to be sorted out cleanly or suspend will not be as rock
solid as it could/should be. It's several orders of magnitude better
than it was, I agree, but I believe we have here a few reasonably
simple things we can/should do to make it more robust.
Cheers,
Ben.