Message-ID: <AANLkTimQ_yCV5Xdwd_D1zFM1oom2D3ZIBeSczJh4LN1T@mail.gmail.com>
Date: Mon, 14 Jun 2010 12:00:29 -0400
From: Alex Deucher <alexdeucher@...il.com>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: Dave Airlie <airlied@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-pm@...ts.linux-foundation.org, linux-kernel@...r.kernel.org,
dri-devel <dri-devel@...ts.freedesktop.org>
Subject: Re: [Regression, post-2.6.34] Hibernation broken on machines with
radeon/KMS and r300

On Mon, Jun 14, 2010 at 10:53 AM, Rafael J. Wysocki <rjw@...k.pl> wrote:
> Alex, Dave,
>
> I'm afraid hibernation is broken on all machines using radeon/KMS with r300
> after commit ce8f53709bf440100cb9d31b1303291551cf517f
> (drm/radeon/kms/pm: rework power management). At least, I'm able to reproduce
> the symptom, which is that the machine hangs hard around the point where an
> image is created (probably during the device thaw phase), on two different
> boxes with r300 (the output of lspci from one of them is attached for
> reference; the other is an HP nx6325).
>
> Suspend to RAM appears to work fine at least on one of the affected boxes.
>
> Unfortunately, the commit above changes a lot of code, so it's not easy to
> figure out what's wrong, and I haven't had time to look into the details of
> this failure. However, it looks like you use the .suspend() and .resume()
> callbacks as .freeze() and .thaw(), which may not be 100% correct (in fact,
> it appears the "legacy" PCI suspend/resume path is used, which is no longer
> recommended).
>
Does it work any better after Dave's last drm pull request? With the
latest changes, power management should not be a factor unless it's
explicitly enabled via sysfs.

Alex
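
[Editorial note: for readers wanting to check the sysfs toggle Alex refers to — the radeon pm rework of this era exposed power-management controls under the card's device node. The exact paths and accepted values may differ by kernel version; verify on your system before writing to them.]

```shell
# Inspect the current power-management method (e.g. "profile" or "dynpm")
cat /sys/class/drm/card0/device/power_method

# Explicitly enable profile-based power management (as root)
echo profile > /sys/class/drm/card0/device/power_method
echo low > /sys/class/drm/card0/device/power_profile
```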
> Thanks,
> Rafael
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/