Message-Id: <20190108211133.32564-3-lyude@redhat.com>
Date: Tue, 8 Jan 2019 16:11:28 -0500
From: Lyude Paul <lyude@...hat.com>
To: dri-devel@...ts.freedesktop.org, amd-gfx@...ts.freedesktop.org
Cc: Harry Wentland <harry.wentland@....com>,
Jerry Zuo <Jerry.Zuo@....com>, stable@...r.kernel.org,
Leo Li <sunpeng.li@....com>,
Alex Deucher <alexander.deucher@....com>,
Christian König <christian.koenig@....com>,
"David (ChunMing) Zhou" <David1.Zhou@....com>,
David Airlie <airlied@...ux.ie>,
Daniel Vetter <daniel@...ll.ch>,
Tony Cheng <Tony.Cheng@....com>,
David Francis <David.Francis@....com>,
Nicholas Kazlauskas <nicholas.kazlauskas@....com>,
Mikita Lipski <mikita.lipski@....com>,
Bhawanpreet Lakha <Bhawanpreet.Lakha@....com>,
Anthony Koo <Anthony.Koo@....com>, linux-kernel@...r.kernel.org
Subject: [PATCH v2 2/3] drm/amdgpu: Don't fail resume process if resuming atomic state fails

This is an ugly one unfortunately. Currently, all DRM drivers supporting
atomic modesetting will save the state that userspace had set before
suspending, then attempt to restore that state on resume. This probably
worked very well at one point, like many other things, until DP MST came
into the picture. While it's easy to restore state on normal display
connectors that were disconnected during suspend regardless of their
state post-resume, this can't really be done with MST because setting up
a downstream sink requires performing sideband transactions between the
source and the MST hub, sending out the ACT packets, etc.
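
For reference, the generic pattern most atomic drivers follow looks roughly
like the sketch below. This is purely illustrative (the "dev" and
"cached_state" names are just placeholders) and uses the core DRM helpers
rather than amdgpu's actual suspend/resume paths:

	/* Illustrative sketch only; not amdgpu's code. */
	struct drm_atomic_state *cached_state;
	int ret;

	/* Suspend: duplicate the current atomic state, then shut the
	 * displays down. */
	cached_state = drm_atomic_helper_suspend(dev);
	if (IS_ERR(cached_state))
		return PTR_ERR(cached_state);

	/* ...system sleeps; an MST hub may need to be brought up again
	 * via sideband transactions before its sinks can light up... */

	/* Resume: try to recommit the state userspace had before suspend.
	 * With MST downstream sinks this commit can legitimately fail. */
	ret = drm_atomic_helper_resume(dev, cached_state);
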
Because of this, there isn't really a guarantee that we can restore the
atomic state we had before suspend once we've resumed. This sucks pretty
bad, but so far I haven't run into any compositors that this actually
causes serious issues with. Most compositors will notice the hotplug we
send afterwards, and then reprobe state.

Since nouveau and i915 also don't fail the suspend/resume process due to
failing to restore the atomic state, let's make amdgpu match this
behavior. Better to resume the GPU properly than to stop the process
halfway because of a potentially unavoidable atomic commit failure.
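
For comparison, that behavior amounts to something like the following
sketch (illustrative only, not copied from i915 or nouveau): log the
failed commit instead of propagating it, and let resume finish:

	/* Illustrative: keep resuming even if the cached state won't apply. */
	ret = drm_atomic_helper_resume(dev, cached_state);
	if (ret)
		DRM_ERROR("Restoring old state failed with %d\n", ret);
	return 0;
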
Eventually, we'll have a real fix for this problem on the DRM level. But
we've got some more important low-hanging fruit to deal with first.

Signed-off-by: Lyude Paul <lyude@...hat.com>
Reviewed-by: Harry Wentland <harry.wentland@....com>
Cc: Jerry Zuo <Jerry.Zuo@....com>
Cc: <stable@...r.kernel.org> # v4.15+
---
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 3f326a2c513b..a3e65e457348 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -912,7 +912,6 @@ static int dm_resume(void *handle)
 	struct drm_plane_state *new_plane_state;
 	struct dm_plane_state *dm_new_plane_state;
 	enum dc_connection_type new_connection_type = dc_connection_none;
-	int ret;
 	int i;
 
 	/* power on hardware */
@@ -985,13 +984,13 @@ static int dm_resume(void *handle)
 		}
 	}
 
-	ret = drm_atomic_helper_resume(ddev, dm->cached_state);
+	drm_atomic_helper_resume(ddev, dm->cached_state);
 
 	dm->cached_state = NULL;
 
 	amdgpu_dm_irq_resume_late(adev);
 
-	return ret;
+	return 0;
 }
 
 /**
--
2.20.1