Message-ID: <621ccd83-033d-472b-9072-266ab6978058@amd.com>
Date: Tue, 3 Sep 2024 17:28:11 -0400
From: Harry Wentland <harry.wentland@....com>
To: tjakobi@...h.uni-bielefeld.de, Leo Li <sunpeng.li@....com>,
Rodrigo Siqueira <Rodrigo.Siqueira@....com>,
Alex Deucher <alexander.deucher@....com>,
Christian König <christian.koenig@....com>,
"Pan, Xinhui" <Xinhui.Pan@....com>, David Airlie <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
Mario Limonciello <mario.limonciello@....com>
Cc: amd-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] drm/amd/display: Avoid race between dcn35_set_drr()
and dc_state_destruct()

On 2024-09-02 05:40, tjakobi@...h.uni-bielefeld.de wrote:
> From: Tobias Jakobi <tjakobi@...h.uni-bielefeld.de>
>
> dc_state_destruct() nulls the resource context of the DC state. The pipe
> context passed to dcn35_set_drr() is a member of this resource context.
>
> If dc_state_destruct() is called in parallel with the IRQ processing
> (which calls dcn35_set_drr() at some point), we can end up using
> already-nulled function callback fields of struct stream_resource.
>
> The logic in dcn35_set_drr() already tries to avoid this by checking tg
> against NULL. But if the nulling happens between the NULL check and the
> next access, we still race.
>
> Avoid this by first copying tg to a local variable and then using that
> variable for all operations. This should work as long as nobody frees
> the resource pool where the timing generators live.
>
> Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3142
> Fixes: 06ad7e164256 ("drm/amd/display: Destroy DC context while keeping DML and DML2")
> Signed-off-by: Tobias Jakobi <tjakobi@...h.uni-bielefeld.de>

Reviewed-by: Harry Wentland <harry.wentland@....com>

Harry

> ---
> .../amd/display/dc/hwss/dcn35/dcn35_hwseq.c | 20 +++++++++++--------
> 1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
> index dcced89c07b3..4e77728dac10 100644
> --- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
> +++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
> @@ -1370,7 +1370,13 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
>  	params.vertical_total_mid_frame_num = adjust.v_total_mid_frame_num;
>  
>  	for (i = 0; i < num_pipes; i++) {
> -		if ((pipe_ctx[i]->stream_res.tg != NULL) && pipe_ctx[i]->stream_res.tg->funcs) {
> +		/* dc_state_destruct() might null the stream resources, so fetch tg
> +		 * here first to avoid a race condition. The lifetime of the pointee
> +		 * itself (the timing_generator object) is not a problem here.
> +		 */
> +		struct timing_generator *tg = pipe_ctx[i]->stream_res.tg;
> +
> +		if ((tg != NULL) && tg->funcs) {
>  			struct dc_crtc_timing *timing = &pipe_ctx[i]->stream->timing;
>  			struct dc *dc = pipe_ctx[i]->stream->ctx->dc;
>  
> @@ -1383,14 +1389,12 @@ void dcn35_set_drr(struct pipe_ctx **pipe_ctx,
>  					num_frames = 2 * (frame_rate % 60);
>  				}
>  			}
> -			if (pipe_ctx[i]->stream_res.tg->funcs->set_drr)
> -				pipe_ctx[i]->stream_res.tg->funcs->set_drr(
> -					pipe_ctx[i]->stream_res.tg, &params);
> +			if (tg->funcs->set_drr)
> +				tg->funcs->set_drr(tg, &params);
>  			if (adjust.v_total_max != 0 && adjust.v_total_min != 0)
> -				if (pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control)
> -					pipe_ctx[i]->stream_res.tg->funcs->set_static_screen_control(
> -						pipe_ctx[i]->stream_res.tg,
> -						event_triggers, num_frames);
> +				if (tg->funcs->set_static_screen_control)
> +					tg->funcs->set_static_screen_control(
> +						tg, event_triggers, num_frames);
>  		}
>  	}
>  }
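
The race described in the commit message is a time-of-check/time-of-use
bug on a shared pointer: the field is re-read after the NULL check, so a
concurrent dc_state_destruct() can null it in between. Below is a minimal,
self-contained sketch of that pattern and of the load-once fix the patch
applies; the names and types are hypothetical and are not the actual DC
code.

struct tg;

struct tg_funcs {
	void (*set_drr)(struct tg *tg, int value);
};

struct tg {
	const struct tg_funcs *funcs;
};

struct stream_res {
	struct tg *tg;	/* another thread may null this field */
};

/* Racy: res->tg is dereferenced again after the NULL check, so the
 * concurrent nulling can land between the check and the use.
 */
static void set_drr_racy(struct stream_res *res)
{
	if (res->tg && res->tg->funcs && res->tg->funcs->set_drr)
		res->tg->funcs->set_drr(res->tg, 0);
}

/* Fixed: snapshot the pointer once and only dereference the local copy.
 * This is safe as long as the object tg points at stays alive, which the
 * commit message argues holds for the timing generators.
 */
static void set_drr_fixed(struct stream_res *res)
{
	struct tg *tg = res->tg;

	if (tg && tg->funcs && tg->funcs->set_drr)
		tg->funcs->set_drr(tg, 0);
}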