Message-ID: <ebea664b-6f53-440a-aa2f-4d4991fb5e1f@quicinc.com>
Date: Fri, 14 Feb 2025 16:25:08 -0800
From: Jessica Zhang <quic_jesszhan@...cinc.com>
To: Jun Nie <jun.nie@...aro.org>, Rob Clark <robdclark@...il.com>,
	"Abhinav Kumar" <quic_abhinavk@...cinc.com>,
	Dmitry Baryshkov <dmitry.baryshkov@...aro.org>,
	Sean Paul <sean@...rly.run>,
	Marijn Suijten <marijn.suijten@...ainline.org>,
	David Airlie <airlied@...il.com>, "Simona Vetter" <simona@...ll.ch>
CC: <linux-arm-msm@...r.kernel.org>, <dri-devel@...ts.freedesktop.org>,
<freedreno@...ts.freedesktop.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v5 04/15] drm/msm/dpu: polish log for resource allocation
On 1/17/2025 8:00 AM, Jun Nie wrote:
> Resource allocation is more likely to fail in complex usage cases,
> such as the quad-pipe case, than in existing usage cases.
> The current implementation prints a resource type ID on failure, but
> the raw ID number is not explicit enough to make clear which resource
> caused the failure, so add a table mapping each type ID to a
> human-readable resource name and use it in the error print.
>
> Signed-off-by: Jun Nie <jun.nie@...aro.org>
Reviewed-by: Jessica Zhang <quic_jesszhan@...cinc.com>
> ---
> drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c | 23 +++++++++++++++++++----
> 1 file changed, 19 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
> index a67ad58acd99f..24e085437039e 100644
> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_rm.c
> @@ -802,6 +802,21 @@ void dpu_rm_release_all_sspp(struct dpu_global_state *global_state,
> ARRAY_SIZE(global_state->sspp_to_crtc_id), crtc_id);
> }
>
> +static char *dpu_hw_blk_type_name[] = {
> + [DPU_HW_BLK_TOP] = "TOP",
> + [DPU_HW_BLK_SSPP] = "SSPP",
> + [DPU_HW_BLK_LM] = "LM",
> + [DPU_HW_BLK_CTL] = "CTL",
> + [DPU_HW_BLK_PINGPONG] = "pingpong",
> + [DPU_HW_BLK_INTF] = "INTF",
> + [DPU_HW_BLK_WB] = "WB",
> + [DPU_HW_BLK_DSPP] = "DSPP",
> + [DPU_HW_BLK_MERGE_3D] = "merge_3d",
> + [DPU_HW_BLK_DSC] = "DSC",
> + [DPU_HW_BLK_CDM] = "CDM",
> + [DPU_HW_BLK_MAX] = "unknown",
> +};
> +
> /**
> * dpu_rm_get_assigned_resources - Get hw resources of the given type that are
> * assigned to this encoder
> @@ -862,13 +877,13 @@ int dpu_rm_get_assigned_resources(struct dpu_rm *rm,
> continue;
>
> if (num_blks == blks_size) {
> - DPU_ERROR("More than %d resources assigned to enc %d\n",
> - blks_size, enc_id);
> + DPU_ERROR("More than %d %s assigned to enc %d\n",
> + blks_size, dpu_hw_blk_type_name[type], enc_id);
> break;
> }
> if (!hw_blks[i]) {
> - DPU_ERROR("Allocated resource %d unavailable to assign to enc %d\n",
> - type, enc_id);
> + DPU_ERROR("%s unavailable to assign to enc %d\n",
> + dpu_hw_blk_type_name[type], enc_id);
> break;
> }
> blks[num_blks++] = hw_blks[i];
>
> --
> 2.34.1
>
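For anyone reading along, the designated-initializer name-table pattern the
patch relies on boils down to something like the standalone sketch below.
The names here (blk_type, blk_name, BLK_*) are illustrative only, not the
actual dpu_rm.c symbols:

	#include <stdio.h>

	/* Hypothetical block-type enum mirroring the DPU_HW_BLK_* style. */
	enum blk_type { BLK_LM, BLK_CTL, BLK_DSC, BLK_MAX };

	/* Designated initializers keep each string tied to its enum value,
	 * even if the enum is later reordered or extended. */
	static const char * const blk_name[] = {
		[BLK_LM]  = "LM",
		[BLK_CTL] = "CTL",
		[BLK_DSC] = "DSC",
		[BLK_MAX] = "unknown",
	};

	int main(void)
	{
		enum blk_type t = BLK_CTL;

		/* Clamp out-of-range values so the print can never index
		 * past the end of the table. */
		if (t > BLK_MAX)
			t = BLK_MAX;
		printf("More than %d %s assigned to enc %d\n",
		       4, blk_name[t], 31);
		return 0;
	}

The nice property of the designated-initializer form, as used in the patch,
is that the table stays in sync with the enum by construction rather than by
the order of the entries.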