Message-ID: <188e0168-a45b-6e38-ac49-8b0fd41e9110@quicinc.com>
Date: Fri, 8 Dec 2023 08:33:48 -0800
From: Abhinav Kumar <quic_abhinavk@...cinc.com>
To: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
CC: <freedreno@...ts.freedesktop.org>, Rob Clark <robdclark@...il.com>,
"Sean Paul" <sean@...rly.run>,
Marijn Suijten <marijn.suijten@...ainline.org>,
"David Airlie" <airlied@...il.com>,
Daniel Vetter <daniel@...ll.ch>,
<dri-devel@...ts.freedesktop.org>, <quic_jesszhan@...cinc.com>,
<quic_parellan@...cinc.com>, <linux-arm-msm@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2 14/16] drm/msm/dpu: reserve cdm blocks for writeback in
case of YUV output
On 12/8/2023 3:54 AM, Dmitry Baryshkov wrote:
> On Fri, 8 Dec 2023 at 07:07, Abhinav Kumar <quic_abhinavk@...cinc.com> wrote:
>>
>> Reserve CDM blocks for writeback if the format of the output fb
>> is YUV. At the moment, the reservation is done only for writeback
>> but can easily be extended by relaxing the checks once other
>> interfaces are ready to output YUV.
>>
>> changes in v2:
>> - use needs_cdm from topology struct
>> - drop fb related checks from atomic_mode_set()
>
> It looks like this should be squashed with patch 11. The 'unbind
> CDM' doesn't really make sense without this patch. We need to allocate
> the CDM first, before touching it.
>
The way I was thinking about it was that patch 11 just completes
dpu_encoder_phys_cleanup(), and yes, it was intentionally kept ahead
because the cleanup will not kick in until hw_cdm is assigned.
Then, this patch only handles the reservation/assignment of hw_cdm when
needed. That was the motivation behind this split.
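
To make the ordering concrete, the cleanup hunk in patch 11 is guarded
on hw_cdm, roughly along these lines (an illustrative sketch from
memory, not the literal hunk; the op names assume the CDM ops and the
CTL flush helper added earlier in this series):

	/* in dpu_encoder_phys_cleanup() */
	if (phys_enc->hw_cdm) {
		/* detach the CDM from its pingpong and queue its flush;
		 * this whole block is skipped while hw_cdm is NULL
		 */
		if (phys_enc->hw_cdm->ops.bind_pingpong_blk)
			phys_enc->hw_cdm->ops.bind_pingpong_blk(phys_enc->hw_cdm,
								PINGPONG_NONE);
		if (phys_enc->hw_ctl->ops.update_pending_flush_cdm)
			phys_enc->hw_ctl->ops.update_pending_flush_cdm(phys_enc->hw_ctl,
								       phys_enc->hw_cdm->idx);
	}

So until this patch starts assigning cur_master->hw_cdm in
atomic_mode_set(), that block is dead code, which is why the cleanup
patch can safely land first.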
>>
>> Signed-off-by: Abhinav Kumar <quic_abhinavk@...cinc.com>
>> ---
>> drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 27 +++++++++++++++++++++
>> 1 file changed, 27 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> index 862912727925..a576e3e62429 100644
>> --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
>> @@ -16,6 +16,7 @@
>> #include <drm/drm_crtc.h>
>> #include <drm/drm_file.h>
>> #include <drm/drm_probe_helper.h>
>> +#include <drm/drm_framebuffer.h>
>>
>> #include "msm_drv.h"
>> #include "dpu_kms.h"
>> @@ -583,6 +584,7 @@ static int dpu_encoder_virt_atomic_check(
>> struct drm_display_mode *adj_mode;
>> struct msm_display_topology topology;
>> struct dpu_global_state *global_state;
>> + struct drm_framebuffer *fb;
>> struct drm_dsc_config *dsc;
>> int i = 0;
>> int ret = 0;
>> @@ -623,6 +625,22 @@ static int dpu_encoder_virt_atomic_check(
>>
>> topology = dpu_encoder_get_topology(dpu_enc, dpu_kms, adj_mode, crtc_state, dsc);
>>
>> + /*
>> + * Use CDM only for writeback at the moment as other interfaces cannot handle it.
>> + * If writeback itself cannot handle CDM for some reason, it will fail in its atomic_check()
>> + * earlier.
>> + */
>> + if (dpu_enc->disp_info.intf_type == INTF_WB && conn_state->writeback_job) {
>> + fb = conn_state->writeback_job->fb;
>> +
>> + if (fb && DPU_FORMAT_IS_YUV(to_dpu_format(msm_framebuffer_format(fb))))
>> + topology.needs_cdm = true;
>> + if (topology.needs_cdm && !dpu_enc->cur_master->hw_cdm)
>> + crtc_state->mode_changed = true;
>> + else if (!topology.needs_cdm && dpu_enc->cur_master->hw_cdm)
>> + crtc_state->mode_changed = true;
>> + }
>> +
>> /*
>> * Release and Allocate resources on every modeset
>> * Dont allocate when active is false.
>> @@ -1063,6 +1081,15 @@ static void dpu_encoder_virt_atomic_mode_set(struct drm_encoder *drm_enc,
>>
>> dpu_enc->dsc_mask = dsc_mask;
>>
>> + if (dpu_enc->disp_info.intf_type == INTF_WB && conn_state->writeback_job) {
>> + struct dpu_hw_blk *hw_cdm = NULL;
>> +
>> + dpu_rm_get_assigned_resources(&dpu_kms->rm, global_state,
>> + drm_enc->base.id, DPU_HW_BLK_CDM,
>> + &hw_cdm, 1);
>> + dpu_enc->cur_master->hw_cdm = hw_cdm ? to_dpu_hw_cdm(hw_cdm) : NULL;
>> + }
>> +
>> cstate = to_dpu_crtc_state(crtc_state);
>>
>> for (i = 0; i < num_lm; i++) {
>> --
>> 2.40.1
>>
>
>
> --
> With best wishes
>
> Dmitry