Message-ID: <97bfce90-46d9-0ab0-e437-ce1e43b01b52@gmail.com>
Date: Mon, 31 May 2021 21:25:59 +0300
From: Dmitry Osipenko <digetx@...il.com>
To: Thierry Reding <thierry.reding@...il.com>
Cc: Jonathan Hunter <jonathanh@...dia.com>,
Matt Merhar <mattmerhar@...tonmail.com>,
Peter Geis <pgwipeout@...il.com>,
Nicolas Chauvet <kwizart@...il.com>,
Michał Mirosław <mirq-linux@...e.qmqm.pl>,
linux-tegra@...r.kernel.org, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v17 1/2] drm/tegra: dc: Support memory bandwidth
management
31.05.2021 17:13, Thierry Reding wrote:
> On Tue, May 11, 2021 at 02:27:08AM +0300, Dmitry Osipenko wrote:
>> The display controller (DC) performs isochronous memory transfers and
>> thus requires a minimum memory bandwidth to be fulfilled, otherwise
>> framebuffer data can't be fetched fast enough, resulting in a DC
>> data-FIFO underflow followed by visual corruption.
>>
>> The Memory Controller drivers provide a facility for memory bandwidth
>> management via the interconnect API. Let's wire up interconnect API
>> support in the DC driver in order to fix the distorted display output
>> on T30 Ouya, T124 TK1 and other Tegra devices.
>>
>> Tested-by: Peter Geis <pgwipeout@...il.com> # Ouya T30
>> Tested-by: Matt Merhar <mattmerhar@...tonmail.com> # Ouya T30
>> Tested-by: Nicolas Chauvet <kwizart@...il.com> # PAZ00 T20 and TK1 T124
>> Signed-off-by: Dmitry Osipenko <digetx@...il.com>
>> ---
>> drivers/gpu/drm/tegra/Kconfig | 1 +
>> drivers/gpu/drm/tegra/dc.c | 352 ++++++++++++++++++++++++++++++++++
>> drivers/gpu/drm/tegra/dc.h | 14 ++
>> drivers/gpu/drm/tegra/drm.c | 14 ++
>> drivers/gpu/drm/tegra/hub.c | 3 +
>> drivers/gpu/drm/tegra/plane.c | 116 +++++++++++
>> drivers/gpu/drm/tegra/plane.h | 15 ++
>> 7 files changed, 515 insertions(+)
>>
> [...]
>> diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
> [...]
>> @@ -2011,7 +2143,215 @@ static void tegra_crtc_atomic_flush(struct drm_crtc *crtc,
>> value = tegra_dc_readl(dc, DC_CMD_STATE_CONTROL);
>> }
>>
>> +static bool tegra_plane_is_cursor(const struct drm_plane_state *state)
>> +{
>> + const struct tegra_dc_soc_info *soc = to_tegra_dc(state->crtc)->soc;
>> + const struct drm_format_info *fmt = state->fb->format;
>> + unsigned int src_w = drm_rect_width(&state->src) >> 16;
>> + unsigned int dst_w = drm_rect_width(&state->dst);
>> +
>> + if (state->plane->type != DRM_PLANE_TYPE_CURSOR)
>> + return false;
>> +
>> + if (soc->supports_cursor)
>> + return true;
>> +
>> + if (src_w != dst_w || fmt->num_planes != 1 || src_w * fmt->cpp[0] > 256)
>> + return false;
>
> Technically there could be some random overlay window that matches these
> conditions and is erroneously detected as being a cursor.
The "random overlay window" with the DRM_PLANE_TYPE_CURSOR could happen
only for the oldest Tegras. It's not a problem at all since everything
will work properly anyways because we skip only the small sized plane
that doesn't contribute to the BW requirement.
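
To illustrate (a rough sketch with made-up names, not the actual patch
code): the per-CRTC bandwidth accumulation simply skips cursor-like
planes, so a misdetected overlay only means its negligible bandwidth
isn't accounted:

static unsigned long tegra_dc_total_peak_bw(struct drm_crtc_state *crtc_state)
{
	struct drm_plane_state *plane_state;
	struct drm_plane *plane;
	unsigned long total = 0;

	drm_atomic_crtc_state_for_each_plane_state(plane, plane_state, crtc_state) {
		/* small cursor-like planes don't add to the BW requirement */
		if (tegra_plane_is_cursor(plane_state))
			continue;

		total += to_tegra_plane_state(plane_state)->peak_memory_bandwidth;
	}

	return total;
}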
> I wonder if we
> should add a field to a plane that marks it as being used as cursor for
> the cases where we don't support a hardware cursor.
I don't think we have information about how a plane is used in the
driver. The DRM core should know this, not tegra-drm.
It's impractical to worry about this case, so I think it's better to
leave this part as-is for now, if you don't mind.
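
For reference, what you suggest would presumably be something like this
(hypothetical sketch, the field name is made up):

struct tegra_plane_state {
	struct drm_plane_state base;
	...
	/* hypothetical: set by whoever knows the plane backs a cursor */
	bool used_as_cursor;
};

with tegra_plane_is_cursor() then checking the flag instead of guessing
from the plane geometry.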
> [...]
>> diff --git a/drivers/gpu/drm/tegra/dc.h b/drivers/gpu/drm/tegra/dc.h
>> index 29f19c3c6149..db10af097033 100644
>> --- a/drivers/gpu/drm/tegra/dc.h
>> +++ b/drivers/gpu/drm/tegra/dc.h
>> @@ -15,6 +15,8 @@
>>
>> struct tegra_output;
>>
>> +#define TEGRA_DC_LEGACY_PLANES_NUM 7
>> +
>> struct tegra_dc_state {
>> struct drm_crtc_state base;
>>
>> @@ -23,6 +25,8 @@ struct tegra_dc_state {
>> unsigned int div;
>>
>> u32 planes;
>> +
>> + unsigned long plane_peak_bw[TEGRA_DC_LEGACY_PLANES_NUM];
>
> Why can we not store this peak bandwidth value within the plane state? I
> know that this isn't exactly per-plane data because it depends on the
> state of other planes, but that doesn't really prevent the value to live
> within the plane state. The plane state is, after all, part of the
> global state, and hence any such state needs to be considered within the
> context of that global atomic state.
>
> I suppose that might make it a little bit more difficult to get at the
> data, but I think the end result would be less confusing than having an
> array here with potentially unused fields. It would also get rid of the
> need to look up planes by their per-CRTC index.
I stored the peak_bw in the CRTC state because it felt more natural to
me somehow. It shouldn't be a problem to move it into the plane state.
I'll prepare a new version shortly.
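
The move would presumably look something like this (rough sketch, the
new field name is illustrative only):

struct tegra_plane_state {
	struct drm_plane_state base;
	...
	u32 peak_memory_bandwidth;
	u32 avg_memory_bandwidth;

	/* overlap-adjusted peak BW, previously in tegra_dc_state's array */
	u32 total_peak_memory_bandwidth;
};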
...
>> static const struct drm_mode_config_helper_funcs
>> diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
>> index bfae8a02f55b..f1bbc5991854 100644
>> --- a/drivers/gpu/drm/tegra/hub.c
>> +++ b/drivers/gpu/drm/tegra/hub.c
>> @@ -358,6 +358,9 @@ static int tegra_shared_plane_atomic_check(struct drm_plane *plane,
>> struct tegra_dc *dc = to_tegra_dc(new_plane_state->crtc);
>> int err;
>>
>> + plane_state->peak_memory_bandwidth = 0;
>> + plane_state->avg_memory_bandwidth = 0;
>> +
>
> Since ICC isn't supported yet on Tegra186 and later, does it even make
> sense to initialize these?
I added it for consistency; right now it's not needed on Tegra186+.