Message-ID: <YDNHHk+9uIMGKZVF@pendragon.ideasonboard.com>
Date: Mon, 22 Feb 2021 07:54:38 +0200
From: Laurent Pinchart <laurent.pinchart@...asonboard.com>
To: Jagan Teki <jagan@...rulasolutions.com>
Cc: Maxime Ripard <mripard@...nel.org>, Chen-Yu Tsai <wens@...e.org>,
Jernej Skrabec <jernej.skrabec@...l.net>,
dri-devel@...ts.freedesktop.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-amarula@...rulasolutions.com,
Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
Thomas Zimmermann <tzimmermann@...e.de>
Subject: Re: [PATCH v3 5/7] drm: bridge: Queue the bridge chain instead of
stacking
Hi Jagan,
Thank you for the patch.
On Mon, Feb 15, 2021 at 01:11:00AM +0530, Jagan Teki wrote:
> drm_bridge_attach stacks the bridge chain, so the bridge that is
> pushed last gets its pre_enable called first from
> drm_atomic_bridge_chain_pre_enable.
>
> This gives the slave bridge a chance to run its pre_enable before
> its host bridge's pre_enable, which suits the usual host-to-slave
> device model, such as a DSI host with a panel slave.
>
> For fully featured bridge drivers, however, the host bridge
> pre_enable contains all the host-related clock, reset and PHY
> configuration that needs to be initialized before the slave can
> send commands or configuration to its host.
>
> Queue the bridge chain instead of stacking it, so that the bridges
> enqueued first get a chance to run first.
First of all, won't this break all the drivers that currently rely on
the existing behaviour?
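For reference, here is my understanding of the current behaviour, as a
rough, untested sketch based on drm_bridge_attach() and
drm_atomic_bridge_chain_pre_enable(), with hypothetical bridges a, b
and c attached in hardware order:

	/* Attach order: encoder -> a -> b -> c. */
	list_add(&a->chain_node, &encoder->bridge_chain);  /* list: a */
	list_add(&b->chain_node, &a->chain_node);           /* list: a, b */
	list_add(&c->chain_node, &b->chain_node);            /* list: a, b, c */

	/*
	 * The pre_enable helper walks the list in reverse, so it runs
	 * c, b, a: the bridge attached last (typically the one closest
	 * to the panel) goes first.
	 */
	list_for_each_entry_reverse(iter, &encoder->bridge_chain, chain_node)
		if (iter->funcs->pre_enable)
			iter->funcs->pre_enable(iter);

That reverse walk is the order existing bridge drivers have been
written against.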
> Cc: Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>
> Cc: Thomas Zimmermann <tzimmermann@...e.de>
> Signed-off-by: Jagan Teki <jagan@...rulasolutions.com>
> ---
> Changes for v3:
> - new patch
>
> drivers/gpu/drm/drm_bridge.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
> index 64f0effb52ac..e75d1a080c55 100644
> --- a/drivers/gpu/drm/drm_bridge.c
> +++ b/drivers/gpu/drm/drm_bridge.c
> @@ -191,9 +191,9 @@ int drm_bridge_attach(struct drm_encoder *encoder, struct drm_bridge *bridge,
> bridge->encoder = encoder;
>
> if (previous)
> - list_add(&bridge->chain_node, &previous->chain_node);
> + list_add_tail(&bridge->chain_node, &previous->chain_node);
> else
> - list_add(&bridge->chain_node, &encoder->bridge_chain);
> + list_add_tail(&bridge->chain_node, &encoder->bridge_chain);
Then, this will create a really weird order, as the list will contain
bridges in the reverse order. Assuming three bridges, A, B and C, which
are connected at the hardware level as follows:
Encoder -> A -> B -> C
the list would contain
Encoder -> C -> B -> A
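To spell out how that happens with this patch applied (rough sketch,
untested): list_add_tail() inserts the new entry just before the given
head, here the previous bridge, so

	list_add_tail(&A->chain_node, &encoder->bridge_chain);  /* list: A */
	list_add_tail(&B->chain_node, &A->chain_node);           /* list: B, A */
	list_add_tail(&C->chain_node, &B->chain_node);            /* list: C, B, A */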
This isn't intuitive, and if you want to reverse the order in which
bridge operations are called, it would be better to do so in the
operations themselves, for instance replacing
list_for_each_entry_reverse() with list_for_each_entry() in
drm_atomic_bridge_chain_pre_enable(). Still, this will likely break
drivers that depend on the existing order, so I don't think that's an
acceptable solution as-is.
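Concretely, the core of that loop would become something like this
(rough, untested sketch that ignores the atomic_pre_enable path and the
iter == bridge early exit, which would need adjusting too):

	/* In drm_atomic_bridge_chain_pre_enable(). */
	list_for_each_entry(iter, &encoder->bridge_chain, chain_node) {
		if (iter->funcs->pre_enable)
			iter->funcs->pre_enable(iter);
	}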
>
> if (bridge->funcs->attach) {
> ret = bridge->funcs->attach(bridge, flags);
--
Regards,
Laurent Pinchart