Message-ID: <CAD=FV=XbCggB6kVwE8jj3DO3GWXj=_LeXatST3S9h91kh32nEw@mail.gmail.com>
Date: Fri, 15 Apr 2022 17:12:10 -0700
From: Doug Anderson <dianders@...omium.org>
To: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
Cc: dri-devel <dri-devel@...ts.freedesktop.org>,
Robert Foss <robert.foss@...aro.org>,
Hsin-Yi Wang <hsinyi@...omium.org>,
Abhinav Kumar <quic_abhinavk@...cinc.com>,
Sankeerth Billakanti <quic_sbillaka@...cinc.com>,
Philip Chen <philipchen@...omium.org>,
Stephen Boyd <swboyd@...omium.org>,
Daniel Vetter <daniel@...ll.ch>,
David Airlie <airlied@...ux.ie>,
Sam Ravnborg <sam@...nborg.org>,
Thierry Reding <thierry.reding@...il.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 4/6] drm/panel-edp: Take advantage of
is_hpd_asserted() in struct drm_dp_aux
Hi,
On Fri, Apr 15, 2022 at 3:12 PM Dmitry Baryshkov
<dmitry.baryshkov@...aro.org> wrote:
>
> On Sat, 16 Apr 2022 at 00:17, Doug Anderson <dianders@...omium.org> wrote:
> >
> > Hi,
> >
> > On Thu, Apr 14, 2022 at 5:51 PM Dmitry Baryshkov
> > <dmitry.baryshkov@...aro.org> wrote:
> > >
> > > On 09/04/2022 05:36, Douglas Anderson wrote:
> > > > Let's add support for being able to read the HPD pin even if it's
> > > > hooked directly to the controller. This will allow us to get more
> > > > accurate delays and also lets us take away the waiting in the AUX transfer
> > > > functions of the eDP controller drivers.
> > > >
> > > > Signed-off-by: Douglas Anderson <dianders@...omium.org>
> > > > ---
> > > >
> > > > drivers/gpu/drm/panel/panel-edp.c | 37 ++++++++++++++++++++++++++-----
> > > > 1 file changed, 31 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
> > > > index 1732b4f56e38..4a143eb9544b 100644
> > > > --- a/drivers/gpu/drm/panel/panel-edp.c
> > > > +++ b/drivers/gpu/drm/panel/panel-edp.c
> > > > @@ -417,6 +417,19 @@ static int panel_edp_get_hpd_gpio(struct device *dev, struct panel_edp *p)
> > > > return 0;
> > > > }
> > > >
> > > > +static bool panel_edp_can_read_hpd(struct panel_edp *p)
> > > > +{
> > > > + return !p->no_hpd && (p->hpd_gpio || (p->aux && p->aux->is_hpd_asserted));
> > > > +}
> > > > +
> > > > +static bool panel_edp_read_hpd(struct panel_edp *p)
> > > > +{
> > > > + if (p->hpd_gpio)
> > > > + return gpiod_get_value_cansleep(p->hpd_gpio);
> > > > +
> > > > + return p->aux->is_hpd_asserted(p->aux);
> > > > +}
> > > > +
> > > > static int panel_edp_prepare_once(struct panel_edp *p)
> > > > {
> > > > struct device *dev = p->base.dev;
> > > > @@ -441,13 +454,21 @@ static int panel_edp_prepare_once(struct panel_edp *p)
> > > > if (delay)
> > > > msleep(delay);
> > > >
> > > > - if (p->hpd_gpio) {
> > > > + if (panel_edp_can_read_hpd(p)) {
> > > > if (p->desc->delay.hpd_absent)
> > > > hpd_wait_us = p->desc->delay.hpd_absent * 1000UL;
> > > > else
> > > > hpd_wait_us = 2000000;
> > > >
> > > > - err = readx_poll_timeout(gpiod_get_value_cansleep, p->hpd_gpio,
> > > > + /*
> > > > + * Extra max delay, mostly to account for ps8640. ps8640
> > > > + * is crazy and the bridge chip driver itself has over 200 ms
> > > > + * of delay if it needs to do the pm_runtime resume of the
> > > > + * bridge chip to read the HPD.
> > > > + */
> > > > + hpd_wait_us += 3000000;
> > >
> > > I think this should come in a separate commit, and ideally it should be
> > > configurable somehow. Other hosts wouldn't need such an 'additional' delay.
> > >
> > > With this change removed:
> > >
> > > Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@...aro.org>
> >
> > What would you think about changing the API slightly? Instead of
> > is_hpd_asserted(), we'd change it to wait_hpd_asserted(), which takes
> > a timeout in microseconds. If you pass 0 for the timeout, the function
> > is defined to behave the same as is_hpd_asserted() today--AKA a single
> > poll of the line.
>
> This might work. Can you check it, please?
Cool. I'll spin with this. Hopefully early next week unless my inbox
blows up. ...or my main PC's SSD like happened this week. ;-)
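To make that concrete, here's a rough sketch of what I have in mind
(hypothetical names and wiring, just my assumption of how it could
look, not final code):

  /*
   * New optional callback in struct drm_dp_aux (sketch only): returns 0
   * once HPD is asserted, -ETIMEDOUT if it isn't asserted within wait_us,
   * or another negative errno on failure.  wait_us == 0 means a single
   * non-blocking poll, matching what is_hpd_asserted() does today.
   */
  int (*wait_hpd_asserted)(struct drm_dp_aux *aux, unsigned long wait_us);

  /* The panel-edp caller side could then collapse to something like: */
  static int panel_edp_wait_hpd(struct panel_edp *p, unsigned long hpd_wait_us)
  {
  	int hpd_asserted;

  	if (p->hpd_gpio)
  		return readx_poll_timeout(gpiod_get_value_cansleep, p->hpd_gpio,
  					  hpd_asserted, hpd_asserted,
  					  1000, hpd_wait_us);

  	/*
  	 * The AUX provider owns the waiting, including any pm_runtime
  	 * resume it needs to do (ps8640), so no extra padding here.
  	 */
  	return p->aux->wait_hpd_asserted(p->aux, hpd_wait_us);
  }

The thinking being that panel-edp wouldn't need the extra 3 second fudge
at all; the bridge driver that actually knows about the pm_runtime cost
can account for it inside its own wait_hpd_asserted().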
> BTW: are these changes dependent on the first part of the patchset? It
> might be worth splitting the patchset into two parts.
Definitely not. As per the cover letter, this is two series jammed
into one. I'm happy to split them up. The 2nd half seems much less
controversial.
-Doug