Message-ID: <87fshwxph0.wl-tiwai@suse.de>
Date: Tue, 16 Aug 2022 15:55:23 +0200
From: Takashi Iwai <tiwai@...e.de>
To: Thomas Zimmermann <tzimmermann@...e.de>
Cc: Dave Airlie <airlied@...hat.com>, Sean Paul <sean@...rly.run>,
linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH 3/4] drm/udl: Kill pending URBs at suspend and disconnect
On Tue, 09 Aug 2022 11:19:30 +0200,
Takashi Iwai wrote:
>
> On Tue, 09 Aug 2022 11:13:46 +0200,
> Thomas Zimmermann wrote:
> >
> > Hi
> >
> > > On 09.08.22 at 11:03, Takashi Iwai wrote:
> > > On Tue, 09 Aug 2022 09:41:19 +0200,
> > > Thomas Zimmermann wrote:
> > >>
> > >> Hi
> > >>
> > >>> On 09.08.22 at 09:15, Takashi Iwai wrote:
> > >>> On Tue, 09 Aug 2022 09:13:16 +0200,
> > >>> Thomas Zimmermann wrote:
> > >>>>
> > >>>> Hi
> > >>>>
> > >>>>> On 04.08.22 at 09:58, Takashi Iwai wrote:
> > >>>>> At both suspend and disconnect, we should rather cancel the pending
> > >>>>> URBs immediately. For the suspend case, the display will be turned
> > >>>>> off, so it makes no sense to process the rendering. And for the
> > >>>>> disconnect case, the device may no longer be accessible, hence we
> > >>>>> shouldn't do any submission.
> > >>>>>
> > >>>>> Tested-by: Thomas Zimmermann <tzimmermann@...e.de>
> > >>>>> Signed-off-by: Takashi Iwai <tiwai@...e.de>
> > >>>>> ---
> > >>>>> drivers/gpu/drm/udl/udl_drv.h | 2 ++
> > >>>>> drivers/gpu/drm/udl/udl_main.c | 25 ++++++++++++++++++++++---
> > >>>>> drivers/gpu/drm/udl/udl_modeset.c | 2 ++
> > >>>>> 3 files changed, 26 insertions(+), 3 deletions(-)
> > >>>>>
> > >>>>> diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
> > >>>>> index f01e50c5b7b7..28aaf75d71cf 100644
> > >>>>> --- a/drivers/gpu/drm/udl/udl_drv.h
> > >>>>> +++ b/drivers/gpu/drm/udl/udl_drv.h
> > >>>>> @@ -39,6 +39,7 @@ struct urb_node {
> > >>>>> struct urb_list {
> > >>>>> struct list_head list;
> > >>>>> + struct list_head in_flight;
> > >>>>> spinlock_t lock;
> > >>>>> wait_queue_head_t sleep;
> > >>>>> int available;
> > >>>>> @@ -84,6 +85,7 @@ static inline struct urb *udl_get_urb(struct drm_device *dev)
> > >>>>> int udl_submit_urb(struct drm_device *dev, struct urb *urb,
> > >>>>> size_t len);
> > >>>>> int udl_sync_pending_urbs(struct drm_device *dev);
> > >>>>> +void udl_kill_pending_urbs(struct drm_device *dev);
> > >>>>> void udl_urb_completion(struct urb *urb);
> > >>>>> int udl_init(struct udl_device *udl);
> > >>>>> diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
> > >>>>> index 93615648414b..47204b7eb10e 100644
> > >>>>> --- a/drivers/gpu/drm/udl/udl_main.c
> > >>>>> +++ b/drivers/gpu/drm/udl/udl_main.c
> > >>>>> @@ -135,7 +135,7 @@ void udl_urb_completion(struct urb *urb)
> > >>>>> urb->transfer_buffer_length = udl->urbs.size; /* reset to actual */
> > >>>>> spin_lock_irqsave(&udl->urbs.lock, flags);
> > >>>>> - list_add_tail(&unode->entry, &udl->urbs.list);
> > >>>>> + list_move(&unode->entry, &udl->urbs.list);
> > >>>>> udl->urbs.available++;
> > >>>>> spin_unlock_irqrestore(&udl->urbs.lock, flags);
> > >>>>> @@ -180,6 +180,7 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
> > >>>>> retry:
> > >>>>> udl->urbs.size = size;
> > >>>>> INIT_LIST_HEAD(&udl->urbs.list);
> > >>>>> + INIT_LIST_HEAD(&udl->urbs.in_flight);
> > >>>>> init_waitqueue_head(&udl->urbs.sleep);
> > >>>>> udl->urbs.count = 0;
> > >>>>> @@ -246,7 +247,7 @@ struct urb *udl_get_urb_timeout(struct drm_device *dev, long timeout)
> > >>>>> }
> > >>>>> unode = list_first_entry(&udl->urbs.list, struct urb_node,
> > >>>>> entry);
> > >>>>> - list_del_init(&unode->entry);
> > >>>>> + list_move(&unode->entry, &udl->urbs.in_flight);
> > >>>>> udl->urbs.available--;
> > >>>>> unlock:
> > >>>>> @@ -279,7 +280,7 @@ int udl_sync_pending_urbs(struct drm_device *dev)
> > >>>>> spin_lock_irq(&udl->urbs.lock);
> > >>>>> /* 2 seconds as a sane timeout */
> > >>>>> if (!wait_event_lock_irq_timeout(udl->urbs.sleep,
> > >>>>> - udl->urbs.available == udl->urbs.count,
> > >>>>> + list_empty(&udl->urbs.in_flight),
> > >>>>> udl->urbs.lock,
> > >>>>> msecs_to_jiffies(2000)))
> > >>>>> ret = -ETIMEDOUT;
> > >>>>> @@ -287,6 +288,23 @@ int udl_sync_pending_urbs(struct drm_device *dev)
> > >>>>> return ret;
> > >>>>> }
> > >>>>> +/* kill pending URBs */
> > >>>>> +void udl_kill_pending_urbs(struct drm_device *dev)
> > >>>>> +{
> > >>>>> + struct udl_device *udl = to_udl(dev);
> > >>>>> + struct urb_node *unode;
> > >>>>> +
> > >>>>> + spin_lock_irq(&udl->urbs.lock);
> > >>>>> + while (!list_empty(&udl->urbs.in_flight)) {
> > >>>>> + unode = list_first_entry(&udl->urbs.in_flight,
> > >>>>> + struct urb_node, entry);
> > >>>>> + spin_unlock_irq(&udl->urbs.lock);
> > >>>>> + usb_kill_urb(unode->urb);
> > >>>>> + spin_lock_irq(&udl->urbs.lock);
> > >>>>> + }
> > >>>>> + spin_unlock_irq(&udl->urbs.lock);
> > >>>>> +}
> > >>>>> +
> > >>>>> int udl_init(struct udl_device *udl)
> > >>>>> {
> > >>>>> struct drm_device *dev = &udl->drm;
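
For clarity, here is the same loop again with explanatory comments (a
restatement of the hunk above; the comments are not part of the patch):

void udl_kill_pending_urbs(struct drm_device *dev)
{
	struct udl_device *udl = to_udl(dev);
	struct urb_node *unode;

	spin_lock_irq(&udl->urbs.lock);
	while (!list_empty(&udl->urbs.in_flight)) {
		unode = list_first_entry(&udl->urbs.in_flight,
					 struct urb_node, entry);
		/*
		 * usb_kill_urb() may sleep until the completion callback
		 * has finished, so the spinlock must be dropped around it.
		 */
		spin_unlock_irq(&udl->urbs.lock);
		usb_kill_urb(unode->urb);
		spin_lock_irq(&udl->urbs.lock);
		/*
		 * The completion callback takes the same lock and moves
		 * the node back to urbs.list, so the loop only has to
		 * re-check whether in_flight has become empty.
		 */
	}
	spin_unlock_irq(&udl->urbs.lock);
}
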
> > >>>>> @@ -335,6 +353,7 @@ int udl_drop_usb(struct drm_device *dev)
> > >>>>> {
> > >>>>> struct udl_device *udl = to_udl(dev);
> > >>>>> + udl_kill_pending_urbs(dev);
> > >>>>> udl_free_urb_list(dev);
> > >>>>> put_device(udl->dmadev);
> > >>>>> udl->dmadev = NULL;
> > >>>>> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> > >>>>> index 50025606b6ad..169110d8fc2e 100644
> > >>>>> --- a/drivers/gpu/drm/udl/udl_modeset.c
> > >>>>> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> > >>>>> @@ -397,6 +397,8 @@ udl_simple_display_pipe_disable(struct drm_simple_display_pipe *pipe)
> > >>>>> struct urb *urb;
> > >>>>> char *buf;
> > >>>>> + udl_kill_pending_urbs(dev);
> > >>>>> +
> > >>>>
> > >>>> I already reviewed the patchset, but I have another comment. I think
> > >>>> we should only kill urbs from within the suspend handler. Same for the
> > >>>> call to the URB-sync function in patch 2.
> > >>>>
> > >>>> This disable function is part of the regular modeset path. It's
> > >>>> probably not appropriate to outright remove pending URBs here. This
> > >>>> can lead to failed modesets, which would have succeeded otherwise.
> > >>>
> > >>> Well, the device shall be turned off right after that point, so all
> > >>> the pending rendering makes little sense, no?
> > >>
> > >> udl_simple_display_pipe_disable() only disables the display, but not
> > >> the device. The kill operation here could potentially kill some valid
> > >> modeset operation that was still going on. And who knows what the
> > >> device state is after that.
> > >
> > > But udl_simple_display_pipe_disable() invokes the
> > > UDL_BLANK_MODE_POWERDOWN command right after the place where I've put
> > > udl_kill_pending_urbs(). So it shall blank / turn off the power (of
> > > the device, as it has a single output). And the URB completion doesn't
> > > do any error handling but just re-links the URB chain and wakes up the
> > > queue. So killing a pending URB would do nothing but cancel the
> > > in-flight URBs, and there should be no disturbance to the modeset
> > > operation itself, as the screen will be blanked immediately.
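
Roughly, the disable path under discussion then looks like the sketch
below once the patch is applied. udl_kill_pending_urbs(), udl_get_urb(),
udl_submit_urb() and UDL_BLANK_MODE_POWERDOWN all appear in the quoted
patch and mail; the command-buffer helper name is an assumption about
the driver internals, not a verified function:

static void udl_simple_display_pipe_disable(struct drm_simple_display_pipe *pipe)
{
	struct drm_device *dev = pipe->crtc.dev;
	struct urb *urb;
	char *buf;

	/* Cancel whatever rendering is still in flight ... */
	udl_kill_pending_urbs(dev);

	/* ... then queue the blank/powerdown command for the single output. */
	urb = udl_get_urb(dev);
	if (!urb)
		return;

	buf = (char *)urb->transfer_buffer;
	buf = udl_set_blank_mode(buf, UDL_BLANK_MODE_POWERDOWN); /* assumed helper name */
	udl_submit_urb(dev, urb, buf - (char *)urb->transfer_buffer);
}
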
> >
> > The blank mode is essentially DPMS. It's unrelated to the device's
> > display mode.
>
> The function invokes the UDL_BLANK_MODE_POWERDOWN command; that will
> discard the whole rendered picture. And the counterpart,
> udl_simple_display_pipe_enable(), re-initializes the mode fully from
> scratch again.
> So what's the point of continuing rendering that is immediately cleared
> (from the screen and from the device state)? Killing pending URBs
> doesn't influence the internal (modeset) state of the driver.
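
That argument rests on what the completion callback does. Condensed from
the quoted hunk in udl_main.c; the parts outside the hunk's context
(how unode and udl are obtained, the final wake-up) are filled in from
the driver and should be read as an approximation:

void udl_urb_completion(struct urb *urb)
{
	struct urb_node *unode = urb->context;
	struct udl_device *udl = unode->dev;
	unsigned long flags;

	/* Errors are at most logged; no corrective action is taken. */

	urb->transfer_buffer_length = udl->urbs.size;	/* reset to actual */

	spin_lock_irqsave(&udl->urbs.lock, flags);
	/* Move the node from urbs.in_flight back to the free list ... */
	list_move(&unode->entry, &udl->urbs.list);
	udl->urbs.available++;
	spin_unlock_irqrestore(&udl->urbs.lock, flags);

	/* ... and wake up any waiter in udl_get_urb()/udl_sync_pending_urbs(). */
	wake_up(&udl->urbs.sleep);
}

So a killed URB simply ends up back on the free list; nothing in the
driver's modeset state is touched.
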
In any case, this patchset seems problematic around the disconnection,
and this particular patch may not be much of an improvement, so it's
better to drop it for now.
I'll resubmit the v2 patch set including your resume fixes later.
thanks,
Takashi