Message-ID: <bebcfa4a-7908-d8ba-3bff-ea7c2ee2d7a9@suse.de>
Date:   Tue, 9 Aug 2022 09:13:16 +0200
From:   Thomas Zimmermann <tzimmermann@...e.de>
To:     Takashi Iwai <tiwai@...e.de>, dri-devel@...ts.freedesktop.org
Cc:     Dave Airlie <airlied@...hat.com>, Sean Paul <sean@...rly.run>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/4] drm/udl: Kill pending URBs at suspend and disconnect

Hi

Am 04.08.22 um 09:58 schrieb Takashi Iwai:
> At both suspend and disconnect, we should rather cancel the pending
> URBs immediately.  For the suspend case, the display will be turned
> off, so it makes no sense to process the rendering.  And for the
> disconnect case, the device may no longer be accessible, hence we
> shouldn't do any submission.
> 
> Tested-by: Thomas Zimmermann <tzimmermann@...e.de>
> Signed-off-by: Takashi Iwai <tiwai@...e.de>
> ---
>   drivers/gpu/drm/udl/udl_drv.h     |  2 ++
>   drivers/gpu/drm/udl/udl_main.c    | 25 ++++++++++++++++++++++---
>   drivers/gpu/drm/udl/udl_modeset.c |  2 ++
>   3 files changed, 26 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/udl/udl_drv.h b/drivers/gpu/drm/udl/udl_drv.h
> index f01e50c5b7b7..28aaf75d71cf 100644
> --- a/drivers/gpu/drm/udl/udl_drv.h
> +++ b/drivers/gpu/drm/udl/udl_drv.h
> @@ -39,6 +39,7 @@ struct urb_node {
>   
>   struct urb_list {
>   	struct list_head list;
> +	struct list_head in_flight;
>   	spinlock_t lock;
>   	wait_queue_head_t sleep;
>   	int available;
> @@ -84,6 +85,7 @@ static inline struct urb *udl_get_urb(struct drm_device *dev)
>   
>   int udl_submit_urb(struct drm_device *dev, struct urb *urb, size_t len);
>   int udl_sync_pending_urbs(struct drm_device *dev);
> +void udl_kill_pending_urbs(struct drm_device *dev);
>   void udl_urb_completion(struct urb *urb);
>   
>   int udl_init(struct udl_device *udl);
> diff --git a/drivers/gpu/drm/udl/udl_main.c b/drivers/gpu/drm/udl/udl_main.c
> index 93615648414b..47204b7eb10e 100644
> --- a/drivers/gpu/drm/udl/udl_main.c
> +++ b/drivers/gpu/drm/udl/udl_main.c
> @@ -135,7 +135,7 @@ void udl_urb_completion(struct urb *urb)
>   	urb->transfer_buffer_length = udl->urbs.size; /* reset to actual */
>   
>   	spin_lock_irqsave(&udl->urbs.lock, flags);
> -	list_add_tail(&unode->entry, &udl->urbs.list);
> +	list_move(&unode->entry, &udl->urbs.list);
>   	udl->urbs.available++;
>   	spin_unlock_irqrestore(&udl->urbs.lock, flags);
>   
> @@ -180,6 +180,7 @@ static int udl_alloc_urb_list(struct drm_device *dev, int count, size_t size)
>   retry:
>   	udl->urbs.size = size;
>   	INIT_LIST_HEAD(&udl->urbs.list);
> +	INIT_LIST_HEAD(&udl->urbs.in_flight);
>   
>   	init_waitqueue_head(&udl->urbs.sleep);
>   	udl->urbs.count = 0;
> @@ -246,7 +247,7 @@ struct urb *udl_get_urb_timeout(struct drm_device *dev, long timeout)
>   	}
>   
>   	unode = list_first_entry(&udl->urbs.list, struct urb_node, entry);
> -	list_del_init(&unode->entry);
> +	list_move(&unode->entry, &udl->urbs.in_flight);
>   	udl->urbs.available--;
>   
>   unlock:
> @@ -279,7 +280,7 @@ int udl_sync_pending_urbs(struct drm_device *dev)
>   	spin_lock_irq(&udl->urbs.lock);
>   	/* 2 seconds as a sane timeout */
>   	if (!wait_event_lock_irq_timeout(udl->urbs.sleep,
> -					 udl->urbs.available == udl->urbs.count,
> +					 list_empty(&udl->urbs.in_flight),
>   					 udl->urbs.lock,
>   					 msecs_to_jiffies(2000)))
>   		ret = -ETIMEDOUT;
> @@ -287,6 +288,23 @@ int udl_sync_pending_urbs(struct drm_device *dev)
>   	return ret;
>   }
>   
> +/* kill pending URBs */
> +void udl_kill_pending_urbs(struct drm_device *dev)
> +{
> +	struct udl_device *udl = to_udl(dev);
> +	struct urb_node *unode;
> +
> +	spin_lock_irq(&udl->urbs.lock);
> +	while (!list_empty(&udl->urbs.in_flight)) {
> +		unode = list_first_entry(&udl->urbs.in_flight,
> +					 struct urb_node, entry);
> +		spin_unlock_irq(&udl->urbs.lock);
> +		usb_kill_urb(unode->urb);
> +		spin_lock_irq(&udl->urbs.lock);
> +	}
> +	spin_unlock_irq(&udl->urbs.lock);
> +}
> +
>   int udl_init(struct udl_device *udl)
>   {
>   	struct drm_device *dev = &udl->drm;
> @@ -335,6 +353,7 @@ int udl_drop_usb(struct drm_device *dev)
>   {
>   	struct udl_device *udl = to_udl(dev);
>   
> +	udl_kill_pending_urbs(dev);
>   	udl_free_urb_list(dev);
>   	put_device(udl->dmadev);
>   	udl->dmadev = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index 50025606b6ad..169110d8fc2e 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -397,6 +397,8 @@ udl_simple_display_pipe_disable(struct drm_simple_display_pipe *pipe)
>   	struct urb *urb;
>   	char *buf;
>   
> +	udl_kill_pending_urbs(dev);
> +

I already reviewed the patchset, but I have one more comment: I think we 
should kill URBs only from within the suspend handler. The same applies 
to the call to the URB-sync function in patch 2.

This disable function is part of the regular modeset path. It's probably 
not appropriate to outright cancel pending URBs here, as that can lead to 
failed modesets that would otherwise have succeeded.
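
Just to illustrate the idea, a rough sketch of doing the kill only in the 
USB suspend path could look like the below. The callback name and wiring 
are my assumption for illustration, not taken from this series:

/* Illustrative sketch only; not part of this patch. */
static int udl_usb_suspend(struct usb_interface *interface,
			   pm_message_t message)
{
	struct drm_device *dev = usb_get_intfdata(interface);
	int ret;

	/* Quiesce modesetting and plane updates first. */
	ret = drm_mode_config_helper_suspend(dev);
	if (ret)
		return ret;

	/* Only then cancel whatever URBs are still in flight. */
	udl_kill_pending_urbs(dev);

	return 0;
}

That way the regular modeset path stays untouched, and only the 
suspend/disconnect paths discard pending work.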

Best regards
Thomas

>   	urb = udl_get_urb(dev);
>   	if (!urb)
>   		return;

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Ivo Totev
