Date:   Wed, 25 Nov 2020 13:37:06 -0500
From:   Lyude Paul <lyude@...hat.com>
To:     Jeremy Cline <jcline@...hat.com>, Ben Skeggs <bskeggs@...hat.com>
Cc:     Karol Herbst <kherbst@...hat.com>, David Airlie <airlied@...ux.ie>,
        nouveau@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] drm/nouveau: Add a dedicated mutex for the clients list

On Tue, 2020-11-03 at 14:49 -0500, Jeremy Cline wrote:
> Rather than protecting the nouveau_drm clients list with the lock within
> the "client" nouveau_cli, add a dedicated lock to serialize access to
> the list. This is both clearer and necessary to avoid lockdep being
> upset with us when we need to iterate through all the clients in the
> list and potentially lock their mutex, which is the same class as the
> lock protecting the entire list.
> 
> Signed-off-by: Jeremy Cline <jcline@...hat.com>
> ---
>  drivers/gpu/drm/nouveau/nouveau_drm.c | 9 +++++----
>  drivers/gpu/drm/nouveau/nouveau_drv.h | 5 +++++
>  2 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
> index 4fe4d664c5f2..d182b877258a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_drm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
> @@ -557,6 +557,7 @@ nouveau_drm_device_init(struct drm_device *dev)
>                 nvkm_dbgopt(nouveau_debug, "DRM");
>  
>         INIT_LIST_HEAD(&drm->clients);
> +       mutex_init(&drm->clients_lock);

Looks like you forgot to hook up mutex_destroy() somewhere. Note that plenty of
code in nouveau currently forgets to do this, but at some point we should fix
that rather than keep adding spots that never call mutex_destroy().
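
For reference, something like the below in the device teardown path would pair
with the mutex_init() above. Just a sketch on my part -- I'm assuming
nouveau_drm_device_fini() is the right counterpart to nouveau_drm_device_init(),
so double-check the placement:

    static void
    nouveau_drm_device_fini(struct drm_device *dev)
    {
            struct nouveau_drm *drm = nouveau_drm(dev);

            /* ...existing teardown... */

            /* Pairs with the mutex_init() in nouveau_drm_device_init() */
            mutex_destroy(&drm->clients_lock);
    }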

>         spin_lock_init(&drm->tile.lock);
>  
>         /* workaround an odd issue on nvc1 by disabling the device's
> @@ -1089,9 +1090,9 @@ nouveau_drm_open(struct drm_device *dev, struct drm_file *fpriv)
>  
>         fpriv->driver_priv = cli;
>  
> -       mutex_lock(&drm->client.mutex);
> +       mutex_lock(&drm->clients_lock);
>         list_add(&cli->head, &drm->clients);
> -       mutex_unlock(&drm->client.mutex);
> +       mutex_unlock(&drm->clients_lock);
>  
>  done:
>         if (ret && cli) {
> @@ -1117,9 +1118,9 @@ nouveau_drm_postclose(struct drm_device *dev, struct drm_file *fpriv)
>                 nouveau_abi16_fini(cli->abi16);
>         mutex_unlock(&cli->mutex);
>  
> -       mutex_lock(&drm->client.mutex);
> +       mutex_lock(&drm->clients_lock);
>         list_del(&cli->head);
> -       mutex_unlock(&drm->client.mutex);
> +       mutex_unlock(&drm->clients_lock);
>  
>         nouveau_cli_fini(cli);
>         kfree(cli);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
> index 9d04d1b36434..550e5f335146 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_drv.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
> @@ -141,6 +141,11 @@ struct nouveau_drm {
>  
>         struct list_head clients;
>  
> +       /**
> +        * @clients_lock: Protects access to the @clients list of &struct nouveau_cli.
> +        */
> +       struct mutex clients_lock;
> +
>         u8 old_pm_cap;
>  
>         struct {
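
For anyone skimming along: the lockdep complaint the commit message mentions
comes from holding the lock that guards the clients list while also taking each
client's mutex. With the old scheme the list lock was &drm->client.mutex, which
is the same lock class as every cli->mutex, so lockdep flags the nesting as
possible recursive locking. With a dedicated clients_lock class the usual
pattern is fine -- roughly this shape (an illustration of the idea, not code
from this series):

    struct nouveau_cli *cli;

    mutex_lock(&drm->clients_lock); /* dedicated class, only guards the list */
    list_for_each_entry(cli, &drm->clients, head) {
            mutex_lock(&cli->mutex); /* per-client class, no false recursion report */
            /* ...operate on the client... */
            mutex_unlock(&cli->mutex);
    }
    mutex_unlock(&drm->clients_lock);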

-- 
Sincerely,
   Lyude Paul (she/her)
   Software Engineer at Red Hat
   
Note: I deal with a lot of emails and have a lot of bugs on my plate. If you've
asked me a question, are waiting for a review/merge on a patch, etc. and I
haven't responded in a while, please feel free to send me another email to check
on my status. I don't bite!
