Date:	Thu, 18 Aug 2016 10:36:51 +0200
From:	Daniel Vetter <daniel@...ll.ch>
To:	Rob Clark <robdclark@...il.com>
Cc:	Al Viro <viro@...iv.linux.org.uk>,
	Vaishali Thakkar <vaishali.thakkar@...cle.com>,
	David Airlie <airlied@...ux.ie>,
	linux-arm-msm <linux-arm-msm@...r.kernel.org>,
	"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
	freedreno@...ts.freedesktop.org,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Julia Lawall <julia.lawall@...6.fr>,
	Daniel Vetter <daniel@...ll.ch>
Subject: Re: Use of copy_from_user in msm_gem_submit.c while holding a spin_lock

On Wed, Aug 17, 2016 at 05:29:31PM -0400, Rob Clark wrote:
> On Wed, Aug 17, 2016 at 3:15 PM, Al Viro <viro@...iv.linux.org.uk> wrote:
> > On Wed, Aug 17, 2016 at 02:49:32PM -0400, Rob Clark wrote:
> >> If there is a copy_from_user() variant that will return an error
> >> instead of blocking, I think that is really what I want so I can
> >> implement a slow-path that drops the spin-lock temporarily.
> >
> > *shrug*
> >
> > pagefault_disable()/pagefault_enable() are there for a purpose, so's
> > __copy_from_user_inatomic()...  Just remember that __copy_from_user_inatomic()
> > does not check if the addresses are userland ones (i.e. the caller needs
> > to check access_ok() itself) and it is *NOT* guaranteed to zero what it
> > hadn't copied over.  Currently it does zero the tail on some, but not all,
> > architectures; come next cycle it will not do that zeroing on any
> > of those.
> 
> Ok, this is what I came up with.. let me know what you think.  The
> first hunk was just to see the problem in the first place (no idea if
> other places on arm would have problems w/ that hunk, so it wouldn't be
> part of my fix+cc-stable patch.. but it seems like a good idea; I
> would have discovered this issue much sooner if we had it)
> 
> ------
> diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
> index 35c9db8..ce2e182 100644
> --- a/arch/arm/include/asm/uaccess.h
> +++ b/arch/arm/include/asm/uaccess.h
> @@ -542,6 +542,7 @@ __clear_user(void __user *addr, unsigned long n)
> 
>  static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
>  {
> +    might_fault();
>      if (access_ok(VERIFY_READ, from, n))
>          n = __copy_from_user(to, from, n);
>      else /* security hole - plug it */
> diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
> index 5cd4e9b..3cca013 100644
> --- a/drivers/gpu/drm/msm/msm_gem_submit.c
> +++ b/drivers/gpu/drm/msm/msm_gem_submit.c
> @@ -66,6 +66,14 @@ void msm_gem_submit_free(struct msm_gem_submit *submit)
>      kfree(submit);
>  }
> 
> +static inline unsigned long __must_check
> +copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
> +{
> +    if (access_ok(VERIFY_READ, from, n))
> +        return __copy_from_user_inatomic(to, from, n);
> +    return -EFAULT;
> +}
> +
>  static int submit_lookup_objects(struct msm_gem_submit *submit,
>          struct drm_msm_gem_submit *args, struct drm_file *file)
>  {
> @@ -73,6 +81,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
>      int ret = 0;
> 
>      spin_lock(&file->table_lock);
> +    pagefault_disable();

Hm, I thought with spinlocks this isn't needed ... we have that in some
fastpaths to avoid locking inversion in i915_gem_execbuffer.c, but that's
because a mutex critical section can still sleep.
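
For reference, the rough shape of those fastpaths (from memory, heavily
simplified, not the actual i915 code; struct foo, dev and userptr just
stand in for whatever you're copying and locking):

	struct foo tmp;
	unsigned long unwritten;

	/* userptr already validated with access_ok(), per Al's point above */
	mutex_lock(&dev->struct_mutex);
	pagefault_disable();
	unwritten = __copy_from_user_inatomic(&tmp, userptr, sizeof(tmp));
	pagefault_enable();
	if (unwritten) {
		/* slowpath: drop the mutex so the fault handler is allowed to sleep */
		mutex_unlock(&dev->struct_mutex);
		if (copy_from_user(&tmp, userptr, sizeof(tmp)))
			return -EFAULT;
		mutex_lock(&dev->struct_mutex);
	}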

> 
>      for (i = 0; i < args->nr_bos; i++) {
>          struct drm_msm_gem_submit_bo submit_bo;
> @@ -86,10 +95,15 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
>           */
>          submit->bos[i].flags = 0;
> 
> -        ret = copy_from_user(&submit_bo, userptr, sizeof(submit_bo));
> -        if (ret) {
> -            ret = -EFAULT;
> -            goto out_unlock;
> +        ret = copy_from_user_inatomic(&submit_bo, userptr, sizeof(submit_bo));
> +        if (unlikely(ret)) {
> +            pagefault_enable();
> +            spin_unlock(&file->table_lock);
> +            ret = copy_from_user(&submit_bo, userptr, sizeof(submit_bo));
> +            if (ret)
> +                return -EFAULT;
> +            spin_lock(&file->table_lock);
> +            pagefault_disable();
>          }
> 
>          if (submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) {
> @@ -130,6 +144,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
> 
>  out_unlock:
>      submit->nr_bos = i;
> +    pagefault_enable();
>      spin_unlock(&file->table_lock);
> 
>      return ret;
> ------
> 
> danvet suggested the doubleplus-super-evil variant of the test
> program, using an unfaulted but mmap'd gem bo passed in to the submit
> ioctl for the bos table, which has the additional fun of wanting to
> acquire the already held struct_mutex in msm_gem_fault().  Which
> needed the further fix:
> 
> ------
> diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
> index 6cd4af4..4502e4b 100644
> --- a/drivers/gpu/drm/msm/msm_gem.c
> +++ b/drivers/gpu/drm/msm/msm_gem.c
> @@ -201,6 +201,13 @@ int msm_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
>      pgoff_t pgoff;
>      int ret;
> 
> +    /* I think this should only happen if userspace tries to pass a
> +     * mmap'd but unfaulted gem bo vaddr into submit ioctl, triggering
> +     * a page fault while struct_mutex is already held
> +     */
> +    if (mutex_is_locked_by(&dev->struct_mutex, current))
> +        return VM_FAULT_SIGBUS;

This is an ok (well, still horrible) heuristic for the shrinker, but for
correctness it kinda doesn't cut it. What you need to do instead is drop
all the locks, copy relocations into a temp memory area and then proceed
in the msm command submission path above.
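
I.e. something roughly like this (completely untested sketch, hand-waving
the exact msm struct fields and the nr_bos sanity checking):

	struct drm_msm_gem_submit_bo *bos;

	/* copy the whole bo table up front, before any locks are held */
	bos = kmalloc_array(args->nr_bos, sizeof(*bos), GFP_KERNEL);
	if (!bos)
		return -ENOMEM;
	if (copy_from_user(bos, (void __user *)(uintptr_t)args->bos,
			   args->nr_bos * sizeof(*bos))) {
		kfree(bos);
		return -EFAULT;
	}

	spin_lock(&file->table_lock);
	/* ... lookup/validate from the bos[] copy, no user access in here ... */
	spin_unlock(&file->table_lock);

	kfree(bos);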

Also reentrant mutexes are evil ;-)
-Daniel


> +
>      /* Make sure we don't parallel update on a fault, nor move or remove
>       * something from beneath our feet
>       */
> diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
> index b2f13cf..160b635 100644
> --- a/drivers/gpu/drm/msm/msm_gem.h
> +++ b/drivers/gpu/drm/msm/msm_gem.h
> @@ -122,4 +122,18 @@ struct msm_gem_submit {
>      } bos[0];
>  };
> 
> +static inline bool mutex_is_locked_by(struct mutex *mutex,
> +        struct task_struct *task)
> +{
> +    if (!mutex_is_locked(mutex))
> +        return false;
> +
> +#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
> +    return mutex->owner == task;
> +#else
> +    /* Since UP may be pre-empted, we cannot assume that we own the lock */
> +    return false;
> +#endif
> +}
> +
>  #endif /* __MSM_GEM_H__ */
> diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
> index 283d284..39429bb 100644
> --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
> +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
> @@ -18,19 +18,6 @@
>  #include "msm_drv.h"
>  #include "msm_gem.h"
> 
> -static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
> -{
> -    if (!mutex_is_locked(mutex))
> -        return false;
> -
> -#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
> -    return mutex->owner == task;
> -#else
> -    /* Since UP may be pre-empted, we cannot assume that we own the lock */
> -    return false;
> -#endif
> -}
> -
>  static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
>  {
>      if (!mutex_trylock(&dev->struct_mutex)) {
> ------
> 
> And yes, I know mutex_is_locked_by() is not super-awesome.. for the
> shrinker I plan to get rid of this with finer-grained locking,
> although I'm not sure that totally helps with the
> doubleplus-evil-userspace vs submit ioctl case.. which I think could
> still cause mischief by passing an unfaulted bo in the bos table,
> which is processed (and with a finer-grained lock scheme, locked)
> first, and then using its vaddr for the cmds table (where it would be
> faulted after already being locked)..  perhaps I can just disallow a gem bo
> to be used for either the bos or cmds table in the submit ioctl (since
> there is no legit reason to allow that)
> 
> Note that there is actually no hardware with this gpu which is non-SMP,
> so I don't mind adding a kconfig 'depends on SMP', at least as a short-term
> solution.  I think any other solution isn't going to be something that's
> a candidate for stable.
> 
> BR,
> -R

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
