Date:   Tue, 31 May 2022 14:32:19 +0200
From:   Daniel Vetter <daniel@...ll.ch>
To:     Rob Clark <robdclark@...il.com>
Cc:     Thomas Zimmermann <tzimmermann@...e.de>,
        Rob Clark <robdclark@...omium.org>,
        David Airlie <airlied@...ux.ie>,
        linux-arm-msm <linux-arm-msm@...r.kernel.org>,
        open list <linux-kernel@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Gerd Hoffmann <kraxel@...hat.com>,
        freedreno <freedreno@...ts.freedesktop.org>
Subject: Re: [PATCH] drm/prime: Ensure mmap offset is initialized

On Mon, 30 May 2022 at 17:41, Rob Clark <robdclark@...il.com> wrote:
>
> On Mon, May 30, 2022 at 7:49 AM Daniel Vetter <daniel@...ll.ch> wrote:
> >
> > On Mon, 30 May 2022 at 15:54, Rob Clark <robdclark@...il.com> wrote:
> > >
> > > On Mon, May 30, 2022 at 12:26 AM Thomas Zimmermann <tzimmermann@...e.de> wrote:
> > > >
> > > > Hi
> > > >
> > > > On 29.05.22 at 18:29, Rob Clark wrote:
> > > > > From: Rob Clark <robdclark@...omium.org>
> > > > >
> > > > > If a GEM object is allocated, and then exported as a dma-buf fd which is
> > > > > mmap'd before or without the GEM buffer being directly mmap'd, the
> > > > > vma_node could be uninitialized.  This leads to a situation where the CPU
> > > > > mapping is not correctly torn down in drm_vma_node_unmap().
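
For context, the failure mode described above can be reached entirely from
userspace with a sequence like this sketch (the fd/handle/size variables and
the helper name are made up; only the generic DRM PRIME ioctl and mmap() are
real):

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/drm.h>	/* or libdrm's drm.h, depending on the include setup */

/* Export a GEM handle as a dma-buf and mmap the dma-buf fd directly,
 * without ever mmap'ing the buffer through the DRM device fd, so the
 * driver's own mmap-offset setup never runs before the kernel reaches
 * drm_gem_prime_mmap(). */
static void *map_via_dmabuf(int drm_fd, uint32_t bo_handle, size_t bo_size)
{
	struct drm_prime_handle args = {
		.handle = bo_handle,
		.flags  = DRM_CLOEXEC,
	};

	if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args))
		return NULL;

	return mmap(NULL, bo_size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, args.fd, 0);
}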
> > > >
> > > > Which drivers are affected by this problem?
> > > >
> > > > I checked several drivers and most appear to be initializing the offset
> > > > during object construction, such as GEM SHMEM. [1] TTM-based drivers
> > > > also seem unaffected. [2]
> > > >
> > > > From a quick grep, only etnaviv, msm and omapdrm appear to be affected?
> > > > They only seem to run drm_gem_create_mmap_offset() from their
> > > > ioctl-handling code.
> > > >
> > > > If so, I'd say it's preferable to fix these drivers and put a
> > > > drm_WARN_ONCE() into drm_gem_prime_mmap().
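
A minimal sketch of the warning Thomas suggests, placed at the top of
drm_gem_prime_mmap(); it assumes drm_vma_node_has_offset() is the right
"offset already allocated" test:

/* Sketch only: warn (once) if a driver reaches the PRIME mmap path
 * without ever having allocated a fake mmap offset for this object. */
if (drm_WARN_ONCE(obj->dev, !drm_vma_node_has_offset(&obj->vma_node),
		  "GEM object has no mmap offset; fix the driver\n"))
	return -EINVAL;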
> > >
> > > That is good if fewer drivers are affected; however, I disagree with
> > > your proposal.  At least for freedreno userspace, a lot of bo's never
> > > get mmap'd (either directly or via dma-buf), so we should not be
> > > allocating an mmap offset unnecessarily.
> >
> > Does this actually matter in the grand scheme of things? We originally
> > allocated mmap offset only on demand because userspace only had 32bit
> > loff_t support and so simply couldn't mmap anything if the offset
> > ended up above 32bit (even if there was still va space available).
> >
> > But those days are long gone (about 10 years or so) and the allocation
> > overhead for an mmap offset is tiny. So I think that unless you can
> > benchmark an impact, allocating it at bo alloc seems like the simplest
> > design overall, and hence what we should be doing. And if the vma
> > offset allocation ever gets too slow due to fragmentation, we can lift
> > the hole tree from i915 into drm_mm and the job should be done. At
> > that point we could also allocate the offset unconditionally in the
> > gem_init function and be done with it.
> >
> > Iow I concur with Thomas here: unless there's hard data to the
> > contrary, simplicity imo trumps.
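
A rough illustration of the "allocate it at bo alloc" direction: a driver's
object-creation path could simply call the existing helper
drm_gem_create_mmap_offset() unconditionally. The function and error labels
below are hypothetical; real drivers embed the GEM object in a driver-specific
struct and also set obj->funcs.

static int example_gem_new(struct drm_device *dev, size_t size,
			   struct drm_gem_object **out)
{
	struct drm_gem_object *obj;
	int ret;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return -ENOMEM;

	ret = drm_gem_object_init(dev, obj, size);
	if (ret)
		goto err_free;

	/* Allocate the fake mmap offset up front, so neither the direct
	 * mmap path nor drm_gem_prime_mmap() can see an uninitialized
	 * vma_node later on. */
	ret = drm_gem_create_mmap_offset(obj);
	if (ret)
		goto err_release;

	*out = obj;
	return 0;

err_release:
	drm_gem_object_release(obj);
err_free:
	kfree(obj);
	return ret;
}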
>
> 32b userspace is still alive and well, at least on arm chromebooks ;-)

There's lots of different 32b userspace. The old problem was about
userspace which didn't use mmap64, but only plain mmap, which could
only map the lower 4GB of a file; so if you ended up with an
mmap_offset above 4G you'd blow up.

But mmap64 has been a thing since forever, and if you compile with the
right glibc switch (loff_t is the magic thing iirc) it all works even
with default mmap. So I really don't think you should have this
problem anymore (except when cros is doing something really, really
silly).
-Daniel
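
For the record, the glibc switch in question is _FILE_OFFSET_BITS=64, which
makes off_t 64-bit even in 32-bit userspace, so plain mmap() accepts fake
offsets above 4 GiB. A minimal sketch; the fake offset would normally come
from a driver-specific "get mmap offset" ioctl and is just passed through
here:

/* Build with: gcc -D_FILE_OFFSET_BITS=64 ...
 * With a 64-bit off_t, a 32-bit process can pass a fake mmap offset
 * above 4 GiB straight to mmap() on the DRM device fd. */
#include <stdint.h>
#include <sys/mman.h>

static void *map_bo(int drm_fd, uint64_t fake_offset, size_t size)
{
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, (off_t)fake_offset);
}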

>
> BR,
> -R
>
> > -Daniel
> >
> > >
> > > BR,
> > > -R
> > >
> > > > Best regards
> > > > Thomas
> > > >
> > > > [1]
> > > > https://elixir.bootlin.com/linux/v5.18/source/drivers/gpu/drm/drm_gem_shmem_helper.c#L85
> > > > [2]
> > > > https://elixir.bootlin.com/linux/v5.18/source/drivers/gpu/drm/ttm/ttm_bo.c#L1002
> > > >
> > > > >
> > > > > Fixes: e5516553999f ("drm: call drm_gem_object_funcs.mmap with fake offset")
> > > > > Signed-off-by: Rob Clark <robdclark@...omium.org>
> > > > > ---
> > > > > Note, it's possible the issue existed in some related form prior to the
> > > > > commit tagged with Fixes.
> > > > >
> > > > >   drivers/gpu/drm/drm_prime.c | 5 +++++
> > > > >   1 file changed, 5 insertions(+)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > > > > index e3f09f18110c..849eea154dfc 100644
> > > > > --- a/drivers/gpu/drm/drm_prime.c
> > > > > +++ b/drivers/gpu/drm/drm_prime.c
> > > > > @@ -716,6 +716,11 @@ int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> > > > >       struct file *fil;
> > > > >       int ret;
> > > > >
> > > > > +     /* Ensure that the vma_node is initialized: */
> > > > > +     ret = drm_gem_create_mmap_offset(obj);
> > > > > +     if (ret)
> > > > > +             return ret;
> > > > > +
> > > > >       /* Add the fake offset */
> > > > >       vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
> > > > >
> > > >
> > > > --
> > > > Thomas Zimmermann
> > > > Graphics Driver Developer
> > > > SUSE Software Solutions Germany GmbH
> > > > Maxfeldstr. 5, 90409 Nürnberg, Germany
> > > > (HRB 36809, AG Nürnberg)
> > > > Managing Director (Geschäftsführer): Ivo Totev
> >
> >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
