Message-ID: <CAPaKu7R4VFYnk9UdpguZnkWeKk2ELVnoQ60=i72RN2GkME1ukw@mail.gmail.com>
Date:   Wed, 26 Feb 2020 16:02:50 -0800
From:   Chia-I Wu <olvaffe@...il.com>
To:     Thomas Hellström (VMware) 
        <thomas_os@...pmail.org>
Cc:     Gerd Hoffmann <kraxel@...hat.com>,
        ML dri-devel <dri-devel@...ts.freedesktop.org>,
        Guillaume Gardet <Guillaume.Gardet@....com>,
        David Airlie <airlied@...ux.ie>,
        open list <linux-kernel@...r.kernel.org>,
        stable@...r.kernel.org,
        Gurchetan Singh <gurchetansingh@...omium.org>,
        tzimmermann@...e.de
Subject: Re: [PATCH v5 1/3] drm/shmem: add support for per object caching flags.

On Wed, Feb 26, 2020 at 10:25 AM Thomas Hellström (VMware)
<thomas_os@...pmail.org> wrote:
>
> Hi, Gerd,
>
> While looking at this patchset I came across some stuff that seems
> strange but that was merged in a previous patchset.
>
> (please refer to
> https://lists.freedesktop.org/archives/dri-devel/2018-September/190001.html.
> Forgive me if I've missed any discussion leading up to this).
>
>
> On 2/26/20 4:47 PM, Gerd Hoffmann wrote:
> > Add map_cached bool to drm_gem_shmem_object, to request cached mappings
> > on a per-object basis.  Check the flag before adding writecombine to
> > pgprot bits.
> >
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Gerd Hoffmann <kraxel@...hat.com>
> > ---
> >   include/drm/drm_gem_shmem_helper.h     |  5 +++++
> >   drivers/gpu/drm/drm_gem_shmem_helper.c | 15 +++++++++++----
> >   2 files changed, 16 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> > index e34a7b7f848a..294b2931c4cc 100644
> > --- a/include/drm/drm_gem_shmem_helper.h
> > +++ b/include/drm/drm_gem_shmem_helper.h
> > @@ -96,6 +96,11 @@ struct drm_gem_shmem_object {
> >        * The address is un-mapped when the count reaches zero.
> >        */
> >       unsigned int vmap_use_count;
> > +
> > +     /**
> > +      * @map_cached: map object cached (instead of using writecombine).
> > +      */
> > +     bool map_cached;
> >   };
> >
> >   #define to_drm_gem_shmem_obj(obj) \
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index a421a2eed48a..aad9324dcf4f 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -254,11 +254,16 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> >       if (ret)
> >               goto err_zero_use;
> >
> > -     if (obj->import_attach)
> > +     if (obj->import_attach) {
> >               shmem->vaddr = dma_buf_vmap(obj->import_attach->dmabuf);
> > -     else
> > +     } else {
> > +             pgprot_t prot = PAGE_KERNEL;
> > +
> > +             if (!shmem->map_cached)
> > +                     prot = pgprot_writecombine(prot);
> >               shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
> > -                                 VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> > +                                 VM_MAP, prot);
>
>
> Wouldn't a vmap with pgprot_writecombine() create conflicting mappings
> with the linear kernel map, which is not write-combined? Or do you change
> the linear kernel map of the shmem pages somewhere? vmap bypasses at
> least the x86 PAT core mapping consistency check, and this could
> potentially cause spuriously overwritten memory.

Yeah, I think this creates a conflicting alias.  It seems a call to
set_pages_array_wc here, or changes elsewhere, is needed.

But this is a pre-existing issue in the shmem helper, and there may be
no universal fix (set_pages_array_wc, for example, is x86-only).  I
would hope this series can be merged soon to fix the regression first.
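
For reference, a rough and untested sketch of what the x86-only
mitigation could look like in drm_gem_shmem_vmap_locked()
(set_pages_array_wc()/set_pages_array_wb() live in <asm/set_memory.h>
and exist only on x86):

		pgprot_t prot = PAGE_KERNEL;

		if (!shmem->map_cached) {
			/* x86-only: switch the linear map of the pages
			 * to WC so the vmap alias below cannot conflict
			 * with it */
			set_pages_array_wc(shmem->pages,
					   obj->size >> PAGE_SHIFT);
			prot = pgprot_writecombine(prot);
		}
		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
				    VM_MAP, prot);

The vunmap path would then need a matching set_pages_array_wb() before
the pages are released back to shmem.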

>
>
> > +     }
> >
> >       if (!shmem->vaddr) {
> >               DRM_DEBUG_KMS("Failed to vmap pages\n");
> > @@ -540,7 +545,9 @@ int drm_gem_shmem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> >       }
> >
> >       vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND;
> > -     vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> > +     vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> > +     if (!shmem->map_cached)
> > +             vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
>
> Same thing here. Note that vmf_insert_page(), which is used by the fault
> handler, also bypasses the x86 PAT consistency check, whereas
> vmf_insert_mixed() doesn't.
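
For illustration, an untested sketch of a fault handler that would go
through those checks by using vmf_insert_mixed() instead (the names
follow drm_gem_shmem_helper.c; pfn_to_pfn_t() is from <linux/pfn_t.h>):

static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct drm_gem_object *obj = vma->vm_private_data;
	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
	loff_t num_pages = obj->size >> PAGE_SHIFT;

	if (vmf->pgoff >= num_pages || WARN_ON_ONCE(!shmem->pages))
		return VM_FAULT_SIGBUS;

	/* unlike vmf_insert_page(), vmf_insert_mixed() does not bypass
	 * the x86 PAT consistency check; VM_MIXEDMAP is already set in
	 * drm_gem_shmem_mmap() */
	return vmf_insert_mixed(vma, vmf->address,
			pfn_to_pfn_t(page_to_pfn(shmem->pages[vmf->pgoff])));
}
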
>
> >       vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
>
> At least with SME or SEV encryption, where shmem memory has its kernel
> map set to encrypted, creating conflicting mappings is explicitly
> disallowed.
> BTW, why is the mmap mapping decrypted while the vmap one isn't?
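
If the two paths are meant to match, the vmap side would presumably
need the same treatment; a hypothetical, untested version of the vmap
branch above:

		pgprot_t prot = PAGE_KERNEL;

		if (!shmem->map_cached)
			prot = pgprot_writecombine(prot);
		/* mirror the mmap path and clear the encryption bit
		 * for this kernel alias as well */
		prot = pgprot_decrypted(prot);
		shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
				    VM_MAP, prot);
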
>
> >       vma->vm_ops = &drm_gem_shmem_vm_ops;
> >
>
> Thanks,
> Thomas
>
>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@...ts.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
