Date:   Thu, 8 Aug 2019 09:32:21 -0700
From:   Rob Clark <robdclark@...omium.org>
To:     Christoph Hellwig <hch@....de>
Cc:     Rob Clark <robdclark@...il.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Catalin Marinas <catalin.marinas@....com>,
        Will Deacon <will@...nel.org>,
        Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
        Maxime Ripard <maxime.ripard@...tlin.com>,
        Sean Paul <sean@...rly.run>, David Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>,
        Allison Randal <allison@...utok.net>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        linux-arm-kernel@...ts.infradead.org,
        LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] drm: add cache support for arm64

On Thu, Aug 8, 2019 at 3:00 AM Christoph Hellwig <hch@....de> wrote:
>
> On Wed, Aug 07, 2019 at 09:09:53AM -0700, Rob Clark wrote:
> > > > (Eventually I'd like to support pages passed in from userspace.. but
> > > > that is down the road.)
> > >
> > > Eww.  Please talk to the iommu list before starting on that.
> >
> > This is more of a long term goal, we can't do it until we have
> > per-context/process pagetables, ofc.
> >
> > Getting a bit off topic, but I'm curious about what problems you are
> > concerned about.  Userspace can shoot its own foot, but if it is not
> > sharing GPU pagetables with other processes, it can't shoot others'
> > feet.  (I'm guessing you are concerned about non-page-aligned
> > mappings?)
>
> Maybe I misunderstood what you meant above; I thought you meant messing
> with page cacheability attributes for userspace pages.  If what you are
> looking into is just "standard" SVM, I only hope that our APIs for it,
> which are currently a mess, are in shape by then, as all users currently
> have their own crufty and at least slightly buggy versions of it.  But
> at least it is an issue that is being worked on.

ahh, ok.. and no, we wouldn't be remapping 'malloc' memory as
writecombine.  We'd have to wire up better support for cached buffers.

Currently we use WC for basically all GEM buffers allocated from the
kernel, since that is a good choice for most GPU workloads.. i.e. the
CPU isn't reading back from GPU buffers in most cases.  I'm told cached
buffers are useful for compute workloads where there is more back and
forth between GPU and CPU, but we haven't really crossed that bridge
yet.  Compute workloads are also where the SVM interest is.
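
For illustration, here is a minimal sketch of that WC-vs-cached choice
at mmap time.  This is not the actual msm code; the helper name and the
'cached' flag are stand-ins for whatever per-buffer flag a driver would
carry:

#include <linux/mm.h>

/* Hypothetical helper, a sketch only: pick the CPU mapping attributes
 * for a GEM buffer.  'cached' is an assumed per-buffer flag. */
static void sketch_set_mmap_prot(struct vm_area_struct *vma, bool cached)
{
	if (cached) {
		/* Cacheable mapping: fast CPU readback, but needs
		 * explicit cache maintenance around GPU access. */
		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
	} else {
		/* Writecombine (arm v6+ "normal uncached"): the default
		 * for GEM buffers the CPU mostly streams writes into
		 * and rarely reads back. */
		vma->vm_page_prot =
			pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
	}
}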

> > > So back to the question, I'd like to understand your use case (and
> > > maybe hear from the other drm folks if that is common):
> > >
> > >  - you allocate pages from shmem (why shmem, btw?  if this is done by
> > >    other drm drivers how do they guarantee addressability without an
> > >    iommu?)
> >
> > shmem for swappable pages.  I don't unpin and let things get swapped
> > out yet, but I'm told it starts to become important when you have 50
> > browser tabs open ;-)
>
> Yes, but at that point the swapping can use the kernel linear mapping
> and we run into aliasing problems that can disturb the cache.  So
> as-is this is going to be problematic without new hooks into shmemfs.
>

My expectation is that we'd treat the pages as cached when handing
them back to shmemfs, and wb+inv when we take them back again from
shmemfs and re-pin.  I think this works out to be basically the same
scenario as having to wb+inv when we first get the pages from shmemfs.
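
As a rough sketch of what that (re-)pin path could look like
(hypothetical names, assuming the driver holds an sg_table for the
buffer's pages; the streaming DMA API does the arch cache maintenance
for a non-coherent device):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical sketch of the pin path described above: pages coming
 * back from shmemfs are treated as cached, so they get a wb+inv before
 * the GPU touches them again. */
static int sketch_pin_pages(struct device *dev, struct sg_table *sgt)
{
	/* For a non-coherent device, dma_map_sg() performs the arch
	 * cache maintenance (clean+invalidate on arm64), so the first
	 * pin and a re-pin after swap-in look identical. */
	if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL))
		return -ENOMEM;

	return 0;
}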

> > >  - then the memory is either mapped to userspace or vmapped (or even
> > >    both, although the lack of aliasing you mentioned would speak
> > >    against it) as writecombine (aka arm v6+ normal uncached).  Does
> > >    the mapping live on until the memory is freed?
> >
> > (side note, *most* of the drm/msm supported devices are armv8, the
> > exceptions are 8060 and 8064, which are armv7.. I don't think drm/msm
> > will ever have to deal w/ armv6)
>
> Well, the point was that starting from v6 the kernel's dma uncached
> really is write-combine.  So that applies to v7 and v8 as well.

ahh, gotcha

BR,
-R
