Date:   Mon, 22 Jul 2019 21:09:25 -0700
From:   John Stultz <john.stultz@...aro.org>
To:     Christoph Hellwig <hch@...radead.org>
Cc:     lkml <linux-kernel@...r.kernel.org>,
        Laura Abbott <labbott@...hat.com>,
        Benjamin Gaignard <benjamin.gaignard@...aro.org>,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Liam Mark <lmark@...eaurora.org>,
        Pratik Patel <pratikp@...eaurora.org>,
        Brian Starkey <Brian.Starkey@....com>,
        Vincent Donnefort <Vincent.Donnefort@....com>,
        Sudipto Paul <Sudipto.Paul@....com>,
        "Andrew F . Davis" <afd@...com>,
        Xu YiPing <xuyiping@...ilicon.com>,
        "Chenfeng (puck)" <puck.chen@...ilicon.com>,
        butao <butao@...ilicon.com>,
        "Xiaqing (A)" <saberlily.xia@...ilicon.com>,
        Yudongbin <yudongbin@...ilicon.com>,
        Chenbo Feng <fengc@...gle.com>,
        Alistair Strachan <astrachan@...gle.com>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Hridya Valsaraju <hridya@...gle.com>,
        Rob Clark <robdclark@...il.com>
Subject: Re: [PATCH v6 2/5] dma-buf: heaps: Add heap helpers

On Thu, Jul 18, 2019 at 3:06 AM Christoph Hellwig <hch@...radead.org> wrote:
>
> > +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> > +                          void (*free)(struct heap_helper_buffer *))
>
> Please use a lower case naming following the naming scheme for the
> rest of the file.

Yes! Apologies, this was a hold-over from when the initialization
function was an inline function in the style of
INIT_WORK/INIT_LIST_HEAD. That's no longer appropriate now that it's a
real function. I'll change it.
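
Something like the following is what I have in mind (just the rename,
to match the lower-case style of the rest of the file):

void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
			     void (*free)(struct heap_helper_buffer *));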

> > +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> > +{
> > +     void *vaddr;
> > +
> > +     vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
> > +     if (!vaddr)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     return vaddr;
> > +}
>
> Unless I'm misreading the patches this is used for the same pages that
> also might be dma mapped.  In this case you need to use
> flush_kernel_vmap_range and invalidate_kernel_vmap_range in the right
> places to ensure coherency between the vmap and device view.  Please
> also document the buffer ownership, as this really can get complicated.

Forgive me, I wasn't familiar with those calls, but if that's the case
it seems like it would apply to all dma-buf exporters, and I don't see
any similar flush_kernel_vmap_range calls there (both functions are
seemingly only used by xfs, md and bio).

We do have the dma_heap_dma_buf_begin_cpu_access()/dma_heap_dma_buf_end_cpu_access()
hooks (see more on these below), which sync the buffers for each
attachment (via dma_sync_sg_for_cpu/device) and are used around cpu
access to the buffers. Are you suggesting the flush_kernel_vmap_range()
call be added to those hooks, or are the dma_sync_sg_for_* calls
sufficient there?
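
To make the question concrete, here is a rough sketch of what adding
the invalidate to the begin_cpu_access hook might look like (struct
and field names are approximated from this series, and the placement
is exactly what I'm unsure about):

static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
					     enum dma_data_direction direction)
{
	struct heap_helper_buffer *buffer = dmabuf->priv;
	struct dma_heaps_attachment *a;

	mutex_lock(&buffer->lock);

	/* sketch: invalidate the kernel vmap alias before cpu access */
	if (buffer->vmap_cnt)
		invalidate_kernel_vmap_range(buffer->vaddr,
					     buffer->pagecount * PAGE_SIZE);

	/* what the hook does today: sync each attachment for the cpu */
	list_for_each_entry(a, &buffer->attachments, list)
		dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
				    direction);
	mutex_unlock(&buffer->lock);

	return 0;
}

(with the matching flush_kernel_vmap_range() call in end_cpu_access
before handing the buffer back to the device)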

> > +static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
> > +{
> > +     struct vm_area_struct *vma = vmf->vma;
> > +     struct heap_helper_buffer *buffer = vma->vm_private_data;
> > +
> > +     vmf->page = buffer->pages[vmf->pgoff];
> > +     get_page(vmf->page);
> > +
> > +     return 0;
> > +}
>
> Is there any exlusion between mmap / vmap and the device accessing
> the data?  Without that you are going to run into a lot of coherency
> problems.

This has actually been a concern of mine recently, but at the higher
dma-buf core level.  Conceptually, there are the
dma_buf_map_attachment() and dma_buf_unmap_attachment() calls drivers
use to map the buffer for an attached device, and there are the
dma_buf_begin_cpu_access() and dma_buf_end_cpu_access() calls which
are to be made around any access to the cpu mapped pages.  These look
like serializing functions, but they don't actually seem to have any
enforcement mechanism to exclude parallel access.
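
That is, the expected discipline from a cpu-side user is something
like the below (a minimal sketch; the wrapper function is made up for
illustration), but nothing prevents a device from accessing the buffer
through a live attachment mapping at the same time:

static int example_cpu_read(struct dma_buf *dmabuf)
{
	int ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	/* the cpu may now touch the pages (via kmap/vmap/mmap) */

	return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
}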

To me it seems like adding the exclusion between those operations
should be done at the dma-buf core level, and it would actually be
helpful for optimizing some of the cache maintenance rules w/ dma-buf.
Does this sound like something closer to what you're suggesting, or am
I misunderstanding your point?
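
Purely as a strawman (the device_mappings counter is made up), core
level enforcement could be as simple as refusing cpu access while any
attachment mapping is live:

int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
			     enum dma_data_direction direction)
{
	int ret = 0;

	mutex_lock(&dmabuf->lock);
	/* hypothetical: refuse cpu access while a device mapping is live */
	if (dmabuf->device_mappings)
		ret = -EBUSY;
	else if (dmabuf->ops->begin_cpu_access)
		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
	mutex_unlock(&dmabuf->lock);

	return ret;
}

with dma_buf_map_attachment()/dma_buf_unmap_attachment() incrementing
and decrementing that counter under the same lock.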

Again, I really appreciate the review and feedback!

Thanks so much!
-john
