Message-ID: <20170407212237.3uxfypsnuvnalc6e@phenom.ffwll.local>
Date: Fri, 7 Apr 2017 23:22:37 +0200
From: Daniel Vetter <daniel@...ll.ch>
To: Laura Abbott <labbott@...hat.com>
Cc: Chris Wilson <chris@...is-wilson.co.uk>,
Daniel Vetter <daniel.vetter@...ll.ch>,
Sean Paul <seanpaul@...omium.org>,
dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
Sumit Semwal <sumit.semwal@...aro.org>
Subject: Re: [RFC PATCH 2/2] drm/vgem: Enable dmabuf import interfaces
On Fri, Apr 07, 2017 at 10:58:47AM -0700, Laura Abbott wrote:
> On 04/07/2017 09:58 AM, Chris Wilson wrote:
> > On Fri, Apr 07, 2017 at 09:18:30AM -0700, Laura Abbott wrote:
> >> On 04/07/2017 12:39 AM, Chris Wilson wrote:
> >>> On Thu, Apr 06, 2017 at 04:18:33PM -0700, Laura Abbott wrote:
> >>>>
> >>>> Enable the GEM dma-buf import interfaces in addition to the export
> >>>> interfaces. This lets vgem be used as a test source for other allocators
> >>>> (e.g. Ion).
> >>>>
> >>>> +int vgem_gem_get_pages(struct drm_vgem_gem_object *obj)
> >>>> +{
> >>>> + struct page **pages;
> >>>> +
> >>>> + if (obj->pages)
> >>>> + return 0;
> >>>> +
> >>>> + pages = drm_gem_get_pages(&obj->base);
> >>>> + if (IS_ERR(pages)) {
> >>>> + return PTR_ERR(pages);
> >>>> + }
> >>>> +
> >>>> + obj->pages = pages;
> >>>
> >>> That's a significant loss in functionality (it requires the entire
> >>> object to fit into physical memory at the same time and requires a large
> >>> vmalloc for 32b systems), for what? vgem only has the ability to mmap
> >>> and export a fd -- both of which you already have if you have the dmabuf
> >>> fd. The only extra interface is the signaling, which does not yet have a
> >>> direct correspondence on dmabuf.
> >>>
> >>> (An obvious way to keep both would be to move the get_pages to importing
> >>> and then differentiate in the faulthandler where to get the page from.)
> >>>
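[A rough sketch of that split (untested; the obj->pages field for imported buffers follows Laura's patch above, so the exact names are an assumption):

static int vgem_gem_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct drm_vgem_gem_object *obj = vma->vm_private_data;
	/* vmf->pgoff carries the fake GEM offset, so compute from the vma */
	pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
	struct page *page;

	if (obj->pages) {
		/* imported dma-buf: pages were pinned at import time */
		get_page(obj->pages[page_offset]);
		vmf->page = obj->pages[page_offset];
		return 0;
	}

	/* native vgem object: page in from shmem on demand, as today */
	page = shmem_read_mapping_page(file_inode(obj->base.filp)->i_mapping,
				       page_offset);
	if (IS_ERR(page))
		return VM_FAULT_SIGBUS;

	vmf->page = page;
	return 0;
}]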
> >>
> >> Thanks for pointing this out. I'll look to keep the existing behavior.
> >>
> >>> Can you provide more details on how you are using vgem to justify the
> >>> changes? I'm also not particularly happy in losing testing of a virtual
> >>> platform device from our CI.
> >>> -Chris
> >>>
> >>
> >> There exists a test module in the Ion directory
> >> (drivers/staging/android/ion/ion_test.c) today but it's not actually
> >> Ion specific. It registers a platform device and then provides an
> >> ioctl interface to perform a dma_buf attach and map. I proposed
> >> moving this to a generic dma-buf test module
> >> (https://marc.info/?l=dri-devel&m=148952187004611&w=2) and Daniel
> >> suggested that this was roughly what vgem was supposed to do.
> >
> > mmap(dma_buf_fd) gives you DMA_BUF_IOC_TEST_DMA_MAPPING, and that's the
> > only facility already available via vgem.
> >
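[For reference, the CPU-mapping side needs nothing vgem-specific; from userspace it is just the following, with dmabuf_fd and size coming from whatever exporter is under test (sketch only):

#include <string.h>
#include <sys/mman.h>

static int touch_dmabuf(int dmabuf_fd, size_t size)
{
	void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, dmabuf_fd, 0);
	if (ptr == MAP_FAILED)
		return -1;

	/* real tests should bracket CPU access with DMA_BUF_IOCTL_SYNC */
	memset(ptr, 0xaa, size);
	return munmap(ptr, size);
}]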
>
> Not completely. It's possible to mmap a dma_buf fd without being
> attached to any device (the source of many caching headaches). Part of
> the goal is to verify that the attachment and map callbacks are
> working as expected.
>
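[For reference, the attach/map path Laura wants to exercise boils down to the following on the kernel side; dmabuf and dev stand for the imported buffer and the importing device (sketch only):

struct dma_buf_attachment *attach;
struct sg_table *sgt;

attach = dma_buf_attach(dmabuf, dev);		/* exporter's ->attach() */
if (IS_ERR(attach))
	return PTR_ERR(attach);

sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); /* ->map_dma_buf() */
if (IS_ERR(sgt)) {
	dma_buf_detach(dmabuf, attach);
	return PTR_ERR(sgt);
}

/* ... device access via the sg_table ... */

dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
dma_buf_detach(dmabuf, attach);]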
> > DMA_BUF_IOC_TEST_KERNEL_MAPPING would be equivalent to
> > read/write(dma_buf_fd).
> >
> >> Adding the import methods for vgem covers all of what the
> >> Ion test was doing and we don't have to invent yet another ioctl
> >> interface and framework for attaching and mapping.
> >
> > I don't think it does :)
> >
> > To muddy the waters further, I've also done something similar:
> > https://cgit.freedesktop.org/drm-intel/tree/drivers/gpu/drm/i915/selftests/mock_dmabuf.c
> > https://cgit.freedesktop.org/drm-intel/tree/drivers/gpu/drm/i915/selftests/i915_gem_dmabuf.c
> >
>
> Those do look familiar :)
>
> > But this feels like a good enough reason for implementing read/write ops
> > for the dma-buf fd, and then we can test the dma_buf->ops->kmap on i915
> > as well, directly from i915.ko. Sounds fun, I'll see if I can cook
> > something up - and we can then see if that suits your testing as well.
> >
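[A hypothetical sketch of what such a read() op could look like, going through the exporter's kmap callbacks; nothing like this exists yet, the function name and details are made up:

static ssize_t dma_buf_read(struct file *file, char __user *buf,
			    size_t count, loff_t *ppos)
{
	struct dma_buf *dmabuf = file->private_data;
	unsigned long page_num = *ppos >> PAGE_SHIFT;
	size_t offset = *ppos & ~PAGE_MASK;
	size_t len = min_t(size_t, count, PAGE_SIZE - offset);
	void *vaddr;
	ssize_t ret;

	ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	vaddr = dma_buf_kmap(dmabuf, page_num);		/* exporter's ->kmap() */
	if (!vaddr) {
		ret = -EIO;
	} else {
		ret = copy_to_user(buf, vaddr + offset, len) ? -EFAULT : len;
		dma_buf_kunmap(dmabuf, page_num, vaddr);
	}

	dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
	if (ret > 0)
		*ppos += ret;
	return ret;
}]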
> >> Can you clarify about what you mean about 'virtual platform device'?
> >
> > vgem_device = drm_dev_alloc(&vgem_driver, NULL);
> >
> > helps exercise some more corner cases of the drm core, that we have
> > unwittingly broken in the past.
>
> Okay thanks for the explanation. I was debating putting the platform
> support behind a configuration option. I don't know if that would
> actually solve anything or just result in another configuration that
> doesn't get tested.
We could still do both: the drm core doesn't care about the device, it's
only used to place it in the right directory in sysfs (well, minus the bugs
Chris mentioned). So having a NULL device for drm_dev_alloc while still
storing that fake platform device somewhere should be all fine (if we need
that fake platform dev for dma-buf testing).
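
Roughly something like this in vgem_init() (untested; vgem_platform is just
a made-up name for the fake device we'd keep around):

static struct platform_device *vgem_platform;

static int __init vgem_init(void)
{
	int ret;

	vgem_device = drm_dev_alloc(&vgem_driver, NULL);
	if (IS_ERR(vgem_device))
		return PTR_ERR(vgem_device);

	/* fake device, only handed to dma-buf/dma-mapping for testing */
	vgem_platform = platform_device_register_simple("vgem", -1, NULL, 0);
	if (IS_ERR(vgem_platform)) {
		ret = PTR_ERR(vgem_platform);
		goto out_put;
	}

	ret = drm_dev_register(vgem_device, 0);
	if (ret)
		goto out_unregister;

	return 0;

out_unregister:
	platform_device_unregister(vgem_platform);
out_put:
	drm_dev_unref(vgem_device);
	return ret;
}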
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch