Date:	Thu, 31 Jan 2013 15:49:09 +0100
From:	Daniel Vetter <daniel.vetter@...ll.ch>
To:	Inki Dae <inki.dae@...sung.com>
Cc:	Maarten Lankhorst <m.b.lankhorst@...il.com>,
	linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org,
	dri-devel@...ts.freedesktop.org, linux-media@...r.kernel.org
Subject: Re: [Linaro-mm-sig] [PATCH 4/7] fence: dma-buf cross-device
 synchronization (v11)

On Thu, Jan 31, 2013 at 3:38 PM, Inki Dae <inki.dae@...sung.com> wrote:
> I think I understand your comment, but I don't think I fully
> understand the dma-fence mechanism yet, so I'd appreciate some
> advice. In our case, I'm applying dma-fence to the mali (3d gpu)
> driver as the producer and the exynos drm (display controller)
> driver as the consumer.
>
> The sequence is as follows.
> For the producer:
> 1. Call fence_wait to wait for the other users' dma access to complete.
> 2. The producer then creates a fence and a new reservation entry.
> 3. It sets the given dmabuf's resv (reservation_object) in the new
> reservation entry.
> 4. It adds the reservation entry to the entries list.
> 5. It sets the fence on all dmabufs in the entries list, i.e. on the
> reservation_object of each dmabuf.
> 6. Then the producer's dma starts.
> 7. Finally, when the dma has completed, we get the entries list from
> the 3d job command (in the mali case, a pp job) and call
> fence_signal() on each fence of each reservation entry.
>
> Is there anything I'm missing here?

Yeah, more or less. Although you need to wrap everything in ticket
reservation locking so that you can update fences atomically if you
have support for some form of device-to-device signalling (i.e.
without blocking on the cpu until all the old users have completed).
At least that's the main point of Maarten's patches (and this already
works with prime between a few drivers), but of course you can use
cpu blocking as a fallback.
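
To make that concrete, the producer flow looks roughly like this in
pseudo-code (the helper names are made up for illustration, not the
exact symbols from Maarten's series):

	struct fence *fence;
	int ret;

	/* atomically lock the reservation objects of all buffers this
	 * job touches, using the ticket/ww mutex protocol to avoid
	 * deadlocks between concurrent submitters */
	ret = reserve_all_buffers(entries, &ticket);
	if (ret)
		return ret;

	/* create the fence that will signal when our dma completes */
	fence = mali_create_job_fence(job);

	/* attach it to each dmabuf's reservation_object; consumers
	 * that reserve these buffers afterwards will see it and wait */
	list_for_each_entry(entry, entries, head)
		attach_fence(entry->bo->resv, fence);

	/* drop the locks and kick off the hw job */
	unreserve_all_buffers(entries, &ticket);
	mali_submit_job(job);

	/* later, from the job-done interrupt handler */
	fence_signal(fence);

The important bit is that the fences are published on all reservation
objects atomically under the ticket locks, so other drivers never see
a half-updated state.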

> And I thought the fence from the reservation entry at step 7 meant
> that the producer wouldn't access the dmabuf this fence is attached
> to anymore, so that this step wakes up all blocked processes. So I
> understood the fence to represent an owner accessing the given
> dmabuf, and that through the fence's flags we could tell whether the
> owner had committed its own fence to the given dmabuf for reading or
> writing.

The fence doesn't give ownership of the dma_buf object; it only
indicates when the dma access will have completed. The relationship
between the dma_buf/reservation and the attached fences specifies
whether other hw engines can access the dma_buf, too (if the fence is
non-exclusive).
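
In pseudo-code terms (again, illustrative names only): a writer
attaches an exclusive fence, while readers attach shared ones:

	/* producer/writer: everyone else must wait for us */
	reservation_add_excl_fence(dmabuf->resv, fence);

	/* consumer/reader: we wait for the writer, but other readers
	 * can access the buffer in parallel */
	reservation_add_shared_fence(dmabuf->resv, fence);

So before reading, a consumer only needs to wait on the exclusive
fence, whereas a new writer has to wait for the exclusive fence plus
all shared fences before touching the buffer.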

> I'd be happy for any advice you can give.

Rob and Maarten are working on some howtos and documentation with
example code; it's probably best to wait a bit until we have that.
Or review the stuff Rob just posted and reply with questions there.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
