Message-ID: <CAF6AEGvP1+7BKo7+oCj4XBBw32NPjrH5EAZuodu2zb8oiyVP_Q@mail.gmail.com>
Date: Fri, 13 Jul 2012 13:52:00 -0500
From: Rob Clark <rob.clark@...aro.org>
To: Tom Cooksey <tom.cooksey@....com>
Cc: dri-devel@...ts.freedesktop.org, linux-media@...r.kernel.org,
linaro-mm-sig@...ts.linaro.org, patches@...aro.org,
daniel.vetter@...ll.ch, linux-kernel@...r.kernel.org,
maarten.lankhorst@...onical.com, sumit.semwal@...aro.org
Subject: Re: [RFC] dma-fence: dma-buf synchronization (v2)
On Fri, Jul 13, 2012 at 12:35 PM, Tom Cooksey <tom.cooksey@....com> wrote:
> My other thought is around atomicity. Could this be extended to
> (safely) allow for hardware devices which might want to access
> multiple buffers simultaneously? I think it probably can with
> some tweaks to the interface? An atomic function which does
> something like "give me all the fences for all these buffers
> and add this fence to each instead/as-well-as"?
fwiw, what I'm leaning towards right now is combining dma-fence w/
Maarten's idea of dma-buf-mgr (not sure if you saw his patches?), and
letting dmabufmgr handle the multi-buffer reservation stuff. Possibly
also the read vs write access, although I'm not 100% sure on that...
the other option being the concept of read vs write (or
exclusive/non-exclusive) fences.
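To make that a bit more concrete, here's roughly the shape I have in
mind. Note these names are all made up for illustration, don't read
them as the actual API from Maarten's patches:

    /* hypothetical sketch only -- invented names, refcounting and
     * locking omitted */
    #include <linux/list.h>
    #include <linux/dma-buf.h>

    struct dmabufmgr_validate {
            struct list_head head;
            struct dma_buf *buf;
            struct dma_fence *cur_fence; /* fence to sync to first */
    };

    /*
     * Reserve every buffer a job will touch as one atomic operation
     * (internally taking them in a global order, so two jobs
     * reserving overlapping sets of buffers can't deadlock), and
     * hand back the fence currently attached to each buffer.
     */
    int dmabufmgr_reserve_buffers(struct list_head *list);

    /* attach the job's fence to all reserved buffers, unreserve */
    void dmabufmgr_fence_buffer_objects(struct dma_fence *fence,
                                        struct list_head *list);

    /* error path: drop reservations without attaching a fence */
    void dmabufmgr_backoff_reservation(struct list_head *list);

So a driver's submit path would build the list, reserve, wait on (or
hw-sync with) each cur_fence, kick the job, and then fence the whole
set with the job's completion fence. That gives you the "give me all
the fences for all these buffers and add this fence to each"
operation without the fence itself having to know anything about it.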
In its current state, the fence is quite simple and doesn't care
*what* it is fencing, which seems advantageous once you get into
combinations of devices sharing buffers, some of which can do hw sync
and some of which can't. So keeping the fence partitioned off from
the code that sequences who can access the buffers, when, and for
what purpose seems like it might not be a bad idea. Although I'm
still working through the different alternatives.
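For the exclusive/non-exclusive option, the bookkeeping could live
next to the buffer rather than in the fence, something like this
(again just an illustrative sketch, not code from any posted patch):

    /* illustrative sketch only -- not from any posted patch */
    #define MAX_SHARED_FENCES 8

    struct bufmgr_sync_state {
            struct dma_fence *excl;  /* last writer, if any */
            unsigned int num_shared;
            struct dma_fence *shared[MAX_SHARED_FENCES]; /* readers */
    };

    /*
     * A new reader waits only on the exclusive fence (the last
     * write) and then adds its own fence to the shared list; a new
     * writer waits on the exclusive fence *and* all shared fences,
     * then replaces the exclusive fence with its own and empties
     * the shared list.
     */

The nice property there is that the fence itself stays dumb either
way; all of the read vs write policy lives in the per-buffer state.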
BR,
-R