Message-ID: <20130204145111.GB5843@phenom.ffwll.local>
Date:	Mon, 4 Feb 2013 15:51:11 +0100
From:	Daniel Vetter <daniel@...ll.ch>
To:	Maarten Lankhorst <maarten.lankhorst@...onical.com>
Cc:	Inki Dae <inki.dae@...sung.com>, linaro-mm-sig@...ts.linaro.org,
	linux-kernel@...r.kernel.org, dri-devel@...ts.freedesktop.org,
	linux-media@...r.kernel.org
Subject: Re: [Linaro-mm-sig] [PATCH 6/7] reservation: cross-device
 reservation support

On Mon, Feb 04, 2013 at 10:57:22AM +0100, Maarten Lankhorst wrote:
> On 04-02-13 08:06, Inki Dae wrote:
> >> +/**
> >> + * ticket_commit - commit a reservation with a new fence
> >> + * @ticket:    [in]    the reservation_ticket returned by
> >> + * ticket_reserve
> >> + * @entries:   [in]    a linked list of struct reservation_entry
> >> + * @fence:     [in]    the fence that indicates completion
> >> + *
> >> + * This function will call reservation_ticket_fini, no need
> >> + * to do it manually.
> >> + *
> >> + * This function should be called after a hardware command submission is
> >> + * completed successfully. The fence is used to indicate completion of
> >> + * those commands.
> >> + */
> >> +void
> >> +ticket_commit(struct reservation_ticket *ticket,
> >> +                 struct list_head *entries, struct fence *fence)
> >> +{
> >> +       struct list_head *cur;
> >> +
> >> +       if (list_empty(entries))
> >> +               return;
> >> +
> >> +       if (WARN_ON(!fence)) {
> >> +               ticket_backoff(ticket, entries);
> >> +               return;
> >> +       }
> >> +
> >> +       list_for_each(cur, entries) {
> >> +               struct reservation_object *bo;
> >> +               bool shared;
> >> +
> >> +               reservation_entry_get(cur, &bo, &shared);
> >> +
> >> +               if (!shared) {
> >> +                       int i;
> >> +                       for (i = 0; i < bo->fence_shared_count; ++i) {
> >> +                               fence_put(bo->fence_shared[i]);
> >> +                               bo->fence_shared[i] = NULL;
> >> +                       }
> >> +                       bo->fence_shared_count = 0;
> >> +                       if (bo->fence_excl)
> >> +                               fence_put(bo->fence_excl);
> >> +
> >> +                       bo->fence_excl = fence;
> >> +               } else {
> >> +                       if (WARN_ON(bo->fence_shared_count >=
> >> +                                   ARRAY_SIZE(bo->fence_shared))) {
> >> +                               mutex_unreserve_unlock(&bo->lock);
> >> +                               continue;
> >> +                       }
> >> +
> >> +                       bo->fence_shared[bo->fence_shared_count++] = fence;
> >> +               }
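
For reference, the calling sequence implied by the kerneldoc above would
look roughly like this sketch; ticket_reserve() is assumed to take the
ticket plus entry list and return 0 on success, and build_entry_list() /
submit_to_hw() are hypothetical driver-side helpers:

/* Rough sketch of the intended reserve/commit/backoff flow. */
static int driver_submit(void)
{
        struct reservation_ticket ticket;
        LIST_HEAD(entries);
        struct fence *fence;
        int ret;

        build_entry_list(&entries);     /* fill with reservation_entry items */

        ret = ticket_reserve(&ticket, &entries);
        if (ret)
                return ret;

        fence = submit_to_hw();         /* completion fence for the commands */
        if (!fence) {
                /* submission failed, drop the reservation again */
                ticket_backoff(&ticket, &entries);
                return -EIO;
        }

        /* attaches the fence and calls reservation_ticket_fini for us */
        ticket_commit(&ticket, &entries, fence);
        return 0;
}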
> > Hi,
> >
> > I have some questions about fence_excl and fence_shared. In the above
> > code, if bo->fence_excl is not NULL then it puts bo->fence_excl and
> > sets the new fence in its place. This seems to mean that whoever
> > committed the new fence wants to access the given dmabuf exclusively,
> > even if someone else is currently accessing it.
> Yes, if there were shared fences, they had to issue a wait for the previous exclusive fence, so if you add
> (possibly hardware) wait ops on those shared fences to your command stream, it follows that you also
> waited for the previous exclusive fence implicitly.
> 
> If there is only an exclusive fence, you have to issue an explicit wait op on it before you start the rest
> of the commands.
> > On the other hand, in the case of fence_shared, someone wants to access
> > that dmabuf non-exclusively. So in this case the given dmabuf could be
> > accessed by two or more devices. So I guess that fence_excl could be
> > used for write access (which may need buffer sync, i.e. blocking) and
> > fence_shared for read access (which may not need buffer sync). I'm not
> > sure I understand these two things correctly, so could you please
> > comment on them?
> Correct, if you create a shared fence, you still have to emit a wait op for the exclusive fence.
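
To make that concrete: waiting until a buffer is safe for exclusive
access only needs the shared fences when any exist, since each of them
already ordered itself after the exclusive one. A minimal sketch,
reusing the reservation_object fields from the patch and assuming a
blocking fence_wait() helper:

/*
 * Sketch only: block until the buffer is idle enough for exclusive
 * access. Assumes bo->lock is held so the fence pointers are stable.
 */
static void wait_for_buffer_idle(struct reservation_object *bo)
{
        int i;

        if (bo->fence_shared_count) {
                /*
                 * Each shared fence was emitted after a wait on the
                 * then-exclusive fence, so waiting on the shared
                 * fences covers the exclusive one implicitly.
                 */
                for (i = 0; i < bo->fence_shared_count; ++i)
                        fence_wait(bo->fence_shared[i]);
        } else if (bo->fence_excl) {
                /* Only an exclusive fence: wait on it explicitly. */
                fence_wait(bo->fence_excl);
        }
}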

Just a quick note: We limit the number of non-exclusive fences to avoid
reallocating memory, which would be a bit of a pain. Otoh if we supported
some form of fence timeline concept, the fence core code could overwrite
redundant fences, which would reasonably limit the total attached fence
count. Still, we'd need to thread potential -ENOMEM failures through all
callers.
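
For illustration, such overwriting could look like the sketch below;
treating fence->context as a timeline identifier is an assumption about
the fence patches, not something the quoted code provides:

/*
 * Hypothetical: if the new shared fence is on the same timeline
 * (context) as an already attached one, the newer fence supersedes
 * the older, so replace it in place instead of consuming another
 * fence_shared[] slot.
 */
static bool shared_fence_replace(struct reservation_object *bo,
                                 struct fence *fence)
{
        int i;

        for (i = 0; i < bo->fence_shared_count; ++i) {
                struct fence *old = bo->fence_shared[i];

                if (old->context == fence->context) {
                        bo->fence_shared[i] = fence;
                        fence_put(old);
                        return true;
                }
        }
        return false;
}

With something like that called from ticket_commit(), fence_shared[]
would only need one slot per timeline touching the buffer, which keeps a
small fixed-size array workable without the -ENOMEM plumbing.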

Do you see a use-case for more than just a few non-exclusive fences?
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch