Date:	Thu, 10 Apr 2014 12:07:31 +0200
From:	Maarten Lankhorst <maarten.lankhorst@...onical.com>
To:	Thomas Hellstrom <thellstrom@...are.com>
CC:	linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
	dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org,
	ccross@...gle.com, linux-media@...r.kernel.org
Subject: Re: [PATCH 2/2] [RFC] reservation: add support for read-only access
 using rcu

Hey,

On 10-04-14 10:46, Thomas Hellstrom wrote:
> Hi!
>
> Ugh. This became more complicated than I thought, but I'm OK with moving
> TTM over to fence while we sort out
> how / if we're going to use this.
>
> While reviewing, it struck me that this is kind of error-prone, and hard
> to follow since we're operating on a structure that may be
> continually updated under us, needing a lot of RCU-specific macros and
> barriers.
Yeah, but with the exception of dma_buf_poll I don't think anything outside
drivers/base/reservation.c has to deal with RCU.

> Also the rcu wait appears to not complete until there are no busy fences
> left (new ones can be added while we wait) rather than
> waiting on a snapshot of busy fences.
This is by design: 'wait for bo idle' style functions only care whether the
bo is completely idle or not.

It would be easy to make a snapshot even without seqlocks: just copy
reservation_object_test_signaled_rcu so it returns the shared list if
test_all is set, or a pointer to the exclusive fence otherwise.
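
For illustration, something along these lines; a sketch only, where the
obj->fence_excl field and fence_get_rcu() follow the fence/reservation RFC
and everything else is made up:

/*
 * Sketch: grab a referenced copy of the exclusive fence under RCU only,
 * modeled on reservation_object_test_signaled_rcu. A test_all variant
 * would similarly copy the shared list before taking references.
 */
static struct fence *reservation_object_get_excl_rcu(struct reservation_object *obj)
{
	struct fence *excl;

	rcu_read_lock();
	do {
		excl = rcu_dereference(obj->fence_excl);
		if (!excl)
			break;
		/* fence_get_rcu() fails if the fence is already being
		 * freed; reload the pointer and try again */
	} while (!fence_get_rcu(excl));
	rcu_read_unlock();

	return excl;
}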

> I wonder if these issues can be addressed by having a function that
> provides a snapshot of all busy fences: This can be accomplished
> either by including the exclusive fence in the fence_list structure and
> allocating a new such structure each time it is updated. The RCU reader
> could then just make a copy of the current fence_list structure pointed
> to by &obj->fence, but I'm not sure we want to reallocate *each* time we
> update the fence pointer.
No, the most common operation is updating fence pointers, which is why the
current design makes that cheap. It's also why RCU reads are more expensive.
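
For illustration, here's roughly what the reallocate-per-update variant would
look like; a sketch, not code from the RFC, with struct fence_list's layout
(excl, shared_count, shared[], an rcu head) assumed:

/*
 * Every update allocates a fresh fence_list and publishes it with
 * rcu_assign_pointer(), so a reader can copy one consistent structure.
 * The cost sits right in the hot path: one kmalloc() per fence update.
 */
static int reservation_object_replace_fences(struct reservation_object *obj,
					     struct fence *excl,
					     struct fence **shared,
					     u32 shared_count)
{
	struct fence_list *new, *old;

	new = kmalloc(offsetof(struct fence_list, shared[shared_count]),
		      GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	new->excl = excl;
	new->shared_count = shared_count;
	memcpy(new->shared, shared, shared_count * sizeof(*shared));

	old = rcu_dereference_protected(obj->fence,
					lockdep_is_held(&obj->lock.base));
	rcu_assign_pointer(obj->fence, new);
	if (old)
		kfree_rcu(old, rcu); /* assumes a struct rcu_head 'rcu' member */
	return 0;
}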
> The other approach uses a seqlock to obtain a consistent snapshot; I've
> attached an incomplete outline, and I'm not 100% sure whether it's OK to
> combine RCU and seqlocks in this way...
>
> Both these approaches have the benefit of hiding the RCU snapshotting in
> a single function, that can then be used by any waiting
> or polling function.
>

I think the middle way could work for our use case: use seqlocks to protect
the fence_excl pointer and shared list combination, and RCU to protect the
fence refcounts and the availability of the list. That might remove a bunch
of memory barriers. But yeah, that depends on layering RCU and seqlocks; no
idea if that is allowed, but I suppose it is.

Also, you're being overly paranoid with the seqlock reading; we would only
need something like this:

rcu_read_lock();
preempt_disable();
retry:
	/* obj->seq: the seqcount protecting the fence pointers */
	seq = read_seqcount_begin(&obj->seq);
	fence_excl = ACCESS_ONCE(obj->fence_excl);
	shared_count = ACCESS_ONCE(fence->shared_count);
	/* copy the shared fence pointers into a local struct */
	if (read_seqcount_retry(&obj->seq, seq))
		goto retry;
preempt_enable();
/* use fence_get_rcu() to bump the refcount on everything;
 * if that fails, put the refs taken so far and retry */
rcu_read_unlock();

But the shared list would still need to be RCU'd, to make sure we're not reading freed garbage.
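
To make the retry concrete, the refcount-bump step could look something like
this (a sketch; fence_get_rcu()/fence_put() are from the fence series, the
helper itself is made up):

/*
 * Take references on a snapshotted array of shared fences, inside
 * rcu_read_lock(). If any fence is already on its way to being freed,
 * drop the refs taken so far and report failure; the caller then
 * retries the whole seqcount snapshot.
 */
static bool fences_get_refs(struct fence **fences, u32 count)
{
	u32 i;

	for (i = 0; i < count; ++i) {
		if (fence_get_rcu(fences[i]))
			continue;
		while (i--)
			fence_put(fences[i]);
		return false;
	}
	return true;
}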

~Maarten

