Message-ID: <20200717205340.GR7625@magnolia>
Date: Fri, 17 Jul 2020 13:53:40 -0700 (PDT)
From: "Darrick J. Wong" <darrick.wong@...cle.com>
To: Eric Biggers <ebiggers@...nel.org>
Cc: linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
"Paul E . McKenney" <paulmck@...nel.org>,
linux-fsdevel@...r.kernel.org, Akira Yokosawa <akiyks@...il.com>,
Alan Stern <stern@...land.harvard.edu>,
Andrea Parri <parri.andrea@...il.com>,
Boqun Feng <boqun.feng@...il.com>,
Daniel Lustig <dlustig@...dia.com>,
Dave Chinner <david@...morbit.com>,
David Howells <dhowells@...hat.com>,
Jade Alglave <j.alglave@....ac.uk>,
Luc Maranget <luc.maranget@...ia.fr>,
Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Will Deacon <will@...nel.org>
Subject: Re: [PATCH] tools/memory-model: document the "one-time init" pattern
On Thu, Jul 16, 2020 at 09:44:27PM -0700, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@...gle.com>
>
> The "one-time init" pattern is implemented incorrectly in various places
> in the kernel. And when people do try to implement it correctly, it is
> unclear what to use. Try to give some proper guidance.
>
> This is motivated by the discussion at
> https://lkml.kernel.org/linux-fsdevel/20200713033330.205104-1-ebiggers@kernel.org/T/#u
> regarding fixing the initialization of super_block::s_dio_done_wq.
>
> Cc: Akira Yokosawa <akiyks@...il.com>
> Cc: Alan Stern <stern@...land.harvard.edu>
> Cc: Andrea Parri <parri.andrea@...il.com>
> Cc: Boqun Feng <boqun.feng@...il.com>
> Cc: Daniel Lustig <dlustig@...dia.com>
> Cc: Darrick J. Wong <darrick.wong@...cle.com>
> Cc: Dave Chinner <david@...morbit.com>
> Cc: David Howells <dhowells@...hat.com>
> Cc: Jade Alglave <j.alglave@....ac.uk>
> Cc: Luc Maranget <luc.maranget@...ia.fr>
> Cc: Nicholas Piggin <npiggin@...il.com>
> Cc: Paul E. McKenney <paulmck@...nel.org>
> Cc: Peter Zijlstra <peterz@...radead.org>
> Cc: Will Deacon <will@...nel.org>
> Signed-off-by: Eric Biggers <ebiggers@...gle.com>
> ---
> tools/memory-model/Documentation/recipes.txt | 151 +++++++++++++++++++
> 1 file changed, 151 insertions(+)
>
> diff --git a/tools/memory-model/Documentation/recipes.txt b/tools/memory-model/Documentation/recipes.txt
> index 7fe8d7aa3029..04beb06dbfc7 100644
> --- a/tools/memory-model/Documentation/recipes.txt
> +++ b/tools/memory-model/Documentation/recipes.txt
> @@ -519,6 +519,157 @@ CPU1 puts the waiting task to sleep and CPU0 fails to wake it up.
>
> Note that use of locking can greatly simplify this pattern.
>
> +One-time init
> +-------------
> +
> +In the "one-time init" pattern, multiple tasks can race to initialize
> +the same data structure(s) on first use.
> +
> +In many cases, it's best to avoid the need for this by simply
> +initializing the data ahead of time.
> +
> +But in cases where the data would often go unused, one-time init can be
> +appropriate to avoid wasting kernel resources. It can also be
> +appropriate if the initialization has other prerequisites which preclude
> +it being done ahead of time.
> +
> +First, consider whether your data (a) has global or static scope,
> +(b) can be initialized from atomic context, and (c) cannot fail to be
> +initialized.  If all of those apply, just use DO_ONCE() from
> +<linux/once.h>:
> +
> + DO_ONCE(func);
> +
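(As an aside: the userspace analogue of DO_ONCE() is pthread_once(),
which might help readers coming from userspace.  A minimal sketch; the
init_calls counter is instrumentation for the sketch only:)

```c
#include <pthread.h>

/* Userspace analogue of DO_ONCE(): pthread_once() guarantees that
 * func() runs exactly once, however many threads race into
 * init_foo() concurrently. */
static pthread_once_t foo_once = PTHREAD_ONCE_INIT;
static int init_calls;	/* instrumentation for the sketch only */

static void func(void)
{
	init_calls++;
}

void init_foo(void)
{
	pthread_once(&foo_once, func);
}
```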
> +If that doesn't apply, you'll have to implement one-time init yourself.
> +
> +The simplest implementation just uses a mutex and an 'inited' flag.
> +This implementation should be used where feasible:
> +
> + static bool foo_inited;
> + static DEFINE_MUTEX(foo_init_mutex);
> +
> + int init_foo_if_needed(void)
> + {
> + int err = 0;
> +
> + mutex_lock(&foo_init_mutex);
> + if (!foo_inited) {
> + err = init_foo();
> + if (err == 0)
> + foo_inited = true;
> + }
> + mutex_unlock(&foo_init_mutex);
> + return err;
> + }
> +
> +The above example uses static variables, but this solution also works
> +for initializing something that is part of another data structure. The
> +mutex may still be static.
> +
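(For what it's worth, this recipe translates directly to userspace
with a pthread mutex.  A sketch, where init_foo() is a stand-in for
the real, possibly-failing initialization:)

```c
#include <pthread.h>
#include <stdbool.h>

static bool foo_inited;
static pthread_mutex_t foo_init_mutex = PTHREAD_MUTEX_INITIALIZER;
static int init_calls;	/* instrumentation for the sketch only */

/* Stand-in for the real initialization; always succeeds here. */
static int init_foo(void)
{
	init_calls++;
	return 0;
}

int init_foo_if_needed(void)
{
	int err = 0;

	/* The mutex both serializes initializers and publishes the
	 * initialized data to later callers. */
	pthread_mutex_lock(&foo_init_mutex);
	if (!foo_inited) {
		err = init_foo();
		if (err == 0)
			foo_inited = true;
	}
	pthread_mutex_unlock(&foo_init_mutex);
	return err;
}
```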
> +In cases where taking the mutex in the "already initialized" case
> +presents scalability concerns, the implementation can be optimized to
> +check the 'inited' flag outside the mutex. Unfortunately, this
> +optimization is often implemented incorrectly by using a plain load.
> +That violates the memory model and may result in unpredictable behavior.
> +
> +A correct implementation is:
> +
> + static bool foo_inited;
> + static DEFINE_MUTEX(foo_init_mutex);
> +
> + int init_foo_if_needed(void)
> + {
> + int err = 0;
> +
> + /* pairs with smp_store_release() below */
> + if (smp_load_acquire(&foo_inited))
> + return 0;
> +
> + mutex_lock(&foo_init_mutex);
> + if (!foo_inited) {
> + err = init_foo();
> + if (err == 0) /* pairs with smp_load_acquire() above */
> + smp_store_release(&foo_inited, true);
> + }
> + mutex_unlock(&foo_init_mutex);
> + return err;
> + }
> +
> +If only a single data structure is being initialized, then the pointer
> +itself can take the place of the 'inited' flag:
> +
> + static struct foo *foo;
> + static DEFINE_MUTEX(foo_init_mutex);
> +
> + int init_foo_if_needed(void)
> + {
> + int err = 0;
> +
> + /* pairs with smp_store_release() below */
> + if (smp_load_acquire(&foo))
> + return 0;
> +
> + mutex_lock(&foo_init_mutex);
> + if (!foo) {
> + struct foo *p = alloc_foo();
> +
> + if (p) /* pairs with smp_load_acquire() above */
> + smp_store_release(&foo, p);
> + else
> + err = -ENOMEM;
> + }
> + mutex_unlock(&foo_init_mutex);
> + return err;
> + }
> +
> +There are also cases in which the smp_load_acquire() can be replaced by
> +the more lightweight READ_ONCE(). (smp_store_release() is still
> +required.) Specifically, if all initialized memory is transitively
> +reachable from the pointer itself, then there is no control dependency
I don't quite understand what "transitively reachable from the pointer
itself" means? Does that describe the situation where all the objects
reachable through the object that the global struct foo pointer points
at are /only/ reachable via that global pointer?
Aside from that question, I very much like having this recipe in the
kernel documentation. Thank you for putting this together!
--D
> +so the data dependency barrier provided by READ_ONCE() is sufficient.
> +
> +However, using the READ_ONCE() optimization is discouraged for
> +nontrivial data structures, as it can be difficult to determine if there
> +is a control dependency. For complex data structures it may depend on
> +internal implementation details of other kernel subsystems.
> +
> +For the single-pointer case, a further optimized implementation
> +eliminates the mutex and instead uses compare-and-exchange:
> +
> + static struct foo *foo;
> +
> + int init_foo_if_needed(void)
> + {
> + struct foo *p;
> +
> + /* pairs with successful cmpxchg_release() below */
> + if (smp_load_acquire(&foo))
> + return 0;
> +
> + p = alloc_foo();
> + if (!p)
> + return -ENOMEM;
> +
> + /* on success, pairs with smp_load_acquire() above and below */
> + if (cmpxchg_release(&foo, NULL, p) != NULL) {
> + free_foo(p);
> + /* pairs with successful cmpxchg_release() above */
> + smp_load_acquire(&foo);
> + }
> + return 0;
> + }
> +
> +Note that when the cmpxchg_release() fails due to another task already
> +having done it, a second smp_load_acquire() is required, since we still
> +need to acquire the data that the other task released. You may be
> +tempted to upgrade cmpxchg_release() to cmpxchg() with the goal of it
> +acting as both ACQUIRE and RELEASE, but that doesn't work here because
> +cmpxchg() only guarantees memory ordering if it succeeds.
> +
> +Because of the above subtlety, the version with the mutex instead of
> +cmpxchg_release() should be preferred, except potentially in cases where
> +it is difficult to provide anything other than a global mutex and where
> +the one-time data is part of a frequently allocated structure. In that
> +case, a global mutex might present scalability concerns.
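(And a userspace rendering of the compare-and-exchange version, for
reference: atomic_compare_exchange_strong_explicit() with
memory_order_release on success stands in for cmpxchg_release(), and
struct foo / alloc_foo() are stand-ins.  Note the acquire load on the
failure path, for exactly the reason given above:)

```c
#include <stdatomic.h>
#include <stdlib.h>

struct foo { int x; };

static _Atomic(struct foo *) foo_ptr;

static struct foo *alloc_foo(void)
{
	return calloc(1, sizeof(struct foo));
}

int init_foo_if_needed(void)
{
	struct foo *p, *expected = NULL;

	/* fast path: pairs with the successful release CAS below */
	if (atomic_load_explicit(&foo_ptr, memory_order_acquire))
		return 0;

	p = alloc_foo();
	if (!p)
		return -1;	/* would be -ENOMEM in the kernel */

	if (!atomic_compare_exchange_strong_explicit(&foo_ptr, &expected, p,
						     memory_order_release,
						     memory_order_relaxed)) {
		/* Lost the race: discard our copy, then acquire the
		 * data published by the winner's release CAS. */
		free(p);
		(void)atomic_load_explicit(&foo_ptr, memory_order_acquire);
	}
	return 0;
}
```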
>
> Rules of thumb
> ==============
> --
> 2.27.0
>