Date:   Wed, 21 Aug 2019 11:10:08 -0700
From:   Stephen Boyd <sboyd@...nel.org>
To:     Bjorn Andersson <bjorn.andersson@...aro.org>,
        Michael Turquette <mturquette@...libre.com>
Cc:     linux-clk@...r.kernel.org, linux-kernel@...r.kernel.org,
        linux-arm-msm@...r.kernel.org, Rob Clark <robclark@...il.com>,
        Sean Paul <seanpaul@...omium.org>
Subject: Re: [RFC] clk: Remove cached cores in parent map during unregister

Quoting Stephen Boyd (2019-07-29 15:46:51)
> Quoting Bjorn Andersson (2019-07-22 22:14:46)
> > As clocks are registered, their parents are resolved and the parent_map
> > is updated to cache the clk_core objects of each existing parent.
> > But if a clock is unregistered, this cache will carry dangling pointers
> > unless it is invalidated, so invalidate it for all children of the
> > clock being unregistered.
> > 
> > Signed-off-by: Bjorn Andersson <bjorn.andersson@...aro.org>
> > ---
> > 
> > This resolves the issue seen where the DSI PLL (and the clocks it provides) is
> > being registered and unregistered multiple times due to probe deferral.
> > 
> > Marking it RFC because I don't fully understand the life of the clock yet.
> 
> The concept sounds sane, but the implementation is not going to be much
> fun. The problem is that a clk can be in many different parent caches,
> even ones for clks that aren't currently parented to it. We would need
> to walk the entire tree(s) and find anywhere that we've cached the
> clk_core pointer and invalidate it. Maybe we can speed that up a little
> bit by keeping a reference to the entry of each parent cache that is for
> the parent we're removing, essentially holding an inverse cache, but I'm
> not sure it will provide any benefit besides wasting space for this one
> operation that we shouldn't be doing very often if at all.
> 
> It certainly sounds easier to iterate through the whole tree and just
> invalidate entries in all the caches under the prepare lock. We can
> optimize it later.
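
The "inverse cache" idea mentioned above could look roughly like the
user-space sketch below. It is only an illustration under simplified,
made-up types: struct fake_core, struct parent_slot, cache_parent() and
evict_via_back_refs() are all invented here and are not part of the kernel
clk framework. The trade-off noted in the quote shows up as the extra
back-reference bookkeeping every cached slot has to carry.

/*
 * Model of the inverse-cache alternative: every cached parent pointer
 * also links itself onto the parent's back-reference list, so an
 * unregister can invalidate exactly the slots that point at it instead
 * of walking every clock tree.  All names are invented for this sketch.
 */
#include <stdio.h>

struct fake_core;

/* One cached parent slot, i.e. one entry of a child's parent map. */
struct parent_slot {
	struct fake_core *core;		/* cached parent, may go stale */
	struct parent_slot *next_ref;	/* link on the parent's back-ref list */
};

struct fake_core {
	const char *name;
	struct parent_slot parents[4];	/* this clock's parent cache */
	int num_parents;
	struct parent_slot *back_refs;	/* slots elsewhere that cache us */
};

/* Cache @parent in slot @idx of @child and record the back-reference. */
static void cache_parent(struct fake_core *child, int idx,
			 struct fake_core *parent)
{
	struct parent_slot *slot = &child->parents[idx];

	slot->core = parent;
	slot->next_ref = parent->back_refs;
	parent->back_refs = slot;
}

/* On unregister, clear every slot that still points at @core. */
static void evict_via_back_refs(struct fake_core *core)
{
	struct parent_slot *slot = core->back_refs;

	while (slot) {
		struct parent_slot *next = slot->next_ref;

		slot->core = NULL;
		slot->next_ref = NULL;
		slot = next;
	}
	core->back_refs = NULL;
}

int main(void)
{
	struct fake_core pll = { .name = "dsi_pll" };
	struct fake_core byteclk = { .name = "byteclk", .num_parents = 1 };

	cache_parent(&byteclk, 0, &pll);
	evict_via_back_refs(&pll);	/* "unregister" the PLL */

	printf("byteclk parent cache: %p\n", (void *)byteclk.parents[0].core);
	return 0;
}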

Here's an attempt at the simple approach. There's another problem where
the cached 'hw' member of the parent data is kept around even though we
don't know when the caller has destroyed it. Not much else we can do
about that, though.

---8<---
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index c0990703ce54..f42a803fb11a 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -3737,6 +3737,37 @@ static const struct clk_ops clk_nodrv_ops = {
 	.set_parent	= clk_nodrv_set_parent,
 };
 
+/* Clear any cached pointer to @target throughout the subtree rooted at @root */
+static void clk_core_evict_parent_cache_subtree(struct clk_core *root,
+						struct clk_core *target)
+{
+	int i;
+	struct clk_core *child;
+
+	if (!root)
+		return;
+
+	for (i = 0; i < root->num_parents; i++)
+		if (root->parents[i].core == target)
+			root->parents[i].core = NULL;
+
+	hlist_for_each_entry(child, &root->children, child_node)
+		clk_core_evict_parent_cache_subtree(child, target);
+}
+
+/* Remove this clk from all parent caches */
+static void clk_core_evict_parent_cache(struct clk_core *core)
+{
+	struct hlist_head **lists;
+	struct clk_core *root;
+
+	lockdep_assert_held(&prepare_lock);
+
+	for (lists = all_lists; *lists; lists++)
+		hlist_for_each_entry(root, *lists, child_node)
+			clk_core_evict_parent_cache_subtree(root, core);
+}
+
 /**
  * clk_unregister - unregister a currently registered clock
  * @clk: clock to unregister
@@ -3775,6 +3806,8 @@ void clk_unregister(struct clk *clk)
 			clk_core_set_parent_nolock(child, NULL);
 	}
 
+	clk_core_evict_parent_cache(clk->core);
+
 	hlist_del_init(&clk->core->child_node);
 
 	if (clk->core->prepare_count)
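
For context, the failure mode the patch guards against can be modeled
outside the kernel. The sketch below uses invented types and helpers
(struct toy_clk, evict_subtree()) rather than the real clk framework;
evict_subtree() mirrors the recursive walk in
clk_core_evict_parent_cache_subtree() above. Without that step, byteclk
would keep pointing into freed memory after the PLL goes away on probe
deferral.

/*
 * Standalone model of the problem and the fix: a parent clock is freed
 * while a child still caches a pointer to it.  All names here are
 * invented for the sketch; none of this is kernel code.
 */
#include <stdio.h>
#include <stdlib.h>

#define MAX_KIDS 4

struct toy_clk {
	const char *name;
	struct toy_clk *parents[2];	/* cached parent pointers */
	int num_parents;
	struct toy_clk *children[MAX_KIDS];
	int num_children;
};

/* Clear every cached pointer to @target in the subtree rooted at @root. */
static void evict_subtree(struct toy_clk *root, struct toy_clk *target)
{
	int i;

	if (!root)
		return;

	for (i = 0; i < root->num_parents; i++)
		if (root->parents[i] == target)
			root->parents[i] = NULL;

	for (i = 0; i < root->num_children; i++)
		evict_subtree(root->children[i], target);
}

int main(void)
{
	/* Tree: root -> dsi_pll -> byteclk, with byteclk caching the PLL. */
	struct toy_clk root = { .name = "root" };
	struct toy_clk *pll = calloc(1, sizeof(*pll));
	struct toy_clk byteclk = { .name = "byteclk", .num_parents = 1 };

	if (!pll)
		return 1;

	pll->name = "dsi_pll";
	byteclk.parents[0] = pll;
	pll->children[pll->num_children++] = &byteclk;
	root.children[root.num_children++] = pll;

	/* "Unregister" the PLL: evict it from all caches, then free it. */
	evict_subtree(&root, pll);
	root.children[0] = NULL;
	root.num_children = 0;
	free(pll);

	/* Without the eviction this would print a dangling pointer. */
	printf("byteclk cached parent: %p\n", (void *)byteclk.parents[0]);
	return 0;
}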
