Message-ID: <20190721081933-mutt-send-email-mst@kernel.org>
Date: Sun, 21 Jul 2019 08:28:05 -0400
From: "Michael S. Tsirkin" <mst@...hat.com>
To: paulmck@...ux.vnet.ibm.com
Cc: aarcange@...hat.com, akpm@...ux-foundation.org,
christian@...uner.io, davem@...emloft.net, ebiederm@...ssion.com,
elena.reshetova@...el.com, guro@...com, hch@...radead.org,
james.bottomley@...senpartnership.com, jasowang@...hat.com,
jglisse@...hat.com, keescook@...omium.org, ldv@...linux.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-mm@...ck.org, linux-parisc@...r.kernel.org,
luto@...capital.net, mhocko@...e.com, mingo@...nel.org,
namit@...are.com, peterz@...radead.org,
syzkaller-bugs@...glegroups.com, viro@...iv.linux.org.uk,
wad@...omium.org
Subject: RFC: call_rcu_outstanding (was Re: WARNING in __mmdrop)

Hi Paul, others,

So it seems that vhost needs to call kfree_rcu from an ioctl. My worry
is what happens if userspace starts cycling through lots of these
ioctls. Given that we actually use RCU as an optimization, we could just
disable the optimization temporarily - but the question is how to
detect an excessive rate without working too hard :).

I guess we could define as excessive any rate at which a callback is
still outstanding at the time a new structure is allocated. I have very
little understanding of RCU internals - so I wanted to check that the
following more or less implements this heuristic before I spend time
actually testing it.
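
For illustration only (this is not part of the patch, and the struct
and the free_foo() helper below are made up, not actual vhost code),
the kind of caller-side fallback I have in mind looks roughly like
this:

struct foo {
	struct rcu_head rcu;
	/* ... payload ... */
};

static void free_foo(struct foo *old)
{
	if (call_rcu_outstanding()) {
		/*
		 * A callback from an earlier free is still pending on
		 * this CPU, so treat the rate as excessive: disable the
		 * RCU optimization and wait for a grace period
		 * synchronously before freeing.
		 */
		synchronize_rcu();
		kfree(old);
	} else {
		/* Normal case: defer the free until after a grace period. */
		kfree_rcu(old, rcu);
	}
}
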
Could others please take a look and let me know?

Thanks!

Signed-off-by: Michael S. Tsirkin <mst@...hat.com>

diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 477b4eb44af5..067909521d72 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -125,6 +125,23 @@ void synchronize_rcu(void)
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu);
 
+/*
+ * Helpful for rate-limiting kfree_rcu()/call_rcu() callbacks: returns
+ * true if this CPU still has callbacks waiting for a grace period.
+ */
+bool call_rcu_outstanding(void)
+{
+	unsigned long flags;
+	bool outstanding;
+
+	local_irq_save(flags);
+	outstanding = rcu_ctrlblk.donetail != rcu_ctrlblk.curtail;
+	local_irq_restore(flags);
+
+	return outstanding;
+}
+EXPORT_SYMBOL_GPL(call_rcu_outstanding);
+
 /*
  * Post an RCU callback to be invoked after the end of an RCU grace
  * period. But since we have but one CPU, that would be after any
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index a14e5fbbea46..d4b9d61e637d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2482,6 +2482,25 @@ static void rcu_leak_callback(struct rcu_head *rhp)
 {
 }
 
+/*
+ * Helpful for rate-limiting kfree_rcu()/call_rcu() callbacks: returns
+ * true if this CPU still has callbacks queued.
+ */
+bool call_rcu_outstanding(void)
+{
+	unsigned long flags;
+	struct rcu_data *rdp;
+	bool outstanding;
+
+	local_irq_save(flags);
+	rdp = this_cpu_ptr(&rcu_data);
+	outstanding = !rcu_segcblist_empty(&rdp->cblist);
+	local_irq_restore(flags);
+
+	return outstanding;
+}
+EXPORT_SYMBOL_GPL(call_rcu_outstanding);
+
 /*
  * Helper function for call_rcu() and friends. The cpu argument will
  * normally be -1, indicating "currently running CPU". It may specify