Message-Id: <20180525201141.GG3803@linux.vnet.ibm.com>
Date: Fri, 25 May 2018 13:11:41 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: LKML <linux-kernel@...r.kernel.org>,
Joel Fernandes <joel@...lfernandes.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>,
Boqun Feng <boqun.feng@...il.com>, byungchul.park@....com,
kernel-team@...roid.com, Josh Triplett <josh@...htriplett.org>,
Lai Jiangshan <jiangshanlai@...il.com>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>
Subject: Re: [PATCH v4] rcu: Speed up calling of RCU tasks callbacks
On Thu, May 24, 2018 at 06:49:46PM -0400, Steven Rostedt wrote:
>
> From: Steven Rostedt (VMware) <rostedt@...dmis.org>
>
> Joel Fernandes found that the synchronize_rcu_tasks() was taking a
> significant amount of time. He demonstrated it with the following test:
>
> # cd /sys/kernel/tracing
> # while [ 1 ]; do x=1; done &
> # echo '__schedule_bug:traceon' > set_ftrace_filter
> # time echo '!__schedule_bug:traceon' > set_ftrace_filter;
>
> real 0m1.064s
> user 0m0.000s
> sys 0m0.004s
>
> Here it takes a little over a second to perform the synchronization,
> because there's a loop that waits 1 second at a time for tasks to pass
> through their quiescent points whenever there's a task that must be
> waited for.
>
> After discussion, we came up with a simple approach: keep waiting for
> holdouts, but start with a short wait and increase the wait time on
> each iteration of the loop, never exceeding a full second.
>
> With the new patch we have:
>
> # time echo '!__schedule_bug:traceon' > set_ftrace_filter;
>
> real 0m0.131s
> user 0m0.000s
> sys 0m0.004s
>
> This drops the wait time down to about 13% of the original.
>
> Link: http://lkml.kernel.org/r/20180523063815.198302-2-joel@joelfernandes.org
> Reported-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> Suggested-by: Joel Fernandes (Google) <joel@...lfernandes.org>
> Signed-off-by: Steven Rostedt (VMware) <rostedt@...dmis.org>
I queued both commits, thank you all!
Thanx, Paul
> ---
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 68fa19a5e7bd..452e47841a86 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -715,6 +715,7 @@ static int __noreturn rcu_tasks_kthread(void *arg)
> struct rcu_head *list;
> struct rcu_head *next;
> LIST_HEAD(rcu_tasks_holdouts);
> + int fract;
>
> /* Run on housekeeping CPUs by default. Sysadm can move if desired. */
> housekeeping_affine(current, HK_FLAG_RCU);
> @@ -796,13 +797,25 @@ static int __noreturn rcu_tasks_kthread(void *arg)
> * holdouts. When the list is empty, we are done.
> */
> lastreport = jiffies;
> - while (!list_empty(&rcu_tasks_holdouts)) {
> +
> + /* Start off with HZ/10 wait and slowly back off to 1 HZ wait. */
> + fract = 10;
> +
> + for (;;) {
> bool firstreport;
> bool needreport;
> int rtst;
> struct task_struct *t1;
>
> - schedule_timeout_interruptible(HZ);
> + if (list_empty(&rcu_tasks_holdouts))
> + break;
> +
> + /* Slowly back off waiting for holdouts */
> + schedule_timeout_interruptible(HZ/fract);
> +
> + if (fract > 1)
> + fract--;
> +
> rtst = READ_ONCE(rcu_task_stall_timeout);
> needreport = rtst > 0 &&
> time_after(jiffies, lastreport + rtst);
>
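For anyone skimming the archive, here is a minimal stand-alone sketch
(not part of the patch) that simulates the back-off schedule implemented
above.  The HZ value and the number of iterations before the holdout
list empties are made-up assumptions purely for illustration; it only
shows how the per-iteration sleep starts at HZ/10 and grows toward a
full second, and how the total wait compares with the old fixed
one-second sleep.

/*
 * Stand-alone illustration (not kernel code) of the back-off schedule
 * used by rcu_tasks_kthread() after this patch.  HZ and the number of
 * iterations before the holdout list empties are assumed values.
 */
#include <stdio.h>

#define HZ 1000			/* assumed tick rate for the example */

int main(void)
{
	int fract = 10;		/* start at HZ/10, as in the patch */
	int iterations = 5;	/* hypothetical time to clear holdouts */
	int total_new = 0, total_old = 0;
	int i;

	for (i = 1; i <= iterations; i++) {
		int wait = HZ / fract;	/* jiffies slept on this pass */

		total_new += wait;
		total_old += HZ;	/* the old loop always slept HZ */
		printf("iteration %d: sleep HZ/%d = %d jiffies\n",
		       i, fract, wait);

		/* Slowly back off toward a full-HZ wait. */
		if (fract > 1)
			fract--;
	}

	printf("new total: %d jiffies, old total: %d jiffies\n",
	       total_new, total_old);
	return 0;
}

Built with a plain "gcc -o backoff backoff.c" and run, it prints one
line per iteration with the sleep length in jiffies, followed by the new
and old totals; the result lines up with the roughly 13% figure above if
the holdouts clear after a single HZ/10 pass.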