Message-ID: <20130108210035.GS2525@linux.vnet.ibm.com>
Date: Tue, 8 Jan 2013 13:00:35 -0800
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Frederic Weisbecker <fweisbec@...il.com>,
LKML <linux-kernel@...r.kernel.org>,
Alessio Igor Bogani <abogani@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Chris Metcalf <cmetcalf@...era.com>,
Christoph Lameter <cl@...ux.com>,
Geoff Levand <geoff@...radead.org>,
Gilad Ben Yossef <gilad@...yossef.com>,
Hakan Akkan <hakanakkan@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Li Zhong <zhong@...ux.vnet.ibm.com>,
Namhyung Kim <namhyung.kim@....com>,
Paul Gortmaker <paul.gortmaker@...driver.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCH 03/33] cputime: Generic on-demand virtual cputime accounting
On Tue, Jan 08, 2013 at 03:26:11PM -0500, Steven Rostedt wrote:
> On Tue, 2013-01-08 at 03:08 +0100, Frederic Weisbecker wrote:
>
> > diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
> > index c952770..bd461ad 100644
> > --- a/kernel/context_tracking.c
> > +++ b/kernel/context_tracking.c
> > @@ -56,7 +56,7 @@ void user_enter(void)
> > local_irq_save(flags);
> > if (__this_cpu_read(context_tracking.active) &&
> > __this_cpu_read(context_tracking.state) != IN_USER) {
> > - __this_cpu_write(context_tracking.state, IN_USER);
> > + vtime_user_enter(current);
> > /*
> > * At this stage, only low level arch entry code remains and
> > * then we'll run in userspace. We can assume there won't be
> > @@ -65,6 +65,7 @@ void user_enter(void)
> > * on the tick.
> > */
> > rcu_user_enter();
>
> Hmm, the rcu_user_enter() can do quite a bit. Too bad we are accounting
> it as user time. I wonder if we could move the vtime_user_enter() below
> it. But then if vtime_user_enter() calls rcu_read_lock() it breaks.
If RCU_FAST_NO_HZ=y, the current mainline rcu_user_enter() can be a
bit expensive. It is going on a diet for 3.9, however.
But there is a lower limit because the CPU moving to adaptive-tick user
mode must reliably inform other CPUs of this, which involves some
overhead due to memory-ordering issues.
Thanx, Paul
> The notorious chicken vs egg ordeal!
>
> -- Steve
>
> > + __this_cpu_write(context_tracking.state, IN_USER);
> > }
> > local_irq_restore(flags);
> > }
> > @@ -90,12 +91,13 @@ void user_exit(void)
> >
> > local_irq_save(flags);
> > if (__this_cpu_read(context_tracking.state) == IN_USER) {
> > - __this_cpu_write(context_tracking.state, IN_KERNEL);
> > /*
> > * We are going to run code that may use RCU. Inform
> > * RCU core about that (ie: we may need the tick again).
> > */
> > rcu_user_exit();
> > + vtime_user_exit(current);
> > + __this_cpu_write(context_tracking.state, IN_KERNEL);
> > }
> > local_irq_restore(flags);
> > }
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/