Message-ID: <20150507005904.GA22006@lerouge>
Date: Thu, 7 May 2015 02:59:05 +0200
From: Frederic Weisbecker <fweisbec@...il.com>
To: Rik van Riel <riel@...hat.com>
Cc: paulmck@...ux.vnet.ibm.com, Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, williams@...hat.com,
Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
Peter Zijlstra <peterz@...radead.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: question about RCU dynticks_nesting
On Mon, May 04, 2015 at 04:53:16PM -0400, Rik van Riel wrote:
> On 05/04/2015 04:38 PM, Paul E. McKenney wrote:
> > On Mon, May 04, 2015 at 04:13:50PM -0400, Rik van Riel wrote:
> >> On 05/04/2015 04:02 PM, Paul E. McKenney wrote:
>
> >>> Hmmm... But didn't earlier performance measurements show that the bulk of
> >>> the overhead was the delta-time computations rather than RCU accounting?
> >>
> >> The bulk of the overhead was disabling and re-enabling
> >> irqs around the calls to rcu_user_exit and rcu_user_enter :)
> >
> > Really??? OK... How about software irq masking? (I know, that is
> > probably a bit of a scary change as well.)
> >
> >> Of the remaining time, about 2/3 seems to be the vtime
> >> stuff, and the other 1/3 the rcu code.
> >
> > OK, worth some thought, then.
> >
> >> I suspect it makes sense to optimize both, though the
> >> vtime code may be the easiest :)
> >
> > Making a crude version that does jiffies (or whatever) instead of
> > fine-grained computations might give good bang for the buck. ;-)
>
> Ingo's idea is to simply have cpu 0 check the current task
> on all other CPUs, see whether that task is running in system
> mode, user mode, guest mode, irq mode, etc and update that
> task's vtime accordingly.
>
> I suspect the runqueue lock is probably enough to do that,
> and between rcu state and PF_VCPU we probably have enough
> information to see what mode the task is running in, with
> just remote memory reads.
Note that we could significantly reduce the overhead of vtime accounting
by only accumulating utime/stime in per-CPU buffers, and actually accounting
them on context switch or on task_cputime() calls. That way we remove the
overhead of the account_user/system_time() functions and the vtime locks.
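A rough userspace sketch of that buffering scheme (all names here are illustrative, not the actual kernel API): deltas are summed locklessly into a local buffer on the hot path, and only folded into the task's visible counters at flush points.

```c
/* Illustrative sketch, not kernel code: accumulate vtime deltas in a
 * per-CPU buffer and fold them into the task only at flush points
 * (context switch, task_cputime()), avoiding the per-event cost of
 * account_user_time()/account_system_time() and the vtime locks. */
#include <assert.h>

struct vtime_buf {
	unsigned long long utime;	/* user time since last flush */
	unsigned long long stime;	/* system time since last flush */
};

struct task_acct {
	unsigned long long utime;	/* visible value, updated on flush only */
	unsigned long long stime;
};

/* Hot path: a plain local add, no locks, no accounting call. */
static void vtime_account_user(struct vtime_buf *buf, unsigned long long delta)
{
	buf->utime += delta;
}

static void vtime_account_system(struct vtime_buf *buf, unsigned long long delta)
{
	buf->stime += delta;
}

/* Slow path: fold the buffered deltas into the task at context switch,
 * or when task_cputime() needs an up-to-date value. */
static void vtime_flush(struct task_acct *t, struct vtime_buf *buf)
{
	t->utime += buf->utime;
	t->stime += buf->stime;
	buf->utime = buf->stime = 0;
}
```

The point of the split is that readers of the task counters only pay the folding cost at flush time, while the per-event path stays a single addition.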
But doing the accounting from CPU 0, by just charging one tick to the context
we remotely observe, would certainly reduce the local accounting overhead to the
strict minimum. And I don't think we even need to take the rq lock for that; we
can live with some lack of precision.
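The remote-sampling side could look roughly like this (again purely illustrative names; the real kernel would classify the context from remotely readable state such as the RCU dynticks state and PF_VCPU, as Rik notes above):

```c
/* Illustrative sketch, not kernel code: a housekeeping CPU walks the
 * other CPUs once per tick, reads which task is current and what
 * context it appears to be in (user, system, guest), and charges one
 * tick to that bucket -- lockless remote reads, tolerating a tick of
 * imprecision. */
#include <assert.h>

enum ctx { CTX_USER, CTX_SYSTEM, CTX_GUEST, CTX_MAX };

struct remote_task {
	enum ctx ctx;				/* context as observed remotely */
	unsigned long long ticks[CTX_MAX];	/* per-context tick counters */
};

#define NR_CPUS 4
static struct remote_task *curr_on[NR_CPUS];	/* current task per CPU */

/* Runs on the housekeeping CPU (CPU 0) each tick: one remote read and
 * one increment per CPU, no locks taken on the remote runqueues. */
static void account_remote_ticks(void)
{
	for (int cpu = 1; cpu < NR_CPUS; cpu++) {
		struct remote_task *t = curr_on[cpu];

		if (t)
			t->ticks[t->ctx]++;
	}
}
```

This makes the trade-off in the paragraph above concrete: the sampled task might have switched context between ticks, so a tick can land in the wrong bucket, but the remote CPUs do no accounting work at all.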
Now we must expect quite some overhead on CPU 0. Perhaps it should be made
an option, as I'm not sure every full dynticks use case wants that.