Message-ID: <554B8860.6010602@redhat.com>
Date: Thu, 07 May 2015 11:44:32 -0400
From: Rik van Riel <riel@...hat.com>
To: Frederic Weisbecker <fweisbec@...il.com>
CC: paulmck@...ux.vnet.ibm.com, Paolo Bonzini <pbonzini@...hat.com>,
Ingo Molnar <mingo@...nel.org>,
Andy Lutomirski <luto@...capital.net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
X86 ML <x86@...nel.org>, williams@...hat.com,
Andrew Lutomirski <luto@...nel.org>, fweisbec@...hat.com,
Peter Zijlstra <peterz@...radead.org>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: question about RCU dynticks_nesting
On 05/06/2015 08:59 PM, Frederic Weisbecker wrote:
> On Mon, May 04, 2015 at 04:53:16PM -0400, Rik van Riel wrote:
>> Ingo's idea is to simply have cpu 0 check the current task
>> on all other CPUs, see whether that task is running in system
>> mode, user mode, guest mode, irq mode, etc and update that
>> task's vtime accordingly.
>>
>> I suspect the runqueue lock is probably enough to do that,
>> and between rcu state and PF_VCPU we probably have enough
>> information to see what mode the task is running in, with
>> just remote memory reads.
>
> Note that we could significantly reduce the overhead of vtime accounting
> by only accumulating utime/stime in per-cpu buffers and actually accounting
> them on context switch or task_cputime() calls. That way we remove the
> overhead of the account_user/system_time() functions and the vtime locks.
>
> But doing the accounting from CPU 0, by just charging 1 tick to the context
> we remotely observe, would certainly reduce the local accounting overhead to
> the strict minimum. And I think we shouldn't even take the rq lock for that;
> we can live with some lack of precision.
We can live with lack of precision, but we cannot live with data
structures being re-used and pointers pointing off into la-la
land while we are following them :)
> Now we must expect quite some overhead on CPU 0. Perhaps it should be
> an option as I'm not sure every full dynticks usecases want that.
Let's see if I can get this to work before deciding whether we need yet
another configurable option :)
It may be possible to have most of the overhead happen from schedulable
context, maybe softirq code. Right now I am still stuck in the giant
spaghetti mess under account_process_tick, with dozens of functions that
only work on cpu-local, task-local, or (architecture-dependently) cpu- or
task-local data...
--
All rights reversed