Message-ID: <20150305160004.GE5074@lerouge>
Date: Thu, 5 Mar 2015 17:00:05 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Cc: Jason Low <jason.low2@...com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Rik van Riel <riel@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Scott Norton <scott.norton@...com>,
Aswin Chandramouleeswaran <aswin@...com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched, timer: Use atomics for thread_group_cputimer
to improve scalability
On Thu, Mar 05, 2015 at 07:56:59AM -0800, Paul E. McKenney wrote:
> On Thu, Mar 05, 2015 at 04:35:09PM +0100, Frederic Weisbecker wrote:
> > So, in the case where we call that right after setting cputimer->running, I guess we are fine
> > because we just updated cputimer with the freshest values.
> >
> > But if we are reading this a while after, say several ticks further, there is a chance that
> > we read stale values since we don't lock anymore.
> >
> > I don't know if it matters or not, I guess it depends on how stale it can be and how much
> > precision we expect from posix cpu timers. It probably doesn't matter.
> >
> > But just in case, atomic64_add_return(0, &cputimer->utime) would make sure we get the freshest
> > value because it performs a full barrier, at the cost of more overhead of course.
>
> Well, if we are running within a guest OS, we might be delayed at any point
> for quite some time. Even with interrupts disabled.
You mean delayed because of the overhead of atomic_add_return(), or because of the stale
values of the cputimer fields?
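
To make the tradeoff concrete, here is a minimal sketch (the struct and field names are
stand-ins for illustration, not the actual thread_group_cputimer layout from the patch)
contrasting the plain atomic read with the full-barrier read:

#include <linux/atomic.h>
#include <linux/types.h>

struct cputimer_sketch {		/* hypothetical stand-in */
	atomic64_t utime;
	atomic64_t stime;
	atomic64_t sum_exec_runtime;
};

/* Plain read: cheap, but may observe a stale value. */
static inline u64 sample_utime_relaxed(struct cputimer_sketch *ct)
{
	return atomic64_read(&ct->utime);
}

/*
 * Full-barrier read: adding 0 leaves the value unchanged, but
 * atomic64_add_return() implies a full memory barrier before and
 * after the operation, so we observe the latest globally visible
 * value, at the cost of an atomic RMW on the cacheline.
 */
static inline u64 sample_utime_fresh(struct cputimer_sketch *ct)
{
	return atomic64_add_return(0, &ct->utime);
}

The add_return of zero changes nothing in the counter itself; the ordering it provides is
exactly the extra overhead we're weighing here.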