Message-ID: <1425331172.5304.50.camel@j-VirtualBox>
Date: Mon, 02 Mar 2015 13:19:32 -0800
From: Jason Low <jason.low2@...com>
To: Oleg Nesterov <oleg@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Rik van Riel <riel@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Scott Norton <scott.norton@...com>,
Aswin Chandramouleeswaran <aswin@...com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched, timer: Use atomics for thread_group_cputimer to improve scalability
On Mon, 2015-03-02 at 20:40 +0100, Oleg Nesterov wrote:
> Well, I forgot everything about this code, but let me ask anyway ;)
>
> On 03/02, Jason Low wrote:
> > @@ -222,13 +239,10 @@ void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
> > * it.
> > */
> > thread_group_cputime(tsk, &sum);
> > - raw_spin_lock_irqsave(&cputimer->lock, flags);
> > - cputimer->running = 1;
> > - update_gt_cputime(&cputimer->cputime, &sum);
> > - } else
> > - raw_spin_lock_irqsave(&cputimer->lock, flags);
> > - *times = cputimer->cputime;
> > - raw_spin_unlock_irqrestore(&cputimer->lock, flags);
> > + update_gt_cputime(cputimer, &sum);
> > + ACCESS_ONCE(cputimer->running) = 1;
>
> WRITE_ONCE() looks better...
Okay, I can update that.
> but it is not clear to me why do we need it
> at all.
Peter suggested it here, since we would now be updating the running field
without holding the lock:
https://lkml.org/lkml/2015/1/23/641