Message-ID: <1429124961.7039.120.camel@j-VirtualBox>
Date: Wed, 15 Apr 2015 12:09:21 -0700
From: Jason Low <jason.low2@...com>
To: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Oleg Nesterov <oleg@...hat.com>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Mel Gorman <mgorman@...e.de>,
Steven Rostedt <rostedt@...dmis.org>, hideaki.kimura@...com,
Aswin Chandramouleeswaran <aswin@...com>,
Scott J Norton <scott.norton@...com>, jason.low2@...com
Subject: Re: [PATCH 2/3] sched, timer: Use atomics for thread_group_cputimer
to improve scalability
On Wed, 2015-04-15 at 16:07 +0530, Preeti U Murthy wrote:
> On 04/15/2015 04:39 AM, Jason Low wrote:
> > /*
> > @@ -885,11 +890,8 @@ static void check_thread_timers(struct task_struct *tsk,
> > static void stop_process_timers(struct signal_struct *sig)
> > {
> > struct thread_group_cputimer *cputimer = &sig->cputimer;
> > - unsigned long flags;
> >
> > - raw_spin_lock_irqsave(&cputimer->lock, flags);
> > - cputimer->running = 0;
> > - raw_spin_unlock_irqrestore(&cputimer->lock, flags);
> > + WRITE_ONCE(cputimer->running, 0);
>
> Why do a WRITE_ONCE() here ?
Perhaps Peter can confirm/elaborate, but since we're now updating the
running field without holding the lock, we use WRITE_ONCE() to guarantee
that the compiler emits a single, untorn store and doesn't merge or
otherwise optimize it away. It also serves as "documentation" that we're
writing to a shared variable without a lock.
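For a scalar like this, WRITE_ONCE() is essentially a volatile store.
A simplified sketch of what it amounts to (not the exact kernel macro,
which also handles the different access sizes):

	#define MY_WRITE_ONCE(x, val)				\
	do {							\
		*(volatile typeof(x) *)&(x) = (val);		\
	} while (0)

The volatile cast forces the compiler to emit exactly one store of the
full value, so the write can't be torn, merged with neighboring stores,
or elided.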
> Maybe you should explicitly mention this
> through a comment like Steven pointed out about all
> WRITE/READ/ACCESS_ONCE() usage.
Yeah, we should add a comment here.
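Something along these lines, perhaps (just a sketch of what the comment
could say, not final wording):

	static void stop_process_timers(struct signal_struct *sig)
	{
		struct thread_group_cputimer *cputimer = &sig->cputimer;

		/*
		 * The running field is now read and written without
		 * taking cputimer->lock; WRITE_ONCE() keeps the store
		 * from being torn or optimized by the compiler.
		 */
		WRITE_ONCE(cputimer->running, 0);
	}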
Thanks,
Jason