Message-ID: <1398787205.2970.90.camel@schen9-DESK>
Date: Tue, 29 Apr 2014 09:00:05 -0700
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: paulmck@...ux.vnet.ibm.com
Cc: Davidlohr Bueso <davidlohr@...com>, Ingo Molnar <mingo@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Alex Shi <alex.shi@...aro.org>,
Andi Kleen <andi@...stfloor.org>,
Michel Lespinasse <walken@...gle.com>,
Rik van Riel <riel@...hat.com>,
Peter Hurley <peter@...leysoftware.com>,
Thomas Gleixner <tglx@...utronix.de>,
Aswin Chandramouleeswaran <aswin@...com>,
"Norton, Scott J" <scott.norton@...com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] rwsem: Support optimistic spinning
On Tue, 2014-04-29 at 08:11 -0700, Paul E. McKenney wrote:
> On Mon, Apr 28, 2014 at 05:50:49PM -0700, Tim Chen wrote:
> > On Mon, 2014-04-28 at 16:10 -0700, Paul E. McKenney wrote:
> >
> > > > +#ifdef CONFIG_SMP
> > > > +static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
> > > > +{
> > > > + int retval;
> > > > + struct task_struct *owner;
> > > > +
> > > > + rcu_read_lock();
> > > > + owner = ACCESS_ONCE(sem->owner);
> > >
> > > OK, I'll bite...
> > >
> > > Why ACCESS_ONCE() instead of rcu_dereference()?
> >
> > We're using it as a speculative check on sem->owner to see
> > whether the owner is currently running on a CPU. The
> > rcu_read_lock() is only there to ensure that the
> > owner->on_cpu memory is still valid when we read it.
>
> OK, so if we read complete garbage, all that happens is that we
> lose a bit of performance?
Correct.
> If so, I am OK with it as long as there
> is a comment (which Davidlohr suggested later in this thread).
>
Yes, we should add some comments to clarify things.
Thanks.
Tim