Message-ID: <1406566438.25428.6.camel@buesod1.americas.hpqcorp.net>
Date: Mon, 28 Jul 2014 09:53:58 -0700
From: Davidlohr Bueso <davidlohr@...com>
To: Jason Low <jason.low2@...com>
Cc: peterz@...radead.org, mingo@...nel.org, aswin@...com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH -tip/master 3/7] locking/mcs: Remove obsolete comment
On Mon, 2014-07-28 at 09:49 -0700, Jason Low wrote:
> On Sun, 2014-07-27 at 22:18 -0700, Davidlohr Bueso wrote:
> > ... as we clearly inline mcs_spin_lock() now.
> >
> > Signed-off-by: Davidlohr Bueso <davidlohr@...com>
> > ---
> > kernel/locking/mcs_spinlock.h | 3 ---
> > 1 file changed, 3 deletions(-)
> >
> > diff --git a/kernel/locking/mcs_spinlock.h b/kernel/locking/mcs_spinlock.h
> > index 23e89c5..4d60986 100644
> > --- a/kernel/locking/mcs_spinlock.h
> > +++ b/kernel/locking/mcs_spinlock.h
> > @@ -56,9 +56,6 @@ do { \
> > * If the lock has already been acquired, then this will proceed to spin
> > * on this node->locked until the previous lock holder sets the node->locked
> > * in mcs_spin_unlock().
> > - *
> > - * We don't inline mcs_spin_lock() so that perf can correctly account for the
> > - * time spent in this lock function.
> > */
> > static inline
> > void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>
> Likewise, I'm wondering if we should make this function noinline so that
> "perf can correctly account for the time spent in this lock function".
Well, it's not hard to see where the contention is when working on
locking issues with perf. With mutexes there are only two sources:
either the task is just spinning trying to get the lock, or it's gone to
the slowpath, and you can see a lot of contention on the wait_lock.
So unless I'm missing something, I don't think we'd need to make this
noinline again -- although I forget why it was changed in the first
place.
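
For anyone reading along, here is a rough userspace sketch of the MCS
queueing idea being discussed -- illustration only, assuming gcc and C11
atomics, not kernel/locking/mcs_spinlock.h itself (which uses
arch_mcs_spin_lock_contended() and the kernel's barrier/cpu_relax()
primitives). The names mcs_node/mcs_lock/mcs_unlock are made up for the
sketch; the noinline attributes just show the accounting point above,
i.e. keeping the lock entry points out of line so perf samples taken
while spinning land on the lock functions themselves.

/*
 * Userspace MCS lock sketch (assumed build: gcc -O2 -std=c11 mcs_sketch.c).
 * Not the kernel implementation; for illustration of the thread only.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;
};

__attribute__((noinline))
static void mcs_lock(_Atomic(struct mcs_node *) *lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, false, memory_order_relaxed);

	/* Swap ourselves in as the new tail; no previous tail means we own the lock. */
	prev = atomic_exchange_explicit(lock, node, memory_order_acq_rel);
	if (!prev)
		return;

	/* Link behind the previous waiter and spin on our own flag. */
	atomic_store_explicit(&prev->next, node, memory_order_release);
	while (!atomic_load_explicit(&node->locked, memory_order_acquire))
		;	/* the kernel would cpu_relax() here */
}

__attribute__((noinline))
static void mcs_unlock(_Atomic(struct mcs_node *) *lock, struct mcs_node *node)
{
	struct mcs_node *next = atomic_load_explicit(&node->next, memory_order_acquire);

	if (!next) {
		struct mcs_node *expected = node;

		/* No visible successor: if we are still the tail, just clear it. */
		if (atomic_compare_exchange_strong_explicit(lock, &expected, NULL,
							    memory_order_acq_rel,
							    memory_order_acquire))
			return;
		/* A successor swapped the tail but hasn't linked in yet; wait for it. */
		while (!(next = atomic_load_explicit(&node->next, memory_order_acquire)))
			;
	}
	/* Hand the lock to the next waiter. */
	atomic_store_explicit(&next->locked, true, memory_order_release);
}

int main(void)
{
	_Atomic(struct mcs_node *) lock = NULL;
	struct mcs_node me;

	mcs_lock(&lock, &me);
	/* critical section */
	mcs_unlock(&lock, &me);
	return 0;
}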