Message-ID: <20150331072102.GA20651@sejong>
Date: Tue, 31 Mar 2015 16:21:02 +0900
From: Namhyung Kim <namhyung@...nel.org>
To: Arnaldo Carvalho de Melo <arnaldo.melo@...il.com>
Cc: Jiri Olsa <jolsa@...hat.com>, David Ahern <dsahern@...il.com>,
Jiri Olsa <jolsa@...nel.org>,
Stephane Eranian <eranian@...gle.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [BUG] segfault in perf-top -- thread refcnt
On Mon, Mar 30, 2015 at 09:46:35PM -0300, Arnaldo Carvalho de Melo wrote:
> On Tue, Mar 31, 2015 at 09:27:30AM +0900, Namhyung Kim wrote:
> > On Mon, Mar 30, 2015 at 12:13:03PM -0300, Arnaldo Carvalho de Melo wrote:
> > > On Mon, Mar 30, 2015 at 11:58:05AM -0300, Arnaldo Carvalho de Melo wrote:
> > > > On Mon, Mar 30, 2015 at 09:48:52PM +0900, Namhyung Kim wrote:
> > > > > But this makes every sample's processing grab and release the lock,
> > > > > which might cause high overhead. It can be a problem if such
> > > > > processing is done in parallel, like in my multi-thread work. :-/
> > > >
> > > > Still untested, but it uses an rw lock. The next step is auditing the
> > > > machine__findnew_thread users that really should be using
> > > > machine__find_thread, i.e. grabbing just the reader lock, and measuring
> > > > the overhead of a pthread rw lock versus the pthread_mutex_t that Jiri
> > > > is using.
> > >
> > > Don't bother trying it, doesn't even compile ;-\
> >
> > OK. :)
> >
> > But I think an rw lock still has non-trivial overhead, since it involves
> > atomic operations and cache misses.
>
> But we will have to serialize access to the data structure at some
> point...
Yes, as long as we keep ref-counting.

I'm guessing that if we only focus on the perf top case, there might be
a way to clean up dead threads without ref-counting (i.e. without
affecting the fast path in perf report).
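
For reference, below is a small self-contained sketch (build with
-pthread) of the pattern being discussed: a pthread rwlock protects the
thread table, a plain lookup takes only the reader lock, insertion takes
the writer lock, and every returned thread carries a reference that the
caller must drop. The struct and function names are simplified
stand-ins for illustration only, not the actual machine__find_thread()
/ machine__findnew_thread() code in perf.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

/* Simplified stand-ins for illustration only; not the perf structs. */
struct sthread {
	pid_t tid;
	atomic_int refcnt;
	struct sthread *next;
};

struct thread_table {
	pthread_rwlock_t lock;
	struct sthread *head;
};

static struct sthread *thread_get(struct sthread *t)
{
	if (t)
		atomic_fetch_add(&t->refcnt, 1);
	return t;
}

static void thread_put(struct sthread *t)
{
	/* Free once the last reference is dropped. */
	if (t && atomic_fetch_sub(&t->refcnt, 1) == 1)
		free(t);
}

/* Pure lookup: reader lock only, so parallel readers don't serialize. */
static struct sthread *find_thread(struct thread_table *tt, pid_t tid)
{
	struct sthread *t, *found = NULL;

	pthread_rwlock_rdlock(&tt->lock);
	for (t = tt->head; t; t = t->next) {
		if (t->tid == tid) {
			found = thread_get(t);
			break;
		}
	}
	pthread_rwlock_unlock(&tt->lock);
	return found;
}

/* Lookup or insert: takes the writer lock only when it has to add one. */
static struct sthread *findnew_thread(struct thread_table *tt, pid_t tid)
{
	struct sthread *t = find_thread(tt, tid);

	if (t)
		return t;

	pthread_rwlock_wrlock(&tt->lock);
	/* Re-check: another writer may have inserted it meanwhile. */
	for (t = tt->head; t; t = t->next) {
		if (t->tid == tid) {
			t = thread_get(t);
			goto out;
		}
	}
	t = calloc(1, sizeof(*t));
	if (t) {
		t->tid = tid;
		atomic_init(&t->refcnt, 1);	/* the table's reference */
		t->next = tt->head;
		tt->head = t;
		t = thread_get(t);		/* the caller's reference */
	}
out:
	pthread_rwlock_unlock(&tt->lock);
	return t;
}

int main(void)
{
	struct thread_table tt = { .lock = PTHREAD_RWLOCK_INITIALIZER };
	struct sthread *t = findnew_thread(&tt, 1234);

	if (t) {
		printf("tid %d refcnt %d\n", t->tid, atomic_load(&t->refcnt));
		thread_put(t);	/* drop the caller's reference */
	}
	return 0;
}

Even with the reader lock keeping concurrent lookups cheap, the atomic
refcount updates and the rwlock's own atomics are exactly the kind of
per-sample cost mentioned above.
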
Thanks,
Namhyung