Message-ID: <1336271107.1534.12.camel@leonhard>
Date: Sun, 06 May 2012 11:25:07 +0900
From: Namhyung Kim <namhyung@...il.com>
To: Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Paul Mackerras <paulus@...ba.org>,
Ingo Molnar <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <fweisbec@...il.com>
Subject: Re: [PATCH] perf top: Fix a race in callchain handling
Hi,
2012-05-05 (Sat), 20:53 -0300, Arnaldo Carvalho de Melo:
> On Sat, May 05, 2012 at 08:22:47PM +0200, Peter Zijlstra wrote:
> > On Sun, 2012-05-06 at 00:23 +0900, Namhyung Kim wrote:
> > > + static struct callchain_cursor cursor;
> >
> > This just begs to become another concurrency problem. If anybody manages
> > to call hists__collapse_insert_entry() from multiple threads concurrently,
> > you're again up a creek without a paddle.
> >
> > Adding global state is never a good option when dealing with
> > concurrency.
>
> But it seems to fix the current issue, so thanks to Namhyung for
> following up on the report, and to David Ahern for reporting that it was a
> cross-thread corruption (Namhyung, was your work based on that report?).
>
No, I didn't see David's report since I posted the patch using my company
email - I don't have access to that mail from outside the company right now.
It seems I have to subscribe to the perf-users mailing list though :).
> I'm looking at how to get that fixed with Peter's concerns addressed.
>
I guess it's gonna be a non-trivial job. As far as I can see, the hists
code can handle at most two concurrent threads regardless of the callchain
cursor problem. And I'd also guess that other areas of libperf don't
support true concurrency either, right?
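
One direction I'm thinking about is making the cursor thread-local instead
of function-static, so every thread works on its own copy. Here is a toy,
standalone sketch of the pattern - the struct and its field are made up
for illustration, not the real struct callchain_cursor, and I haven't
tried this against the perf tree:

/* tls-cursor.c: build with gcc -pthread tls-cursor.c */
#include <pthread.h>
#include <stdio.h>

struct cursor {
	int pos;	/* made-up stand-in for the real cursor state */
};

/* one instance per thread instead of one shared static */
static __thread struct cursor cursor;

static void *worker(void *arg)
{
	long id = (long)arg;
	int i;

	for (i = 0; i < 1000000; i++)
		cursor.pos++;	/* no data race: each thread has its own copy */

	printf("thread %ld: pos = %d\n", id, cursor.pos);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, (void *)1L);
	pthread_create(&t2, NULL, worker, (void *)2L);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

With __thread, both workers print 1000000 every run; with a plain static,
the updates from the two threads would stomp on each other. Passing a
cursor down from the caller would work too, at the cost of touching more
call sites.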
> First testing Namhyung's patch with -F 100000 tho :-)
>
Thanks,
Namhyung