Message-ID: <20180717090245.GB8631@krava>
Date: Tue, 17 Jul 2018 11:02:45 +0200
From: Jiri Olsa <jolsa@...hat.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Jiri Olsa <jolsa@...nel.org>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
lkml <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>,
David Ahern <dsahern@...il.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Kan Liang <kan.liang@...el.com>,
Andi Kleen <ak@...ux.intel.com>,
Lukasz Odzioba <lukasz.odzioba@...el.com>,
Wang Nan <wangnan0@...wei.com>, kernel-team@....com
Subject: Re: [PATCH 1/4] perf tools: Fix struct comm_str removal crash
On Tue, Jul 17, 2018 at 10:49:40AM +0900, Namhyung Kim wrote:
> Hi Jiri,
>
> On Mon, Jul 16, 2018 at 12:29:34PM +0200, Jiri Olsa wrote:
> > On Sun, Jul 15, 2018 at 10:08:27PM +0900, Namhyung Kim wrote:
> >
> > SNIP
> >
> > > > Because thread 2 first decrements the refcnt and only then removes
> > > > the struct comm_str from the list, thread 1 can find this object on
> > > > the list with refcnt equal to 0 and hit the assert.
> > > >
> > > > This patch fixes the thread 2 path by removing the struct comm_str
> > > > from the list FIRST and only AFTER that calling comm_str__put on it.
> > > > This way thread 1 finds only valid objects on the list.
> > >
> > > I'm not sure we can unconditionally remove the comm_str from the tree.
> > > It should be removed only if the refcount is going to zero IMHO.
> > > Otherwise it could end up having multiple comm_str entries for the
> > > same name.
> >
> > right, but it wouldn't crash ;-)
> >
> > how about the attached change, which actually deals with the refcnt
> > race? I'm running the tests now, seems ok so far
>
> I think we can keep it if the refcount is back to non-zero. What about this?
> (not tested..)
>
>
> static struct comm_str *comm_str__get(struct comm_str *cs)
> {
> 	if (cs)
> 		refcount_inc_no_warn(&cs->refcnt); // should be added
> 	return cs;
> }
>
> static void comm_str__put(struct comm_str *cs)
> {
> 	if (cs && refcount_dec_and_test(&cs->refcnt)) {
> 		down_write(&comm_str_lock);
> 		/* might race with comm_str__findnew() */
> 		if (!refcount_read(&cs->refcnt)) {
> 			rb_erase(&cs->rb_node, &comm_str_root);
> 			zfree(&cs->str);
> 			free(cs);
> 		}
> 		up_write(&comm_str_lock);
> 	}
> }

yea, it's more positive than my patch

I'm testing the attached patch, looks good so far
thanks,
jirka
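
Spelled out, the interleaving the change below has to survive is roughly
this (a sketch of one possible timeline, not a captured trace):

  comm_str__put()                      comm_str__findnew()
  ---------------                      -------------------
  refcount_dec_and_test() -> 0
                                       takes comm_str_lock
                                       finds cs in comm_str_root
                                       comm_str__get(cs): refcnt 0 -> 1
                                       drops comm_str_lock
  down_write(&comm_str_lock)
  refcount_read(&cs->refcnt) != 0
    -> leave cs in the tree
  up_write(&comm_str_lock)

The 0 -> 1 bump on the lookup side is also why comm_str__get() is switched
to refcount_inc_no_warn() -- plain refcount_inc() would report that
transition as a use-after-free.
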
---
diff --git a/tools/include/linux/refcount.h b/tools/include/linux/refcount.h
index 36cb29bc57c2..11e2be6f68a0 100644
--- a/tools/include/linux/refcount.h
+++ b/tools/include/linux/refcount.h
@@ -109,6 +109,14 @@ static inline void refcount_inc(refcount_t *r)
 	REFCOUNT_WARN(!refcount_inc_not_zero(r), "refcount_t: increment on 0; use-after-free.\n");
 }
 
+/*
+ * Pure refcount increase without any check/warn.
+ */
+static inline void refcount_inc_no_warn(refcount_t *r)
+{
+	atomic_inc(&r->refs);
+}
+
 /*
  * Similar to atomic_dec_and_test(), it will WARN on underflow and fail to
  * decrement when saturated at UINT_MAX.
diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
index 7798a2cc8a86..a2e338cf29d7 100644
--- a/tools/perf/util/comm.c
+++ b/tools/perf/util/comm.c
@@ -21,7 +21,7 @@ static struct rw_semaphore comm_str_lock = {.lock = PTHREAD_RWLOCK_INITIALIZER,}
 static struct comm_str *comm_str__get(struct comm_str *cs)
 {
 	if (cs)
-		refcount_inc(&cs->refcnt);
+		refcount_inc_no_warn(&cs->refcnt);
 	return cs;
 }
 
@@ -29,10 +29,12 @@ static void comm_str__put(struct comm_str *cs)
 {
 	if (cs && refcount_dec_and_test(&cs->refcnt)) {
 		down_write(&comm_str_lock);
-		rb_erase(&cs->rb_node, &comm_str_root);
+		if (refcount_read(&cs->refcnt) == 0) {
+			rb_erase(&cs->rb_node, &comm_str_root);
+			zfree(&cs->str);
+			free(cs);
+		}
 		up_write(&comm_str_lock);
-		zfree(&cs->str);
-		free(cs);
 	}
 }
 
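
For reference, the same "decrement, then re-check under the lock" scheme can
be boiled down to a standalone sketch. C11 atomics and a pthread mutex stand
in for refcount_t and the rw_semaphore, and the names (struct node,
node_get/node_put, tree_lock) are invented for illustration -- this is not
the perf code:

#include <pthread.h>
#include <stdatomic.h>
#include <stdlib.h>

struct node {
	atomic_uint refcnt;
	/* rb tree linkage and the string payload are omitted */
};

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

/* plain increment, no zero-check: reviving a node whose refcnt already
 * dropped to 0 is legal here, the lookup side does it under tree_lock */
static struct node *node_get(struct node *n)
{
	if (n)
		atomic_fetch_add(&n->refcnt, 1);
	return n;
}

static void node_put(struct node *n)
{
	if (!n || atomic_fetch_sub(&n->refcnt, 1) != 1)
		return;

	/*
	 * refcnt reached 0, but a concurrent lookup may have revived the
	 * node before we got the lock, so re-check before unlinking/freeing.
	 */
	pthread_mutex_lock(&tree_lock);
	if (atomic_load(&n->refcnt) == 0) {
		/* unlinking from the tree would go here */
		free(n);
	}
	pthread_mutex_unlock(&tree_lock);
}

int main(void)
{
	struct node *n = calloc(1, sizeof(*n));

	if (!n)
		return 1;
	atomic_init(&n->refcnt, 0);
	node_get(n);	/* refcnt 0 -> 1 */
	node_put(n);	/* refcnt 1 -> 0, re-checked and freed under tree_lock */
	return 0;
}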