Message-Id: <1477342613-9938-2-git-send-email-dave@stgolabs.net>
Date: Mon, 24 Oct 2016 13:56:52 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: acme@...nel.org
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org,
Davidlohr Bueso <dave@...olabs.net>,
Davidlohr Bueso <dbueso@...e.de>
Subject: [PATCH 1/2] perf/bench-futex: Avoid worker cacheline bouncing
Sebastian noted that the overhead of worker thread ops (throughput)
accounting was causing 'perf' itself to show up in the profiles,
consuming a non-trivial (i.e. 13%) amount of CPU. This is caused by
cacheline bouncing from the increments of w->ops. We can easily fix
this by working on a local copy and updating the actual worker only
once we are done running and ready to print the program summary.
There is no danger of concurrent updates to a worker's counter, so
no other thread can observe a stale value.

This also gets rid of the now unnecessary cache alignment hack; it's
not worth keeping.
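
For reference, a minimal standalone sketch of the pattern (local
accumulation with a single write-back at thread exit). This is
illustrative only, not the bench code itself; all names below are
made up, and the plain 'done' flag simply mirrors the benchmark's
own stop flag:

    /*
     * Each worker accumulates into a stack-local counter and
     * publishes it to the shared struct exactly once at exit, so
     * the hot loop never dirties a cacheline read by other CPUs.
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    struct worker {
            pthread_t thread;
            unsigned long ops;      /* written once, at thread exit */
    };

    static volatile bool done;

    static void *workerfn(void *arg)
    {
            struct worker *w = arg;
            unsigned long ops = 0;  /* local copy, no cacheline bouncing */

            while (!done)
                    ops++;          /* stand-in for the real benchmark work */

            w->ops = ops;           /* single write-back once we are done */
            return NULL;
    }

    int main(void)
    {
            struct worker w = { .ops = 0 };

            pthread_create(&w.thread, NULL, workerfn, &w);
            sleep(1);
            done = true;
            pthread_join(w.thread, NULL);
            printf("ops: %lu\n", w.ops);
            return 0;
    }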
Reported-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Acked-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
---
tools/perf/bench/futex-hash.c | 11 +++++------
tools/perf/bench/futex-lock-pi.c | 4 +++-
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/tools/perf/bench/futex-hash.c b/tools/perf/bench/futex-hash.c
index d9e5e80bb4d0..da04b8c5568a 100644
--- a/tools/perf/bench/futex-hash.c
+++ b/tools/perf/bench/futex-hash.c
@@ -39,15 +39,12 @@ static unsigned int threads_starting;
static struct stats throughput_stats;
static pthread_cond_t thread_parent, thread_worker;
-#define SMP_CACHE_BYTES 256
-#define __cacheline_aligned __attribute__ ((aligned (SMP_CACHE_BYTES)))
-
struct worker {
int tid;
u_int32_t *futex;
pthread_t thread;
unsigned long ops;
-} __cacheline_aligned;
+};
static const struct option options[] = {
OPT_UINTEGER('t', "threads", &nthreads, "Specify amount of threads"),
@@ -66,8 +63,9 @@ static const char * const bench_futex_hash_usage[] = {
static void *workerfn(void *arg)
{
int ret;
- unsigned int i;
struct worker *w = (struct worker *) arg;
+ unsigned int i;
+ unsigned long ops = w->ops; /* avoid cacheline bouncing */
pthread_mutex_lock(&thread_lock);
threads_starting--;
@@ -77,7 +75,7 @@ static void *workerfn(void *arg)
pthread_mutex_unlock(&thread_lock);
do {
- for (i = 0; i < nfutexes; i++, w->ops++) {
+ for (i = 0; i < nfutexes; i++, ops++) {
/*
* We want the futex calls to fail in order to stress
* the hashing of uaddr and not measure other steps,
@@ -91,6 +89,7 @@ static void *workerfn(void *arg)
}
} while (!done);
+ w->ops = ops;
return NULL;
}
diff --git a/tools/perf/bench/futex-lock-pi.c b/tools/perf/bench/futex-lock-pi.c
index 936d89d30483..7032e4643c65 100644
--- a/tools/perf/bench/futex-lock-pi.c
+++ b/tools/perf/bench/futex-lock-pi.c
@@ -75,6 +75,7 @@ static void toggle_done(int sig __maybe_unused,
static void *workerfn(void *arg)
{
struct worker *w = (struct worker *) arg;
+ unsigned long ops = w->ops;
pthread_mutex_lock(&thread_lock);
threads_starting--;
@@ -103,9 +104,10 @@ static void *workerfn(void *arg)
if (ret && !silent)
warn("thread %d: Could not unlock pi-lock for %p (%d)",
w->tid, w->futex, ret);
- w->ops++; /* account for thread's share of work */
+ ops++; /* account for thread's share of work */
} while (!done);
+ w->ops = ops;
return NULL;
}
--
2.6.6