Message-ID: <tip-052b0f6eaf8b1f02669884a177bc3ce463133a42@git.kernel.org>
Date:	Fri, 1 May 2015 03:15:29 -0700
From:	tip-bot for Davidlohr Bueso <tipbot@...or.com>
To:	linux-tip-commits@...r.kernel.org
Cc:	mingo@...nel.org, mgorman@...e.de, dave@...olabs.net,
	acme@...hat.com, dbueso@...e.de, hpa@...or.com,
	linux-kernel@...r.kernel.org, tglx@...utronix.de
Subject: [tip:perf/urgent] perf bench futex:
  Fix hung wakeup tasks after requeueing

Commit-ID:  052b0f6eaf8b1f02669884a177bc3ce463133a42
Gitweb:     http://git.kernel.org/tip/052b0f6eaf8b1f02669884a177bc3ce463133a42
Author:     Davidlohr Bueso <dave@...olabs.net>
AuthorDate: Fri, 24 Apr 2015 10:00:48 -0700
Committer:  Arnaldo Carvalho de Melo <acme@...hat.com>
CommitDate: Mon, 27 Apr 2015 13:57:49 -0300

perf bench futex: Fix hung wakeup tasks after requeueing

The futex-requeue benchmark can hang because of missing wakeups once the
benchmark is done, i.e.:

[Run 1]: Requeued 1024 of 1024 threads in 0.3290 ms
perf: couldn't wakeup all tasks (135/1024)
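
For reference, the hang can be reproduced by running the benchmark directly;
the thread count below is illustrative, matching the run shown above:

  $ perf bench futex requeue --threads 1024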

This bug, while perhaps suggesting missing wakeups in kernel futex code,
is merely a consequence of the crappy FUTEX_CMP_REQUEUE man page: contrary
to what it suggests, the return value counts the requeued tasks, not just
the wakeups.
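
For context, the benchmark drives requeueing through the futex_cmp_requeue()
helper in tools/perf/bench/futex.h, a thin wrapper around the raw futex()
system call.  A minimal sketch of the underlying call follows (the wrapper
name cmp_requeue() here is illustrative, not the benchmark's helper); since
the benchmark passes nr_wake == 0, nothing is woken and the return value is
the number of requeued waiters, which is what the fix below accumulates:

  #include <linux/futex.h>   /* FUTEX_CMP_REQUEUE */
  #include <sys/syscall.h>   /* SYS_futex */
  #include <unistd.h>        /* syscall() */

  /*
   * FUTEX_CMP_REQUEUE: if *uaddr still equals val, wake up to nr_wake
   * waiters blocked on uaddr and requeue up to nr_requeue of the remaining
   * waiters onto uaddr2.  The kernel returns the number of woken plus
   * requeued waiters, so with nr_wake == 0 it is just the requeued count.
   */
  static int cmp_requeue(unsigned int *uaddr, unsigned int val,
                         unsigned int *uaddr2, int nr_wake,
                         int nr_requeue, int opflags)
  {
          return syscall(SYS_futex, uaddr, FUTEX_CMP_REQUEUE | opflags,
                         nr_wake, nr_requeue, uaddr2, val);
  }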

This patch accounts for the value actually returned when counting requeued
tasks and updates the corresponding futex_wake() code accordingly.

Signed-off-by: Davidlohr Bueso <dbueso@...e.de>
Cc: Mel Gorman <mgorman@...e.de>
Link: http://lkml.kernel.org/r/1429894848.10273.44.camel@stgolabs.net
Signed-off-by: Arnaldo Carvalho de Melo <acme@...hat.com>
---
 tools/perf/bench/futex-requeue.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c
index bedff6b..ad0d9b5 100644
--- a/tools/perf/bench/futex-requeue.c
+++ b/tools/perf/bench/futex-requeue.c
@@ -132,6 +132,9 @@ int bench_futex_requeue(int argc, const char **argv,
 	if (!fshared)
 		futex_flag = FUTEX_PRIVATE_FLAG;
 
+	if (nrequeue > nthreads)
+		nrequeue = nthreads;
+
 	printf("Run summary [PID %d]: Requeuing %d threads (from [%s] %p to %p), "
 	       "%d at a time.\n\n",  getpid(), nthreads,
 	       fshared ? "shared":"private", &futex1, &futex2, nrequeue);
@@ -161,20 +164,18 @@ int bench_futex_requeue(int argc, const char **argv,
 
 		/* Ok, all threads are patiently blocked, start requeueing */
 		gettimeofday(&start, NULL);
-		for (nrequeued = 0; nrequeued < nthreads; nrequeued += nrequeue) {
+		while (nrequeued < nthreads) {
 			/*
 			 * Do not wakeup any tasks blocked on futex1, allowing
 			 * us to really measure futex_wait functionality.
 			 */
-			futex_cmp_requeue(&futex1, 0, &futex2, 0,
-					  nrequeue, futex_flag);
+			nrequeued += futex_cmp_requeue(&futex1, 0, &futex2, 0,
+						       nrequeue, futex_flag);
 		}
+
 		gettimeofday(&end, NULL);
 		timersub(&end, &start, &runtime);
 
-		if (nrequeued > nthreads)
-			nrequeued = nthreads;
-
 		update_stats(&requeued_stats, nrequeued);
 		update_stats(&requeuetime_stats, runtime.tv_usec);
 
@@ -184,7 +185,7 @@ int bench_futex_requeue(int argc, const char **argv,
 		}
 
 		/* everybody should be blocked on futex2, wake'em up */
-		nrequeued = futex_wake(&futex2, nthreads, futex_flag);
+		nrequeued = futex_wake(&futex2, nrequeued, futex_flag);
 		if (nthreads != nrequeued)
 			warnx("couldn't wakeup all tasks (%d/%d)", nrequeued, nthreads);
 
