Date:	Wed, 6 Jan 2016 10:49:34 -0800
From:	tip-bot for Jiri Olsa <tipbot@...or.com>
To:	linux-tip-commits@...r.kernel.org
Cc:	linux-kernel@...r.kernel.org, efault@....de, dzickus@...hat.com,
	peterz@...radead.org, torvalds@...ux-foundation.org,
	acme@...nel.org, tglx@...utronix.de, jolsa@...nel.org,
	jmario@...hat.com, hpa@...or.com, mingo@...nel.org
Subject: [tip:sched/core] sched/core: Move sched_entity::avg into separate cache line

Commit-ID:  5a1078043f844074cbd53981432778a8d5dd56e9
Gitweb:     http://git.kernel.org/tip/5a1078043f844074cbd53981432778a8d5dd56e9
Author:     Jiri Olsa <jolsa@...nel.org>
AuthorDate: Tue, 8 Dec 2015 21:23:59 +0100
Committer:  Ingo Molnar <mingo@...nel.org>
CommitDate: Wed, 6 Jan 2016 11:06:14 +0100

sched/core: Move sched_entity::avg into separate cache line

The sched_entity::avg field collides (shares a cache line) with
read-mostly sched_entity data.

The perf c2c tool showed many read HITM accesses across
many CPUs for sched_entity's cfs_rq and my_q fields, while
at the same time there were heavy stores to avg.
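
For context, this kind of contention can be inspected with the c2c
tooling that later landed in mainline perf; a rough reproduction
sketch (the exact flags here are an assumption, not taken from the
original report):

  $ perf c2c record -a -- perf bench sched pipe -l 10000000
  $ perf c2c report --stats

The report lists cache lines with HITM (hit-modified) accesses, which
is how the cfs_rq/my_q vs. avg collision above shows up.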

After placing sched_entity::avg into a separate cache line,
perf bench sched pipe showed a speedup of around 20 seconds.

NOTE: I cut all perf events except cycles and instructions
out of the following output.

Before:
  $ perf stat -r 5 perf bench sched pipe -l 10000000
  # Running 'sched/pipe' benchmark:
  # Executed 10000000 pipe operations between two processes

       Total time: 270.348 [sec]

        27.034805 usecs/op
            36989 ops/sec
   ...

     245,537,074,035      cycles                    #    1.433 GHz
     187,264,548,519      instructions              #    0.77  insns per cycle

       272.653840535 seconds time elapsed           ( +-  1.31% )

After:
  $ perf stat -r 5 perf bench sched pipe -l 10000000
  # Running 'sched/pipe' benchmark:
  # Executed 10000000 pipe operations between two processes

       Total time: 251.076 [sec]

        25.107678 usecs/op
            39828 ops/sec
  ...

     244,573,513,928      cycles                    #    1.572 GHz
     187,409,641,157      instructions              #    0.76  insns per cycle

       251.679315188 seconds time elapsed           ( +-  0.31% )
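
Spelled out, the averaged elapsed times above give
272.654 - 251.679 = 20.975 seconds saved per run, i.e. roughly a
7.7% reduction, which matches the "around 20 seconds" figure
quoted earlier.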

Signed-off-by: Jiri Olsa <jolsa@...nel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>
Cc: Don Zickus <dzickus@...hat.com>
Cc: Joe Mario <jmario@...hat.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Mike Galbraith <efault@....de>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>
Link: http://lkml.kernel.org/r/1449606239-28602-1-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@...nel.org>
---
 include/linux/sched.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 791b47e..0c0e781 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1268,8 +1268,13 @@ struct sched_entity {
 #endif
 
 #ifdef CONFIG_SMP
-	/* Per entity load average tracking */
-	struct sched_avg	avg;
+	/*
+	 * Per entity load average tracking.
+	 *
+	 * Put into separate cache line so it does not
+	 * collide with read-mostly values above.
+	 */
+	struct sched_avg	avg ____cacheline_aligned_in_smp;
 #endif
 };
 
--
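As an aside, the same alignment trick can be demonstrated in
userspace. Below is a minimal, self-contained sketch (not part of the
patch; the 64-byte line size, field names and iteration count are
illustrative assumptions): one thread keeps storing to 'avg' while
another only reads 'read_mostly'. Building with -DALIGN_AVG=0 puts
both fields into one cache line, so the reader's loads keep hitting
lines dirtied by the writer's CPU, which is the HITM pattern the
changelog describes.

  /* false_sharing.c - build: gcc -O2 -pthread false_sharing.c */
  #include <pthread.h>
  #include <stdio.h>

  #define CACHELINE 64            /* assumed line size, x86-typical */

  #ifndef ALIGN_AVG
  #define ALIGN_AVG 1             /* 1: avg gets its own cache line */
  #endif

  struct shared {
          long read_mostly;       /* stands in for cfs_rq/my_q      */
  #if ALIGN_AVG
          /* userspace analogue of ____cacheline_aligned_in_smp */
          long avg __attribute__((aligned(CACHELINE)));
  #else
          long avg;               /* shares the read_mostly line    */
  #endif
  };

  static struct shared s;

  static void *writer(void *arg)
  {
          /* constant stream of stores, like the load-average updates */
          for (long i = 0; i < 100000000L; i++)
                  __atomic_store_n(&s.avg, i, __ATOMIC_RELAXED);
          return NULL;
  }

  static void *reader(void *arg)
  {
          long sum = 0;

          /* read-mostly traffic, like the cfs_rq/my_q accesses */
          for (long i = 0; i < 100000000L; i++)
                  sum += __atomic_load_n(&s.read_mostly, __ATOMIC_RELAXED);
          return (void *)sum;
  }

  int main(void)
  {
          pthread_t w, r;

          pthread_create(&w, NULL, writer, NULL);
          pthread_create(&r, NULL, reader, NULL);
          pthread_join(w, NULL);
          pthread_join(r, NULL);
          printf("avg = %ld\n", s.avg);
          return 0;
  }

Timing the two builds (e.g. with 'time ./a.out') should show the
unaligned variant running measurably slower on a multi-core machine,
for the same reason the pipe benchmark above speeds up.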