Date:	Thu,  9 Apr 2015 12:37:40 -0400
From:	Kan Liang <kan.liang@...el.com>
To:	a.p.zijlstra@...llo.nl, linux-kernel@...r.kernel.org
Cc:	mingo@...nel.org, acme@...radead.org, eranian@...gle.com,
	andi@...stfloor.org, Kan Liang <kan.liang@...el.com>
Subject: [PATCH V6 0/6] large PEBS interrupt threshold

This patch series implements a large PEBS interrupt threshold.
Currently, the PEBS interrupt threshold is forced to one. A larger
threshold can significantly reduce the sampling overhead, especially
for frequently occurring events (such as cycles, branches, or
loads/stores) sampled with a small period.
For example, recording the cycles event with a sampling period of
10003 while running kernbench: the elapsed time dropped from 32.7
seconds to 16.5 seconds, roughly 2x faster.
For more details, please refer to patch 3's description.
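
For reference, here is a minimal user-space sketch (not part of this
series; the helper name is made up) of opening a cycles event with a
fixed sampling period via perf_event_open(2). A fixed period, rather
than a frequency, is one of the conditions for the large threshold
(see the limitations below):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

static int open_cycles_fixed_period(pid_t pid)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 10003;	/* fixed period; attr.freq stays 0 */
	attr.sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_PERIOD;
	attr.precise_ip = 1;		/* request PEBS-based sampling */

	return syscall(SYS_perf_event_open, &attr, pid,
		       -1 /* any cpu */, -1 /* no group */, 0);
}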

Limitations:
 - It cannot supply a callgraph.
 - It requires a fixed sampling period.
 - It cannot supply a time stamp.
 - To supply a TID it requires flushing the buffer on context switch.
If an event's configuration cannot satisfy these constraints, the
threshold falls back to one, as sketched below.
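
Roughly (illustrative only; the function name is hypothetical and the
real checks live in patch 3):

#include <linux/perf_event.h>

static bool can_use_large_pebs_threshold(struct perf_event *event)
{
	u64 sample_type = event->attr.sample_type;

	if (sample_type & PERF_SAMPLE_CALLCHAIN)	/* no callgraph */
		return false;
	if (sample_type & PERF_SAMPLE_TIME)		/* no time stamp */
		return false;
	if (event->attr.freq)				/* needs a fixed period */
		return false;
	/*
	 * PERF_SAMPLE_TID remains possible, at the cost of draining
	 * the PEBS buffer on every context switch (patch 5).
	 */
	return true;
}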

Collisions:
When PEBS events trigger close together in time, their records can be
collapsed into a single record, and the original per-event records
cannot be reconstructed. When a collision happens, we drop the PEBS
record.
In practice, collisions are extremely rare as long as different events
are used. We tested the worst case with four frequently occurring
events (cycles:p, instructions:p, branches:p, mem-stores:p); the
collision rate was only 0.34%.
The only way to get a lot of collisions is to count the same thing
multiple times, which is not a useful configuration.
For details about collisions, please refer to patch 4's description;
a simplified sketch of the drop rule follows.
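
An illustrative sketch (hypothetical names; the real logic is in
patch 4's handling of the PEBS status bits):

#include <linux/bitops.h>
#include <linux/types.h>

/*
 * Each PEBS record carries a status bitmask naming the counters it
 * could belong to.  If more than one active PEBS counter's bit is
 * set, the record is ambiguous (a collision) and is dropped.
 */
static bool pebs_record_collides(u64 record_status, u64 pebs_enabled)
{
	return hweight64(record_status & pebs_enabled) > 1;
}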

Changes since v1:
  - drop the patch 'perf, core: Add all PMUs to pmu_idr'
  - add comments for the case where multiple counters overflow simultaneously

Changes since v2:
  - rename perf_sched_cb_{enable,disable} to perf_sched_cb_user_{inc,dec}
  - use a flag to indicate the auto-reload mechanism
  - move the code that sets up PEBS sample data into a separate function
  - output the PEBS records in batches
  - enable this for all (PEBS-capable) hardware
  - expand the description of multiplexing

Changes since v3:
  - ignore conflicting PEBS records

Changes since v4:
  - run more collision tests and update the comments

Changes since v5:
  - move the auto-reload and large-PEBS availability checks to intel_pmu_hw_config
  - make AUTO_RELOAD conditional on large PEBS
  - fix a !PEBS bug
  - give a coherent account of what a collision is and how we handle it
  - remove the extra pebs_sched_cb_enabled state

Yan, Zheng (6):
  perf, x86: use the PEBS auto reload mechanism when possible
  perf, x86: introduce setup_pebs_sample_data()
  perf, x86: large PEBS interrupt threshold
  perf, x86: handle multiple records in PEBS buffer
  perf, x86: drain PEBS buffer during context switch
  perf, x86: enlarge PEBS buffer

 arch/x86/kernel/cpu/perf_event.c           |  15 +-
 arch/x86/kernel/cpu/perf_event.h           |  15 +-
 arch/x86/kernel/cpu/perf_event_intel.c     |  21 +-
 arch/x86/kernel/cpu/perf_event_intel_ds.c  | 316 ++++++++++++++++++++++-------
 arch/x86/kernel/cpu/perf_event_intel_lbr.c |   3 -
 include/linux/perf_event.h                 |   4 +
 kernel/events/core.c                       |   6 +-
 7 files changed, 293 insertions(+), 87 deletions(-)

-- 
1.7.11.7

