Message-Id: <1391432142-18723-1-git-send-email-eranian@google.com>
Date:	Mon,  3 Feb 2014 13:55:32 +0100
From:	Stephane Eranian <eranian@...gle.com>
To:	linux-kernel@...r.kernel.org
Cc:	peterz@...radead.org, mingo@...e.hu, acme@...hat.com,
	ak@...ux.intel.com, zheng.z.yan@...el.com
Subject: [PATCH v1 00/10] perf/x86/uncore: add support for SNB/IVB/HSW integrated memory controller PMU

This patch series adds support for the PCI-based integrated memory controller (IMC)
PMU on SandyBridge, IvyBridge, and Haswell client (desktop/mobile) processors. This
PMU provides a few free-running 32-bit counters which can be used to measure memory
bandwidth utilization.

The code is based on the documentation at:
http://software.intel.com/en-us/articles/monitoring-integrated-memory-controller-requests-in-the-2nd-3rd-and-4th-generation-intel

The patches implement a new uncore PMU called uncore_imc.
It exports its format and events in sysfs as usual.

The following events are currently defined:
  - name: uncore_imc/data_reads/
  - code: 0x1
  - unit: 64 bytes
  - number of full cacheline (64 bytes) read requests to the IMC

  - name: uncore_imc/data_writes/
  - code: 0x2
  - unit: 64 bytes
  - number of full cacheline (64 bytes) write requests to the IMC

The unit and scale of each event are also exposed in sysfs and
are picked up automatically by perf stat (requires v3.13 or later).

The uncore_imc PMU is, by construction, system-wide and counting-mode
only; there is no privilege-level filtering. The kernel enforces those
restrictions. The counters are 32 bits wide and do not generate overflow
interrupts, so the kernel uses a hrtimer to poll them often enough to
avoid missing an overflow.

The series includes an optional patch that changes the unit reported by
perf stat to mebibytes (MiB). The kernel still exports the raw counter
value, i.e. counts in 64-byte increments.

To use the PMU with perf stat:
 # perf stat -a -e uncore_imc/data_reads/,uncore_imc/data_writes/ -I 1000 sleep 100
   #           time             counts unit events
        1.000169151             180.62 MiB  uncore_imc/data_reads/
        1.000169151               0.14 MiB  uncore_imc/data_writes/  
        2.000506913             180.37 MiB  uncore_imc/data_reads/   
        2.000506913               0.02 MiB  uncore_imc/data_writes/  
        3.000748105             180.32 MiB  uncore_imc/data_reads/   
        3.000748105               0.02 MiB  uncore_imc/data_writes/  
        4.000991441             180.30 MiB  uncore_imc/data_reads/   

Signed-off-by: Stephane Eranian <eranian@...gle.com>

Stephane Eranian (10):
  perf/x86/uncore: fix initialization of cpumask
  perf/x86/uncore: add ability to customize pmu callbacks
  perf/x86/uncore: do not assume PCI fixed ctrs have more than 32 bits
  perf/x86/uncore: add PCI ids for SNB/IVB/HSW IMC
  perf/x86/uncore: make hrtimer timeout configurable per box
  perf/x86/uncore: move uncore_event_to_box() and uncore_pmu_to_box()
  perf/x86/uncore: allow more than one fixed counter per box
  perf/x86/uncore: add SNB/IVB/HSW client uncore memory controller support
  perf/x86/uncore: add hrtimer to SNB uncore IMC PMU
  perf/x86/uncore: use MiB unit for events for SNB/IVB/HSW IMC

 arch/x86/kernel/cpu/perf_event_intel_uncore.c |  550 +++++++++++++++++++++----
 arch/x86/kernel/cpu/perf_event_intel_uncore.h |   48 ++-
 include/linux/pci_ids.h                       |    3 +
 3 files changed, 513 insertions(+), 88 deletions(-)

-- 
1.7.9.5
