Message-Id: <1483739814-23000-2-git-send-email-vikas.shivappa@linux.intel.com>
Date: Fri, 6 Jan 2017 13:56:43 -0800
From: Vikas Shivappa <vikas.shivappa@...ux.intel.com>
To: vikas.shivappa@...el.com, vikas.shivappa@...ux.intel.com
Cc: linux-kernel@...r.kernel.org, x86@...nel.org, hpa@...or.com,
tglx@...utronix.de, mingo@...nel.org, peterz@...radead.org,
ravi.v.shankar@...el.com, tony.luck@...el.com,
fenghua.yu@...el.com, andi.kleen@...el.com, h.peter.anvin@...el.com
Subject: [PATCH 01/12] Documentation, x86/cqm: Intel Resource Monitoring Documentation
Add documentation on how to use the cqm and mbm events via the perf
interface, with examples.
Signed-off-by: Vikas Shivappa <vikas.shivappa@...ux.intel.com>
---
Documentation/x86/intel_rdt_mon_ui.txt | 88 ++++++++++++++++++++++++++++++++++
1 file changed, 88 insertions(+)
create mode 100644 Documentation/x86/intel_rdt_mon_ui.txt
diff --git a/Documentation/x86/intel_rdt_mon_ui.txt b/Documentation/x86/intel_rdt_mon_ui.txt
new file mode 100644
index 0000000..881fa58
--- /dev/null
+++ b/Documentation/x86/intel_rdt_mon_ui.txt
@@ -0,0 +1,88 @@
+User Interface for Resource Monitoring in Intel Resource Director Technology
+
+Vikas Shivappa <vikas.shivappa@...el.com>
+David Carrillo-Cisneros <davidcc@...gle.com>
+Stephane Eranian <eranian@...gle.com>
+
+This feature is enabled by the CONFIG_INTEL_RDT_M Kconfig option and is
+indicated by the x86 /proc/cpuinfo flags cqm_llc, cqm_occup_llc,
+cqm_mbm_total and cqm_mbm_local.
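+
+A quick way to verify the flags on a given system (a minimal sketch
+assuming the usual /proc/cpuinfo layout; the output shown is
+illustrative, for a machine supporting all four events):
+
+$ grep -o 'cqm[a-z_]*' /proc/cpuinfo | sort -u
+cqm
+cqm_llc
+cqm_mbm_local
+cqm_mbm_total
+cqm_occup_llc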
+
+Resource Monitoring
+-------------------
+Resource Monitoring includes CQM (Cache QoS Monitoring) and MBM (Memory
+Bandwidth Monitoring) and uses the perf interface. A lightweight
+interface to enable monitoring without perf is provided as well.
+
+CQM provides the OS/VMM a way to monitor LLC (last-level cache)
+occupancy. It measures the amount of L3 cache fills per task or cgroup.
+
+MBM provides the OS/VMM a way to monitor bandwidth from one level of
+cache to another. The current patches support L3 external bandwidth
+monitoring, with both 'local bandwidth' and 'total bandwidth' monitoring
+for the socket. Local bandwidth measures the amount of data sent through
+the memory controller on the local socket, and total bandwidth measures
+the total system bandwidth.
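+
+For example, monitoring both events on one task makes the distinction
+visible (a sketch; PID1 stands in for a real process id):
+
+$ perf stat -I 1000 -e intel_cqm/local_bytes/,intel_cqm/total_bytes/ -p PID1
+
+On a single-socket machine, or when the task only touches local memory,
+the two counts should be close; traffic to remote memory shows up only
+in total_bytes.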
+
+To check the monitoring events enabled:
+
+$ ./tools/perf/perf list | grep -i cqm
+intel_cqm/llc_occupancy/ [Kernel PMU event]
+intel_cqm/local_bytes/ [Kernel PMU event]
+intel_cqm/total_bytes/ [Kernel PMU event]
+
+Monitoring tasks and cgroups using perf
+---------------------------------------
+Monitoring tasks and cgroups works like using any other perf event.
+
+$ perf stat -I 1000 -e intel_cqm/local_bytes/ -p PID1
+
+This will monitor the local_bytes event of task PID1 and report once
+every 1000ms.
+
+$ mkdir /sys/fs/cgroup/perf_event/p1
+$ echo PID1 > /sys/fs/cgroup/perf_event/p1/tasks
+$ echo PID2 > /sys/fs/cgroup/perf_event/p1/tasks
+
+$ perf stat -I 1000 -e intel_cqm/llc_occupancy/ -a -G p1
+
+This will monitor the llc_occupancy event of the perf cgroup p1 in
+interval mode.
+
+Hierarchical monitoring works just like it does for other events: users
+can monitor a task within a cgroup together with the cgroup itself, or
+monitor different cgroups in the same hierarchy together.
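+
+For illustration (a sketch assuming a child cgroup p1/c1 also exists;
+perf pairs the -e and -G lists positionally):
+
+$ perf stat -I 1000 -e intel_cqm/llc_occupancy/,intel_cqm/llc_occupancy/ -a -G p1,p1/c1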
+
+The events are associated with RMIDs (Resource Monitoring IDs) and are
+grouped to share an RMID when that is optimal. RMIDs are a limited
+hardware resource; if they run out, reading the events returns an error.
+
+To obtain per-package data for a cgroup (say package x), provide any
+CPU in that package as input to -C:
+
+$ perf stat -I 1000 -e intel_cqm/llc_occupancy/ -C <cpu_y on package_x> -G p1
--
1.9.1