Message-Id: <1517243373-355481-4-git-send-email-kan.liang@linux.intel.com>
Date:   Mon, 29 Jan 2018 08:29:31 -0800
From:   kan.liang@...ux.intel.com
To:     peterz@...radead.org, mingo@...hat.com,
        linux-kernel@...r.kernel.org
Cc:     acme@...nel.org, tglx@...utronix.de, jolsa@...hat.com,
        eranian@...gle.com, ak@...ux.intel.com,
        Kan Liang <kan.liang@...ux.intel.com>
Subject: [PATCH V3 3/5] perf/x86/intel/ds: introduce read function for large pebs

From: Kan Liang <kan.liang@...ux.intel.com>

When the PEBS interrupt threshold is larger than one, there is no way to
get the exact auto-reload times and value, which are needed for the event
update, unless the PEBS buffer is flushed.

Introduce intel_pmu_large_pebs_read() to drain the PEBS buffer in event
read when large PEBS is enabled.
To prevent a race, drain_pebs() is only called when the PMU is disabled.

Unconditionally call x86_perf_event_update() for large PEBS.
- pmu::read() can easily be called twice in a short period, in which case
  there may be no samples in the PEBS buffer. x86_perf_event_update() is
  still needed to update the count.
- There is no harm in calling x86_perf_event_update() in the other cases.
- It is safe; there is no need to worry about auto-reload, because the
  PMU is disabled.
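
A minimal sketch of how a pmu::read() callback could use the new helper
(not part of this patch; the intel_pmu_read_event() name here is only
illustrative), falling back to a plain counter read when large PEBS is
not in use:

	static void intel_pmu_read_event(struct perf_event *event)
	{
		/* Large PEBS: drain the buffer and update the count. */
		if (intel_pmu_large_pebs_read(event))
			return;

		/* Otherwise a plain counter read is sufficient. */
		x86_perf_event_update(event);
	}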

Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
---
 arch/x86/events/intel/ds.c   | 16 ++++++++++++++++
 arch/x86/events/perf_event.h |  2 ++
 2 files changed, 18 insertions(+)

diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 6533426..1c11fa2 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1303,6 +1303,22 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
 	return NULL;
 }
 
+int intel_pmu_large_pebs_read(struct perf_event *event)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	/* Check if the event has large pebs */
+	if (!pebs_needs_sched_cb(cpuc))
+		return 0;
+
+	perf_pmu_disable(event->pmu);
+	intel_pmu_drain_pebs_buffer();
+	x86_perf_event_update(event);
+	perf_pmu_enable(event->pmu);
+
+	return 1;
+}
+
 /*
  * Specific intel_pmu_save_and_restart() for auto-reload.
  * It only be called from drain_pebs().
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 805400b..7d3cd32 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -923,6 +923,8 @@ void intel_pmu_pebs_disable_all(void);
 
 void intel_pmu_pebs_sched_task(struct perf_event_context *ctx, bool sched_in);
 
+int intel_pmu_large_pebs_read(struct perf_event *event);
+
 void intel_ds_init(void);
 
 void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in);
-- 
2.7.4
