Message-Id: <20230213190754.1836051-4-kan.liang@linux.intel.com>
Date:   Mon, 13 Feb 2023 11:07:48 -0800
From:   kan.liang@...ux.intel.com
To:     tglx@...utronix.de, jstultz@...gle.com, peterz@...radead.org,
        mingo@...hat.com, linux-kernel@...r.kernel.org
Cc:     sboyd@...nel.org, eranian@...gle.com, namhyung@...nel.org,
        ak@...ux.intel.com, adrian.hunter@...el.com,
        Kan Liang <kan.liang@...ux.intel.com>,
        Ravi Bangoria <ravi.bangoria@....com>
Subject: [RFC PATCH V2 3/9] perf/x86: Factor out x86_pmu_sample_preload()

From: Kan Liang <kan.liang@...ux.intel.com>

On X86 platforms, some common sample data, for example the branch stack
information, is preloaded before the sample is output.
Factor out a generic helper, x86_pmu_sample_preload(), for this.

It will also be used later to preload the common HW time, the TSC.
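
For illustration only, a minimal sketch (not part of this patch) of how
the consolidated helper could later grow to also preload a HW timestamp;
x86_pmu_sample_save_tsc() is a hypothetical name used just for this
example:

static inline void x86_pmu_sample_preload(struct perf_sample_data *data,
					  struct perf_event *event,
					  struct cpu_hw_events *cpuc)
{
	/* Today (this patch): preload the branch stack, if any. */
	if (has_branch_stack(event))
		perf_sample_save_brstack(data, event, &cpuc->lbr_stack);

	/* Assumed future extension: also record the common HW time (TSC). */
	if (event->attr.sample_type & PERF_SAMPLE_TIME)
		x86_pmu_sample_save_tsc(data, event);	/* hypothetical helper */
}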

Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
Cc: Ravi Bangoria <ravi.bangoria@....com>
---
 arch/x86/events/amd/core.c   | 3 +--
 arch/x86/events/core.c       | 5 +----
 arch/x86/events/intel/core.c | 3 +--
 arch/x86/events/intel/ds.c   | 3 +--
 arch/x86/events/perf_event.h | 8 ++++++++
 5 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 8c45b198b62f..af7b3977efa8 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -928,8 +928,7 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
 		if (!x86_perf_event_set_period(event))
 			continue;
 
-		if (has_branch_stack(event))
-			perf_sample_save_brstack(&data, event, &cpuc->lbr_stack);
+		x86_pmu_sample_preload(&data, event, cpuc);
 
 		if (perf_event_overflow(event, &data, regs))
 			x86_pmu_stop(event, 0);
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 85a63a41c471..b19ac54ebeea 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1703,10 +1703,7 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
 
 		perf_sample_data_init(&data, 0, event->hw.last_period);
 
-		if (has_branch_stack(event)) {
-			data.br_stack = &cpuc->lbr_stack;
-			data.sample_flags |= PERF_SAMPLE_BRANCH_STACK;
-		}
+		x86_pmu_sample_preload(&data, event, cpuc);
 
 		if (perf_event_overflow(event, &data, regs))
 			x86_pmu_stop(event, 0);
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 14f0a746257d..d9be5701e60a 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -3036,8 +3036,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 
 		perf_sample_data_init(&data, 0, event->hw.last_period);
 
-		if (has_branch_stack(event))
-			perf_sample_save_brstack(&data, event, &cpuc->lbr_stack);
+		x86_pmu_sample_preload(&data, event, cpuc);
 
 		if (perf_event_overflow(event, &data, regs))
 			x86_pmu_stop(event, 0);
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 7980e92dec64..2f59573ed463 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1742,8 +1742,7 @@ static void setup_pebs_fixed_sample_data(struct perf_event *event,
 	if (x86_pmu.intel_cap.pebs_format >= 3)
 		setup_pebs_time(event, data, pebs->tsc);
 
-	if (has_branch_stack(event))
-		perf_sample_save_brstack(data, event, &cpuc->lbr_stack);
+	x86_pmu_sample_preload(data, event, cpuc);
 }
 
 static void adaptive_pebs_save_regs(struct pt_regs *regs,
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index d6de4487348c..ae6ec58fde14 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1185,6 +1185,14 @@ int x86_pmu_handle_irq(struct pt_regs *regs);
 void x86_pmu_show_pmu_cap(int num_counters, int num_counters_fixed,
 			  u64 intel_ctrl);
 
+static inline void x86_pmu_sample_preload(struct perf_sample_data *data,
+					  struct perf_event *event,
+					  struct cpu_hw_events *cpuc)
+{
+	if (has_branch_stack(event))
+		perf_sample_save_brstack(data, event, &cpuc->lbr_stack);
+}
+
 extern struct event_constraint emptyconstraint;
 
 extern struct event_constraint unconstrained;
-- 
2.35.1
