Date:   Tue,  4 Jun 2019 18:54:45 +0800
From:   Mao Han <han_mao@...ky.com>
To:     linux-kernel@...r.kernel.org
Cc:     Mao Han <han_mao@...ky.com>, linux-csky@...r.kernel.org,
        Guo Ren <guoren@...nel.org>
Subject: [PATCH V5 2/6] csky: Add count-width property for csky pmu

The csky PMU counters may have different I/O widths. When a counter is
narrower than 64 bits and its current value is smaller than the previous
one (i.e. the counter has wrapped around), the delta calculation produces
an extremely large value. To avoid this, sign-extend the sampled value to
64 bits; the extension width is taken from the count-width property in
the device tree.
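
As an illustration only (hypothetical counter values, assuming the default
48-bit width, so count_width - 1 = 47 is the sign-bit index; sign_extend64()
is from <linux/bitops.h>), a wraparound would otherwise blow up the delta:

	uint64_t prev_raw = 0x0000ffffffffffff;	/* counter just before wrap */
	uint64_t new_raw  = 0x0000000000000004;	/* counter after wrap */

	/* Without extension: new_raw - prev_raw = 0xffff000000000005 (bogus) */
	int64_t delta = sign_extend64(new_raw, 47) - sign_extend64(prev_raw, 47);
	/* With sign extension: 4 - (-1) = 5, the expected delta */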

Signed-off-by: Mao Han <han_mao@...ky.com>
Cc: Guo Ren <guoren@...nel.org>
---
 arch/csky/kernel/perf_event.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/csky/kernel/perf_event.c b/arch/csky/kernel/perf_event.c
index 2282554..a15b397 100644
--- a/arch/csky/kernel/perf_event.c
+++ b/arch/csky/kernel/perf_event.c
@@ -9,6 +9,7 @@
 #include <linux/platform_device.h>
 
 #define CSKY_PMU_MAX_EVENTS 32
+#define DEFAULT_COUNT_WIDTH 48
 
 #define HPCR		"<0, 0x0>"	/* PMU Control reg */
 #define HPCNTENR	"<0, 0x4>"	/* Count Enable reg */
@@ -18,6 +19,7 @@ static void (*hw_raw_write_mapping[CSKY_PMU_MAX_EVENTS])(uint64_t val);
 
 struct csky_pmu_t {
 	struct pmu	pmu;
+	uint32_t	count_width;
 	uint32_t	hpcr;
 } csky_pmu;
 
@@ -804,7 +806,12 @@ static void csky_perf_event_update(struct perf_event *event,
 				   struct hw_perf_event *hwc)
 {
 	uint64_t prev_raw_count = local64_read(&hwc->prev_count);
-	uint64_t new_raw_count = hw_raw_read_mapping[hwc->idx]();
+	/*
+	 * Sign extend count value to 64bit, otherwise delta calculation
+	 * would be incorrect when overflow occurs.
+	 */
+	uint64_t new_raw_count = sign_extend64(
+		hw_raw_read_mapping[hwc->idx](), csky_pmu.count_width - 1);
 	int64_t delta = new_raw_count - prev_raw_count;
 
 	/*
@@ -1032,6 +1039,7 @@ int init_hw_perf_events(void)
 int csky_pmu_device_probe(struct platform_device *pdev,
 			  const struct of_device_id *of_table)
 {
+	struct device_node *node = pdev->dev.of_node;
 	int ret;
 
 	ret = init_hw_perf_events();
@@ -1040,6 +1048,11 @@ int csky_pmu_device_probe(struct platform_device *pdev,
 		return ret;
 	}
 
+	if (of_property_read_u32(node, "count-width",
+				 &csky_pmu.count_width)) {
+		csky_pmu.count_width = DEFAULT_COUNT_WIDTH;
+	}
+
 	return ret;
 }
 
-- 
2.7.4
