Date:	Thu, 17 May 2007 19:18:09 -0700 (PDT)
From:	Davide Libenzi <davidel@...ilserver.org>
To:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
cc:	Benjamin LaHaise <bcrl@...hat.com>, Andi Kleen <ak@...e.de>
Subject: [patch] fix oprofile SMP ...


This fixes oprofile being broken on x86-64 SMP. The current reservation
code runs on all CPUs, but on the CPUs after the first one it fails,
since the reservation bitmap already has a 1 in the MSR entry.
The reservation code should really run only once, and its addresses
should then be used on the other CPUs. There are other solutions to
this, but this is the simplest and shortest one that came to my mind.
Tested on my dual Opteron 252, and working fine here.
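
For illustration, here is a minimal, self-contained sketch of the idea
(hypothetical, simplified types and names, not the actual nmi_int.c code;
the per-CPU counters/controls arrays are assumed to be allocated
beforehand, as allocate_msrs() does in the real code): fill in the MSR
addresses once for CPU#0, then have every other CPU copy those addresses
into its own table instead of re-reserving them.

	#include <string.h>

	/* Simplified stand-ins for the oprofile structures (sketch only). */
	struct op_msr  { unsigned long addr; unsigned long saved; };
	struct op_msrs { struct op_msr *counters; struct op_msr *controls; };

	#define SKETCH_NR_CPUS	8	/* assumed CPU count, sketch only */

	static struct op_msrs cpu_msrs[SKETCH_NR_CPUS];	/* per-CPU MSR tables */

	/*
	 * Reservation/fill-in happens only for CPU#0; every other CPU just
	 * copies the CPU#0 addresses into its own table.  The counters and
	 * controls arrays are assumed to be already allocated per CPU.
	 */
	static void propagate_msr_addresses(int cpu, int num_counters,
					    int num_controls)
	{
		const struct op_msrs *ref = &cpu_msrs[0];
		struct op_msrs *msrs = &cpu_msrs[cpu];

		if (cpu == 0)
			return;	/* CPU#0 already holds the reserved addresses */

		memcpy(msrs->counters, ref->counters,
		       num_counters * sizeof(msrs->counters[0]));
		memcpy(msrs->controls, ref->controls,
		       num_controls * sizeof(msrs->controls[0]));
	}

In the actual patch below, the same copy is done in nmi_save_registers(),
and the single fill_in_addresses() call for CPU#0 is moved into nmi_setup().
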
Andi, if you like neither Ben's patch nor mine, would you mind fixing
oprofile yourself in some way?



Signed-off-by: Davide Libenzi <davidel@...ilserver.org>



- Davide



Index: linux-2.6.22-rc1-git6.oprof/arch/i386/oprofile/nmi_int.c
===================================================================
--- linux-2.6.22-rc1-git6.oprof.orig/arch/i386/oprofile/nmi_int.c	2007-05-17 18:32:48.000000000 -0700
+++ linux-2.6.22-rc1-git6.oprof/arch/i386/oprofile/nmi_int.c	2007-05-17 18:51:28.000000000 -0700
@@ -130,8 +130,19 @@
 static void nmi_save_registers(void * dummy)
 {
 	int cpu = smp_processor_id();
-	struct op_msrs * msrs = &cpu_msrs[cpu];
-	model->fill_in_addresses(msrs);
+	struct op_msrs *msrs = &cpu_msrs[cpu];
+
+	/*
+	 * Use the addresses already allocated for CPU#0 as the reference.
+	 */
+	if (cpu != 0) {
+		const struct op_msrs *rmsrs = &cpu_msrs[0];
+
+		memcpy(msrs->counters, rmsrs->counters,
+		       model->num_counters * sizeof(msrs->counters[0]));
+		memcpy(msrs->controls, rmsrs->controls,
+		       model->num_controls * sizeof(msrs->controls[0]));
+	}
 	nmi_cpu_save_registers(msrs);
 }
 
@@ -195,6 +206,7 @@
 static int nmi_setup(void)
 {
 	int err=0;
+	struct op_msrs *msrs;
 
 	if (!allocate_msrs())
 		return -ENOMEM;
@@ -203,6 +215,14 @@
 		free_msrs();
 		return err;
 	}
+	/*
+	 * MSR reservation should happen only once with the current API,
+	 * otherwise CPUs other than the first would fail to allocate.
+	 * We do it here, and then we propagate the CPU#0 addresses to
+	 * other CPUs.
+	 */
+	msrs = &cpu_msrs[0];
+	model->fill_in_addresses(msrs);
 
 	/* We need to serialize save and setup for HT because the subset
 	 * of msrs are distinct for save and setup operations