Message-ID: <20090728102940.GA6550@aftab>
Date:	Tue, 28 Jul 2009 12:29:40 +0200
From:	Borislav Petkov <borislav.petkov@....com>
To:	"H. Peter Anvin" <hpa@...or.com>
CC:	x86@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] [x86, msr]: execute on the correct CPU subset

On Mon, Jul 27, 2009 at 01:46:26PM -0700, H. Peter Anvin wrote:
> Borislav Petkov wrote:
> >  
> >  	preempt_disable();
> > -	/*
> > -	 * FIXME: handle the CPU we're executing on separately for now until
> > -	 * smp_call_function_many has been fixed to not skip it.
> > -	 */
> >  	this_cpu = raw_smp_processor_id();
> > -	smp_call_function_single(this_cpu, __rdmsr_on_cpu, &rv, 1);
> >  
> > -	smp_call_function_many(mask, __rdmsr_on_cpu, &rv, 1);
> > +	if (cpumask_test_cpu(this_cpu, mask))
> > +		msr_func(&rv);
> > +
> > +	smp_call_function_many(mask, msr_func, &rv, 1);
> >  	preempt_enable();
> >  }
> 
> Any reason not to use get_cpu() ... put_cpu() instead?

None, patch updated.
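(For reference: get_cpu()/put_cpu() bundle what the old code did by hand.
Roughly, per include/linux/smp.h:

	/* disable preemption and return the current CPU number */
	#define get_cpu()	({ preempt_disable(); smp_processor_id(); })
	/* re-enable preemption */
	#define put_cpu()	preempt_enable()

so the explicit preempt_disable()/raw_smp_processor_id() pair is no longer
needed.)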

--
From: Borislav Petkov <borislav.petkov@....com>
Date: Mon, 6 Jul 2009 16:08:34 +0200
Subject: [PATCH] [x86, msr]: execute on the correct CPU subset

rdmsr_on_cpus/wrmsr_on_cpus were erroneously executing on the current
CPU even when it wasn't in the supplied bitmask. Add a check for that
and run the MSR access on the current CPU only if it is in the mask.

While at it, since rdmsr_on_cpus and wrmsr_on_cpus are almost identical,
fold them into a common __rwmsr_on_cpus helper which takes a function
pointer to the actual MSR operation.

Signed-off-by: Borislav Petkov <borislav.petkov@....com>
---
 arch/x86/lib/msr.c |   58 +++++++++++++++++++--------------------------------
 1 files changed, 22 insertions(+), 36 deletions(-)

diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
index 1440b9c..cf879f0 100644
--- a/arch/x86/lib/msr.c
+++ b/arch/x86/lib/msr.c
@@ -71,14 +71,9 @@ int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h)
 }
 EXPORT_SYMBOL(wrmsr_on_cpu);
 
-/* rdmsr on a bunch of CPUs
- *
- * @mask:       which CPUs
- * @msr_no:     which MSR
- * @msrs:       array of MSR values
- *
- */
-void rdmsr_on_cpus(const cpumask_t *mask, u32 msr_no, struct msr *msrs)
+static inline void __rwmsr_on_cpus(const cpumask_t *mask, u32 msr_no,
+				   struct msr *msrs,
+				   void (*msr_func) (void *info))
 {
 	struct msr_info rv;
 	int this_cpu;
@@ -89,16 +84,25 @@ void rdmsr_on_cpus(const cpumask_t *mask, u32 msr_no, struct msr *msrs)
 	rv.msrs	  = msrs;
 	rv.msr_no = msr_no;
 
-	preempt_disable();
-	/*
-	 * FIXME: handle the CPU we're executing on separately for now until
-	 * smp_call_function_many has been fixed to not skip it.
-	 */
-	this_cpu = raw_smp_processor_id();
-	smp_call_function_single(this_cpu, __rdmsr_on_cpu, &rv, 1);
+	this_cpu = get_cpu();
 
-	smp_call_function_many(mask, __rdmsr_on_cpu, &rv, 1);
-	preempt_enable();
+	if (cpumask_test_cpu(this_cpu, mask))
+		msr_func(&rv);
+
+	smp_call_function_many(mask, msr_func, &rv, 1);
+	put_cpu();
+}
+
+/* rdmsr on a bunch of CPUs
+ *
+ * @mask:       which CPUs
+ * @msr_no:     which MSR
+ * @msrs:       array of MSR values
+ *
+ */
+void rdmsr_on_cpus(const cpumask_t *mask, u32 msr_no, struct msr *msrs)
+{
+	__rwmsr_on_cpus(mask, msr_no, msrs, __rdmsr_on_cpu);
 }
 EXPORT_SYMBOL(rdmsr_on_cpus);
 
@@ -112,25 +116,7 @@ EXPORT_SYMBOL(rdmsr_on_cpus);
  */
 void wrmsr_on_cpus(const cpumask_t *mask, u32 msr_no, struct msr *msrs)
 {
-	struct msr_info rv;
-	int this_cpu;
-
-	memset(&rv, 0, sizeof(rv));
-
-	rv.off    = cpumask_first(mask);
-	rv.msrs   = msrs;
-	rv.msr_no = msr_no;
-
-	preempt_disable();
-	/*
-	 * FIXME: handle the CPU we're executing on separately for now until
-	 * smp_call_function_many has been fixed to not skip it.
-	 */
-	this_cpu = raw_smp_processor_id();
-	smp_call_function_single(this_cpu, __wrmsr_on_cpu, &rv, 1);
-
-	smp_call_function_many(mask, __wrmsr_on_cpu, &rv, 1);
-	preempt_enable();
+	__rwmsr_on_cpus(mask, msr_no, msrs, __wrmsr_on_cpu);
 }
 EXPORT_SYMBOL(wrmsr_on_cpus);
 
-- 
1.6.3.3
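
For completeness, a caller would then look roughly like this (illustrative
only, not part of the patch; the MSR number is arbitrary, and the results
land in msrs[] offset by cpumask_first(mask), which is what rv.off is for):

	/* hypothetical example: read MSR_IA32_UCODE_REV on all online CPUs */
	struct msr *msrs;

	msrs = kzalloc(sizeof(*msrs) * nr_cpu_ids, GFP_KERNEL);
	if (!msrs)
		return;

	rdmsr_on_cpus(cpu_online_mask, MSR_IA32_UCODE_REV, msrs);
	/* msrs[cpu - cpumask_first(cpu_online_mask)] now holds that CPU's value */

	kfree(msrs);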


-- 
Regards/Gruss,
Boris.

Operating | Advanced Micro Devices GmbH
  System  | Karl-Hammerschmidt-Str. 34, 85609 Dornach b. München, Germany
 Research | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni
  Center  | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
  (OSRC)  | Registergericht München, HRB Nr. 43632
