Message-ID: <alpine.DEB.2.20.1703281904301.3616@nanos>
Date:   Tue, 28 Mar 2017 19:23:09 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Kan Liang <kan.liang@...el.com>
cc:     peterz@...radead.org, mingo@...hat.com,
        linux-kernel@...r.kernel.org, bp@...en8.de, acme@...nel.org,
        eranian@...gle.com, jolsa@...nel.org, ak@...ux.intel.com
Subject: Re: [PATCH V3 1/2] x86/msr: add msr_set/clear_bit_on_cpu/cpus access
 functions

On Tue, 28 Mar 2017, Thomas Gleixner wrote:
> On Mon, 27 Mar 2017, kan.liang@...el.com wrote:
> 
> > From: Kan Liang <Kan.liang@...el.com>
> > 
> > To flip an MSR bit on many CPUs or on a specific CPU, we currently have to
> > do a read-modify-write operation on the MSR through rd/wrmsr_on_cpu(s).
> > That actually sends two IPIs to the given CPU.
> 
> The IPIs are the least of the problems, really. The real problem is that
> 
>        rdmsr_on_cpu()
>        wrmsr_on_cpu()
> 
> is not atomic. That's what wants to be solved. The reduction of IPIs is
> just a side effect.
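
For illustration, this is roughly what the non-atomic variant boils down to.
It's only a sketch, not code from the patch, and it assumes bit < 32 to keep
the example short:

#include <linux/bitops.h>
#include <asm/msr.h>

/* Sketch of the race: the RMW is split across two cross-CPU calls. */
static int msr_set_bit_on_cpu_racy(unsigned int cpu, u32 msr, u8 bit)
{
	u32 lo, hi;
	int err;

	err = rdmsr_on_cpu(cpu, msr, &lo, &hi);	/* IPI #1: read */
	if (err)
		return err;

	/*
	 * Window: anything running on @cpu (an interrupt, another caller)
	 * can modify the MSR here, and its update is lost by the write below.
	 */
	lo |= BIT(bit);

	return wrmsr_on_cpu(cpu, msr, lo, hi);	/* IPI #2: write back */
}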
> 
> >  #else  /*  CONFIG_SMP  */
> > +static inline int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
> > +{
> > +	return msr_set_bit(msr, bit);
> > +}
> > +
> > +static inline int msr_clear_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
> > +{
> > +	return msr_clear_bit(msr, bit);
> > +}
> > +
> > +static inline void msr_set_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
> > +{
> > +	msr_set_bit(msr, bit);
> > +}
> > +
> > +static inline void msr_clear_bit_on_cpus(const struct cpumask *mask, u32 msr, u8 bit)
> > +{
> > +	msr_clear_bit(msr, bit);
> > +}
> 
> This is utter crap because it's fundamentally different from the SMP
> version.
> 
> msr_set/clear_bit() are not protected by anything. And at your call site
> this is invoked from fully preemptible context. What protects against a
> context switch or interrupts fiddling with DEBUGMSR?
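
Just to make the point concrete: the !SMP stub would at least need the
read-modify-write done with interrupts off to be correct. A minimal sketch,
not an endorsement of the interface:

static inline int msr_set_bit_on_cpu(unsigned int cpu, u32 msr, u8 bit)
{
	unsigned long flags;
	int ret;

	/* Keep interrupts (and thus preemption) away while we do the RMW. */
	local_irq_save(flags);
	ret = msr_set_bit(msr, bit);
	local_irq_restore(flags);

	return ret;
}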

And thinking more about that whole interface. It's just overkill.

diff --git a/arch/x86/lib/msr.c b/arch/x86/lib/msr.c
index d1dee753b949..35763927adaa 100644
--- a/arch/x86/lib/msr.c
+++ b/arch/x86/lib/msr.c
@@ -58,7 +58,7 @@ int msr_write(u32 msr, struct msr *m)
 	return wrmsrl_safe(msr, m->q);
 }
 
-static inline int __flip_bit(u32 msr, u8 bit, bool set)
+int msr_flip_bit(u32 msr, u8 bit, bool set)
 {
 	struct msr m, m1;
 	int err = -EINVAL;
@@ -85,6 +85,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
 
 	return 1;
 }
+EXPORT_SYMBOL_GPL(msr_flip_bit);
 
 /**
  * Set @bit in a MSR @msr.
@@ -96,7 +97,7 @@ static inline int __flip_bit(u32 msr, u8 bit, bool set)
  */
 int msr_set_bit(u32 msr, u8 bit)
 {
-	return __flip_bit(msr, bit, true);
+	return msr_flip_bit(msr, bit, true);
 }
 
 /**
@@ -109,7 +110,7 @@ int msr_set_bit(u32 msr, u8 bit)
  */
 int msr_clear_bit(u32 msr, u8 bit)
 {
-	return __flip_bit(msr, bit, false);
+	return msr_flip_bit(msr, bit, false);
 }
 
 #ifdef CONFIG_TRACEPOINTS

And in the driver:

static void flip_smm_bit(void *data)
{
	int val = *(int *)data;
	
	msr_flip_bit(DEBUGMSR, SMMBIT, val);
}

And in the write function:

       smp_call_function(flip_smm_bit, &val, 1);
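
For completeness, a sketch of how that write path could look. The attribute
name and the value parsing here are made up for illustration, and
on_each_cpu() is used instead of smp_call_function() so the current CPU gets
the update as well (smp_call_function() only covers the other CPUs):

static ssize_t freeze_on_smi_store(struct device *dev,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	unsigned long new;
	int val;

	if (kstrtoul(buf, 0, &new))
		return -EINVAL;

	val = !!new;

	/* Flip the bit on every online CPU, including this one. */
	on_each_cpu(flip_smm_bit, &val, 1);

	return count;
}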

That avoids all the extra interfaces, requires less code, and has a smaller
text footprint when unused.

Thanks,

	tglx
