Message-ID: <20090520144905.GA27991@aftab>
Date:	Wed, 20 May 2009 16:49:05 +0200
From:	Borislav Petkov <borislav.petkov@....com>
To:	"H. Peter Anvin" <hpa@...or.com>
CC:	akpm@...ux-foundation.org, greg@...ah.com, mingo@...e.hu,
	norsk5@...oo.com, tglx@...utronix.de, mchehab@...hat.com,
	aris@...hat.com, edt@....ca, linux-kernel@...r.kernel.org,
	Andreas Herrmann <andreas.herrmann3@....com>
Subject: Re: [PATCH 01/22] x86: add methods for writing of an MSR on
	several CPUs

Hi,

On Tue, May 19, 2009 at 10:18:33PM -0700, H. Peter Anvin wrote:
> Borislav Petkov wrote:
> > +
> > +/* rdmsr on a bunch of CPUs
> > + *
> > + * @mask:	which CPUs
> > + * @msr_no:	which MSR
> > + * @msrs:	array of MSR values
> > + *
> > + * Returns:
> > + * 0 - success
> > + * <0 - read failed on at least one CPU (the last error in the mask is returned)
> > + */
> > +int rdmsr_on_cpus(const cpumask_t *mask, u32 msr_no, struct msr *msrs)
> > +{
> > +	struct msr *reg;
> > +	int cpu, tmp, err = 0;
> > +	int off = cpumask_first(mask);
> > +
> > +	for_each_cpu(cpu, mask) {
> > +		reg = &msrs[cpu - off];
> > +
> > +		tmp = rdmsr_on_cpu(cpu, msr_no, &reg->l, &reg->h);
> > +		if (tmp)
> > +			err = tmp;
> > +	}
> > +	return err;
> > +}
> > +EXPORT_SYMBOL(rdmsr_on_cpus);
> > +
> > +/*
> > + * wrmsr of a bunch of CPUs
> > + *
> > + * @mask:	which CPUs
> > + * @msr_no:	which MSR
> > + * @msrs:	array of MSR values
> > + *
> > + * Returns:
> > + * 0 - success
> > + * <0 - write failed on at least one CPU (the last error in the mask is returned)
> > + */
> > +int wrmsr_on_cpus(const cpumask_t *mask, u32 msr_no, struct msr *msrs)
> > +{
> > +	struct msr reg;
> > +	int cpu, tmp, err = 0;
> > +	int off = cpumask_first(mask);
> > +
> > +	for_each_cpu(cpu, mask) {
> > +		reg = msrs[cpu - off];
> > +
> > +		tmp = wrmsr_on_cpu(cpu, msr_no, reg.l, reg.h);
> > +		if (tmp)
> > +			err = tmp;
> > +	}
> > +	return err;
> > +}
> > +EXPORT_SYMBOL(wrmsr_on_cpus);
> > +
> 
> Okay, now I'm *really* confused.
> 
> I thought the whole point of these functions was to allow these MSR
> references to take place in parallel, as opposed to doing outcalls to
> each CPU in order... but that's exactly what these functions do.
> 
> So what was the point of them again?

We currently need them for enabling the NB error reporting bank through
MCG_CTL on each core of a node. The question is whether we really need
concurrency when accessing an MSR on several cores. For MCG_CTL, the
BKDG says "It is expected that this register is programmed to the same
value in all nodes," but says nothing about concurrency.

But you're right: if this interface is supposed to be generic, it is
probably wise to access the MSR on all cores concurrently; I can imagine
an obscure case where that is required. However, does sending IPIs
(smp_call_function_many) guarantee the needed concurrency? Or should
this work more like the mtrr code, which jumps through hoops in
set_mtrr() to ensure that _ALL_ registers have been written _before_
continuing?

Opinions? Flames?

-- 
Regards/Gruss,
Boris.

Operating | Advanced Micro Devices GmbH
  System  | Karl-Hammerschmidt-Str. 34, 85609 Dornach b. München, Germany
 Research | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni
  Center  | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
  (OSRC)  | Registergericht München, HRB Nr. 43632

