Message-ID: <alpine.DEB.2.21.1809080839110.1402@nanos.tec.linutronix.de>
Date:   Sat, 8 Sep 2018 08:45:09 +0200 (CEST)
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Jiri Kosina <jikos@...nel.org>
cc:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Andrea Arcangeli <aarcange@...hat.com>,
        "Woodhouse, David" <dwmw@...zon.co.uk>,
        Andi Kleen <ak@...ux.intel.com>,
        Tim Chen <tim.c.chen@...ux.intel.com>,
        "Schaufler, Casey" <casey.schaufler@...el.com>,
        linux-kernel@...r.kernel.org, x86@...nel.org
Subject: Re: [PATCH v4 2/2] x86/speculation: Enable cross-hyperthread spectre
 v2 STIBP mitigation

On Fri, 7 Sep 2018, Jiri Kosina wrote:

> On Fri, 7 Sep 2018, Thomas Gleixner wrote:
> 
> > > + * The read-modify-write of the MSR doesn't need any race protection here,
> > > + * as we're running in atomic context.
> > > + */
> > > +static void enable_stibp(void *info)
> > > +{
> > > +	u64 mask;
> > > +	rdmsrl(MSR_IA32_SPEC_CTRL, mask);
> > > +	mask |= SPEC_CTRL_STIBP;
> > > +	wrmsrl(MSR_IA32_SPEC_CTRL, mask);
> > > +}
> > 
> > You need to write x86_spec_ctrl_base 
> 
> Agreed (Josh pointed that out as well) -- otherwise this gets lost during 
> VMEXIT at least.

It's not only virt. The firmware call paths restore SPEC_CTRL from
x86_spec_ctrl_base as well, so the bit gets lost there too.
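
Something along these lines (untested sketch, only to show the direction;
not the actual patch):

	/*
	 * Untested sketch: keep the STIBP bit in x86_spec_ctrl_base so the
	 * VMEXIT and firmware call paths, which restore SPEC_CTRL from
	 * x86_spec_ctrl_base, do not lose it. The IPI callback then only
	 * needs to write the base value.
	 */
	static void enable_stibp(void *info)
	{
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
	}

	/* in the caller, before kicking the IPIs: */
	x86_spec_ctrl_base |= SPEC_CTRL_STIBP;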

> > so again. x86_spec_ctrl_base needs the STIPB bit update when the
> > enable/disable changes.
> 
> Agreed, but that's not sufficient. We need to update the MSR value on 
> primary threads as well (which we do in the sysfs store path), so this is 
> merely an optimization so that we don't do it pointlessly twice on 
> siblings.

No. It's a correctness issue. If a normal hotplug operation happens after
SMT has been switched to enabled, then the newly onlined CPU needs to get
the updated MSR value as well.
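
The hotplug case falls out naturally once the bit lives in
x86_spec_ctrl_base, because the AP bringup path already writes the MSR
from there; roughly (sketch of the existing hook, SSBD details elided):

	/* Sketch of the existing AP bringup hook in arch/x86/kernel/cpu/bugs.c */
	void x86_spec_ctrl_setup_ap(void)
	{
		if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
			wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
	}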

> > If you do that _before_ the loop then you spare 50% IPIs again because the
> > siblings will do the right thing via x86_spec_ctrl_base.
> 
> So I will go through the whole codepath again, but I fear your suggestion 
> would not work -- see the check for cpu_smt_control in stibp_needed(). We 
> need to see the old (or new, depending on the direction of the transition) 
> value of cpu_smt_control, which will break if we move 
> arch_smt_enable_errata() (and thus the check).

That's bogus. The arch_smt_control(enable) function does not need to look
at the SMT control state at all. The caller hands in the new, not yet
populated state, and that is enough to decide whether to set or clear the
bit in x86_spec_ctrl_base.
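
In other words, something like this (illustrative only; the function and
helper names below are made up, not taken from an actual patch):

	/* Illustrative sketch only -- names invented for this example */
	static void update_stibp_msr(void *info)
	{
		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
	}

	/* Caller hands in the new SMT state before cpu_smt_control is updated */
	void arch_smt_update(bool smt_enabled)
	{
		if (smt_enabled)
			x86_spec_ctrl_base |= SPEC_CTRL_STIBP;
		else
			x86_spec_ctrl_base &= ~SPEC_CTRL_STIBP;

		/* Every online CPU (including all siblings) picks up the new base */
		on_each_cpu(update_stibp_msr, NULL, 1);
	}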

Thanks,

	tglx
