Date:   Wed, 15 Dec 2021 11:10:38 +0000
From:   Mark Rutland <mark.rutland@....com>
To:     David Woodhouse <dwmw2@...radead.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
        "H . Peter Anvin" <hpa@...or.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        "Paul E . McKenney" <paulmck@...nel.org>,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        rcu@...r.kernel.org, mimoja@...oja.de, hewenliang4@...wei.com,
        hushiyuan@...wei.com, luolongjun@...wei.com, hejingxian@...wei.com
Subject: Re: [PATCH v2 3/7] cpu/hotplug: Add dynamic parallel bringup states
 before CPUHP_BRINGUP_CPU

On Tue, Dec 14, 2021 at 08:32:29PM +0000, David Woodhouse wrote:
> On Tue, 2021-12-14 at 14:24 +0000, Mark Rutland wrote:
> > On Tue, Dec 14, 2021 at 12:32:46PM +0000, David Woodhouse wrote:
> > > From: David Woodhouse <dwmw@...zon.co.uk>
> > > 
> > > If the platform registers these states, bring all CPUs to each registered
> > > state in turn, before the final bringup to CPUHP_BRINGUP_CPU. This allows
> > > the architecture to parallelise the slow asynchronous tasks like sending
> > > INIT/SIPI and waiting for the AP to come to life.
> > > 
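For anyone skimming: registering one of these states would presumably look
much like claiming the existing CPUHP_BP_PREPARE_DYN / CPUHP_AP_ONLINE_DYN
ranges. A minimal sketch, assuming cpuhp_setup_state_nocalls() hands out the
dynamic state as it does for those ranges today; the "x86/cpu:kick" name and
the native_cpu_kick() callback are made up for illustration:

  #include <linux/cpuhotplug.h>
  #include <linux/init.h>

  /* Placeholder: kick @cpu (e.g. send INIT/SIPI) and return without waiting. */
  static int native_cpu_kick(unsigned int cpu)
  {
          return 0;
  }

  static int __init register_parallel_kick(void)
  {
          int ret;

          /*
           * For the *_DYN ranges, cpuhp_setup_state_nocalls() returns the
           * dynamically allocated state (> 0) on success, < 0 on error.
           */
          ret = cpuhp_setup_state_nocalls(CPUHP_BP_PARALLEL_DYN, "x86/cpu:kick",
                                          native_cpu_kick, NULL);
          return ret < 0 ? ret : 0;
  }
  /* early_initcalls run before smp_init() brings up the secondary CPUs. */
  early_initcall(register_parallel_kick);
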
> > > There is a subtlety here: even with an empty CPUHP_BP_PARALLEL_DYN step,
> > > this means that *all* CPUs are brought through the prepare states and to
> > > CPUHP_BP_PREPARE_DYN before any of them are taken to CPUHP_BRINGUP_CPU
> > > and then are allowed to run for themselves to CPUHP_ONLINE.
> > > 
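Concretely, the ordering described above comes out roughly as below. This is
an illustration of the two phases only, not the actual patch code; the
CPUHP_BP_PARALLEL_DYN_END limit, the cpuhp_state_registered() helper and the
direct use of the kernel/cpu.c-internal cpu_up() are assumptions made for the
sake of the sketch:

  /*
   * Illustration of the ordering only; error handling and "maxcpus="
   * clamping are omitted.
   */
  static void parallel_bringup_sketch(void)
  {
          enum cpuhp_state st;
          unsigned int cpu;

          /*
           * Phase 1: walk *every* present CPU through each registered
           * parallel state in turn, so the slow asynchronous work (e.g.
           * sending INIT/SIPI) is issued for all APs before anyone waits
           * on any of them.
           */
          for (st = CPUHP_BP_PARALLEL_DYN; st <= CPUHP_BP_PARALLEL_DYN_END; st++) {
                  if (!cpuhp_state_registered(st))
                          continue;

                  for_each_present_cpu(cpu) {
                          if (!cpu_online(cpu))
                                  cpu_up(cpu, st);
                  }
          }

          /*
           * Phase 2: the traditional serial bringup.  Each CPU is taken
           * to CPUHP_BRINGUP_CPU in turn and from there runs itself the
           * rest of the way to CPUHP_ONLINE.
           */
          for_each_present_cpu(cpu) {
                  if (!cpu_online(cpu))
                          cpu_up(cpu, CPUHP_ONLINE);
          }
  }

With a serial bringup each CPU finishes both phases before the next one
starts; with the ordering above, phase 1 completes for every CPU before
phase 2 starts for any of them, which is exactly the A-B assumption the
next paragraph is about.
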
> > > So any combination of prepare/start calls that depends on A-B
> > > ordering for each CPU in turn would explode horribly. Take the
> > > X2APIC code, which used to allocate a cluster mask 'just in case'
> > > and store it in a global variable in the prep stage, then
> > > potentially consume that preallocated structure from the AP and set
> > > the global pointer to NULL, to be reallocated in
> > > CPUHP_X2APIC_PREPARE for the next CPU.
> > > 
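To make the failure mode concrete, the pattern being described is roughly
the following. This is a simplified model with illustrative names, not the
real x2apic cluster code:

  #include <linux/cpumask.h>
  #include <linux/percpu.h>
  #include <linux/slab.h>

  /* Illustrative stand-in for the per-cluster bookkeeping structure. */
  struct cluster_mask {
          unsigned int    clusterid;
          cpumask_t       mask;
  };

  static DEFINE_PER_CPU(struct cluster_mask *, cluster_mask);
  static struct cluster_mask *cluster_hotplug_mask;  /* single global slot */

  /* BP side, called from the CPUHP_X2APIC_PREPARE-style step. */
  static int prepare_cpu_sketch(unsigned int cpu)
  {
          /* Preallocate 'just in case' the incoming CPU starts a new cluster. */
          if (!cluster_hotplug_mask)
                  cluster_hotplug_mask = kzalloc(sizeof(*cluster_hotplug_mask),
                                                 GFP_KERNEL);
          return cluster_hotplug_mask ? 0 : -ENOMEM;
  }

  /* AP side, run early on the freshly started CPU. */
  static void ap_startup_sketch(void)
  {
          /*
           * Consume the preallocation and clear the global pointer,
           * relying on the *next* CPU's prepare step to refill it before
           * the next AP gets here.
           */
          this_cpu_write(cluster_mask, cluster_hotplug_mask);
          cluster_hotplug_mask = NULL;
  }

With the serialized bringup, the prepare step for CPU N+1 refills the global
before CPU N+1 ever runs its AP-side code. With the ordering above, every
prepare call runs before any AP does: only one mask is ever allocated, the
first AP takes it and clears the pointer, and every later AP inherits NULL,
which is the "explode horribly" above.
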
> > > We believe that X2APIC was the only such case, for x86. But this is why
> > > it remains an architecture opt-in. For now.
> > 
> > It might be worth elaborating with a non-x86 example, e.g.
> > 
> > >  We believe that X2APIC was the only such case, for x86. Other architectures
> > >  have similar requirements with global variables used during bringup (e.g.
> > >  `secondary_data` on arm/arm64), so architectures must opt-in for now.
> > 
> > ... so that we have a specific example of how unconditionally enabling this for
> > all architectures would definitely break things today.
> 
> I do not have such an example, and I do not know that it would
> definitely break things to turn it on for all architectures today.
> 
> The x2apic one is an example of why it *might* break random
> architectures and thus why it needs to be an architecture opt-in.

Ah; I had thought we did the `secondary_data` setup in a PREPARE step, and
hence it was a comparable example, but I was mistaken. Sorry for the noise!

> > FWIW, that's something I would like to clean up for arm64 for general
> > robustness, and if that would make it possible for us to have parallel
> > bringup in future, that would be a nice bonus.
> 
> Yes. But although I lay the groundwork here, the arch can't *actually*
> do parallel bringup without some arch-specific work, so auditing the
> pre-bringup states is the easy part. :)

Sure; that was trying to be a combination of:

* This looks nice, I'd like to use this (eventually) on arm64.

* I'm aware of some arm64-specific groundwork we need to do before arm64 can
  use this.

So I think we're agreed. :)

Thanks,
Mark.
