Date:   Tue, 06 Jun 2023 09:20:10 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Sean Christopherson <seanjc@...gle.com>
Cc:     LKML <linux-kernel@...r.kernel.org>, x86@...nel.org,
        Ashok Raj <ashok.raj@...ux.intel.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Tony Luck <tony.luck@...el.com>,
        Arjan van de Ven <arjan@...ux.intel.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Eric Biederman <ebiederm@...ssion.com>
Subject: Re: [patch 0/6] Cure kexec() vs. mwait_play_dead() troubles

On Mon, Jun 05 2023 at 16:08, Sean Christopherson wrote:
> On Tue, Jun 06, 2023, Thomas Gleixner wrote:
>> On Mon, Jun 05 2023 at 10:41, Sean Christopherson wrote:
>> > On Sat, Jun 03, 2023, Thomas Gleixner wrote:
>> >> This is only half safe because HLT can resume execution due to NMI, SMI and
>> >> MCE. Unfortunately there is no real safe mechanism to "park" a CPU reliably,
>> >
>> > On Intel.  On AMD, enabling EFER.SVME and doing CLGI will block everything except
>> > single-step #DB (lol) and RESET.  #MC handling is implementation-dependent and
>> > *might* cause shutdown, but at least there's a chance it will work.  And presumably
>> > modern CPUs do pend the #MC until GIF=1.
>> 
>> Abusing SVME for that is definitely in the realm of creative bonus
>> points, but not necessarily a general purpose solution.
>
> Heh, my follow-up ideas for Intel are to abuse XuCode or SEAM ;-)

I feared that :)
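
For completeness, the CLGI idea would look roughly like the sketch below
(untested, assumes SVM is not disabled by firmware, function name made
up). Whether it's worth the EFER fiddling is a separate question.

static void __noreturn park_cpu_clgi(void)
{
        u64 efer;

        /* CLGI #UDs unless EFER.SVME is set */
        rdmsrl(MSR_EFER, efer);
        wrmsrl(MSR_EFER, efer | EFER_SVME);

        /* GIF=0: blocks everything except single-step #DB and RESET */
        asm volatile("clgi" ::: "memory");

        for (;;)
                native_halt();
}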

>> >> So parking them via INIT is not completely solving the problem, but it
>> >> takes at least NMI and SMI out of the picture.
>> >
>> > Don't most SMM handlers rendezvous all CPUs?  I.e. won't blocking SMIs indefinitely
>> > potentially cause problems too?
>> 
>> Not that I'm aware of. If so, then this would be a hideous firmware bug,
>> as firmware must be aware of CPUs which hang around in INIT independently
>> of this.
>
> I was thinking of the EDKII code in UefiCpuPkg/PiSmmCpuDxeSmm/MpService.c, e.g.
> SmmWaitForApArrival().  I've never dug deeply into how EDKII uses SMM, what its
> timeouts are, etc., I just remember coming across that code when poking around
> EDKII for other stuff.

There is a comment:

  Note the SMI Handlers must ALWAYS take into account the cases that not
  all APs are available in an SMI run.

Also, not all SMIs require global synchronization. But it's all an
impenetrable mess...

>> Making this work for regular kexec() including this:
>> 
>> > To avoid OOM after many kexec(), reserving a page could be done iff
>> > the current kernel wasn't itself kexec()'d.
>> 
>> would be possible and I thought about it, but that needs a completely new
>> design of "offline", "shutdown offline" and a non-trivial amount of
>> backwards compatibility magic, because you can't assume that the kexec()
>> kernel version is greater than or equal to the current one. kexec() is
>> supposed to work both ways, downgrading and upgrading. IOW, that ship
>> sailed long ago.
>
> Right, but doesn't gaining "full" protection require ruling out unenlightened
> downgrades?  E.g. if someone downgrades to an old kernel, doesn't hide the "offline"
> CPUs from the kexec() kernel, and boots the old kernel with -nosmt or whatever,
> then that old kernel will do the naive MWAIT or unprotected HLT and
> it's hosed again.

Of course.

> If we're relying on the admin to hide the offline CPUs, could we usurp
> an existing kernel param to hide a small chunk of memory instead?

The only "safe" place is below 1M I think. Not sure whether we have
some existing command line option to "hide" a range there. Neither am I
sure that this would be always the same range.
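
The closest thing that comes to mind is memmap=, e.g. something like

  memmap=4K$0x9e000

to carve a 4K hole out below 1M (address picked arbitrarily, and the '$'
needs the usual bootloader escaping). Whether such a reservation is early
enough and stays the same across a kexec chain is yet another question.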

More questions than answers :)

Thanks

        tglx



