Message-ID: <CABgObfZ-dFnWK46pyvuaO8TKEKC5pntqa1nXm-7Cwr0rpg5a3w@mail.gmail.com>
Date: Wed, 17 Apr 2024 00:03:21 +0200
From: Paolo Bonzini <pbonzini@...hat.com>
To: boris.ostrovsky@...cle.com
Cc: kvm@...r.kernel.org, seanjc@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] KVM/x86: Do not clear SIPI while in SMM

On Tue, Apr 16, 2024 at 10:57 PM <boris.ostrovsky@...cle.com> wrote:
> On 4/16/24 4:53 PM, Paolo Bonzini wrote:
> > On 4/16/24 22:47, Boris Ostrovsky wrote:
> >> Keeping the SIPI pending avoids this scenario.
> >
> > This is incorrect - it's yet another ugly legacy facet of x86, but we
> > have to live with it.  SIPI is discarded because the code is supposed
> > to retry it if needed ("INIT-SIPI-SIPI").
>
> I couldn't find in the SDM/APM a definitive statement about whether SIPI
> is supposed to be dropped.

I think the manual is pretty consistent that SIPIs are never latched;
they're only ever acted on in the wait-for-SIPI state.
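To make those semantics concrete, here is a minimal C sketch of the
receiver side; struct vcpu, process_pending_sipi() and start_at() are
hypothetical names, not KVM's actual code, just the behavior described
above:

    #include <stdbool.h>

    enum mp_state {
            MP_RUNNABLE,
            MP_WAIT_FOR_SIPI,       /* entered after INIT */
    };

    struct vcpu {
            enum mp_state mp_state;
            bool in_smm;
            bool sipi_pending;
            unsigned char sipi_vector;
    };

    /* Hypothetical helper: begin real-mode execution at 'rip'. */
    extern void start_at(struct vcpu *v, unsigned long rip);

    void process_pending_sipi(struct vcpu *v)
    {
            if (!v->sipi_pending)
                    return;
            v->sipi_pending = false;        /* consumed either way */

            /*
             * A SIPI is never latched: it only takes effect while
             * the vCPU is in wait-for-SIPI state and not in SMM.
             */
            if (v->mp_state == MP_WAIT_FOR_SIPI && !v->in_smm) {
                    start_at(v, (unsigned long)v->sipi_vector << 12);
                    v->mp_state = MP_RUNNABLE;
            }
            /*
             * Otherwise the SIPI is simply discarded; the sender is
             * expected to retry ("INIT-SIPI-SIPI").
             */
    }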

> > The sender should set a flag as early as possible in the code that
> > the SIPI starts, so that the sender can tell whether the SIPI was
> > received; and an extra SIPI is not a problem, it will be ignored
> > anyway and will not cause trouble if there's a race.
> >
> > What is the reproducer for this?
>
> Hotplugging/unplugging CPUs in a loop, especially if you oversubscribe
> the guest, will get you there in 10-15 minutes.
>
> Typically (although I think not always) this happens when OVMF is
> trying to rendezvous, a processor is missing, and it is sent an extra SMI.

Can you go into more detail? I wasn't even aware that OVMF's SMM
supported hotplug - on real hardware I think there's extra work from
the BMC to coordinate all SMIs across both existing and hotplugged
packages(*).

What should happen is that SMIs are blocked on the new CPUs, so that
only the existing CPUs answer. The existing CPUs restore the 0x30000
segment (the default SMBASE area) to prepare the SMI entry for the new
CPUs, and then send an INIT-SIPI to start the SMI on them. Does OVMF
do anything like that?
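For the other half, a minimal sketch of the sender side of the
INIT-SIPI-SIPI handshake with the retry mentioned above;
apic_icr_write(), udelay() and the ap_started flag are hypothetical
helpers, not actual OVMF or kernel interfaces:

    #include <stdbool.h>
    #include <stdint.h>

    #define ICR_INIT        0x00004500u     /* INIT, level assert */
    #define ICR_STARTUP     0x00004600u     /* Start-up IPI */

    /* Hypothetical helpers. */
    extern void apic_icr_write(uint32_t apic_id, uint32_t icr_low);
    extern void udelay(unsigned int usec);

    /* Set by the target CPU as early as possible in its startup stub. */
    extern volatile bool ap_started;

    bool start_cpu(uint32_t apic_id, uint8_t vector)
    {
            int tries;

            apic_icr_write(apic_id, ICR_INIT);
            udelay(10000);                  /* settle after INIT */

            /*
             * Send the SIPI, and retry if the flag did not appear:
             * a SIPI is discarded unless the target is in
             * wait-for-SIPI state, and an extra SIPI sent to an
             * already-started CPU is ignored, so retrying is safe
             * even if there's a race.
             */
            for (tries = 0; tries < 2; tries++) {
                    apic_icr_write(apic_id, ICR_STARTUP | vector);
                    udelay(200);
                    if (ap_started)
                            return true;
            }
            return ap_started;
    }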

Paolo

