Message-ID: <20161205205728.GB7972@potion>
Date:   Mon, 5 Dec 2016 21:57:28 +0100
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     David Hildenbrand <david@...hat.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Igor Mammedov <imammedo@...hat.com>
Subject: Re: [PATCH 4/4] KVM: x86: allow hotplug of VCPU with APIC ID over
 0xff

2016-12-05 19:00+0100, David Hildenbrand:
> On 05.12.2016 at 17:02, Radim Krčmář wrote:
>> 2016-12-05 15:37+0100, David Hildenbrand:
>> > On 02.12.2016 at 20:44, Radim Krčmář wrote:
>> > > LAPIC after reset is in xAPIC mode, which poses a problem for hotplug of
>> > > VCPUs with high APIC ID, because reset VCPU is waiting for INIT/SIPI,
>> > > but there is no way to uniquely address it using xAPIC.
>> > > 
>> > > From many possible options, we chose the one that also works on real
>> > > hardware: accepting interrupts addressed to LAPIC's x2APIC ID even in
>> > > xAPIC mode.
>> > > 
>> > > KVM intentionally differs from real hardware, because real hardware
>> > > (Knights Landing) does just "x2apic_id & 0xff" to decide whether to
>> > > accept the interrupt in xAPIC mode and it can deliver one interrupt to
>> > > more than one physical destination, e.g. 0x123 to 0x123 and 0x23.
>> > > 
>> > > Add a capability to let userspace know that we do something now.
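
For illustration, a rough C sketch of the two physical-destination
matching rules described in the quoted commit message (not the actual
KVM code; logical mode and the 0xff broadcast are omitted):

#include <stdbool.h>
#include <stdint.h>

/* Knights Landing: compare only the low 8 bits, so destination 0x123
 * is accepted by both x2APIC ID 0x123 and x2APIC ID 0x23. */
static bool knl_xapic_phys_match(uint32_t x2apic_id, uint32_t dest)
{
	return (x2apic_id & 0xff) == (dest & 0xff);
}

/* This patch: additionally accept an exact match on the full x2APIC
 * ID, so 0x123 reaches only the VCPU whose x2APIC ID is 0x123 and
 * hotplug INIT/SIPI can address it uniquely. */
static bool kvm_xapic_phys_match(uint32_t x2apic_id, uint8_t xapic_id,
				 uint32_t dest)
{
	return dest == xapic_id || dest == x2apic_id;
}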
>> > 
>> > Should we allow user space to turn it on/off for compatibility handling? Or
>> > do we just not care?
>> 
>> There should be no guest that relies on the previous behavior, so I'd
>> forgo the toggle, because it would be extra conditions in the code.
>> I'd add it as a flag to KVM_CAP_X2APIC_API if you have reasons to let
>> userspace choose.
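
If it were made a flag, enabling it could look roughly like this
(KVM_X2APIC_API_XAPIC_MATCH is a made-up name for the hypothetical
flag; KVM_CAP_X2APIC_API and KVM_ENABLE_CAP are real):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical flag; bits 0 and 1 are taken by
 * KVM_X2APIC_API_USE_32BIT_IDS and
 * KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK. */
#define KVM_X2APIC_API_XAPIC_MATCH (1ULL << 2)

static int enable_xapic_match(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_X2APIC_API,
		.args = { KVM_X2APIC_API_USE_32BIT_IDS |
			  KVM_X2APIC_API_XAPIC_MATCH },
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}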
> 
> Okay, I see. So if existing user space/guests don't break, there is no reason
> to make it configurable. I was just not sure if user space might want to
> decide whether to act "the old way".

I also don't see a reason for userspace to want it disabled -- it just
shouldn't matter even if userspace implements another solution (e.g. it
hotplugs VCPUs in x2APIC mode) or KVM ends up with a better solution.
Any change can break some guest, but I couldn't come up with anything
reasonable that would be broken.

>> >                      (or how will this capability be used later on?)
>> 
>> New userspace should check this capability and disable hotplug of VCPUs
>> with id over 255 if KVM doesn't support it.
>> 
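
A sketch of that userspace check; KVM_CAP_HOTPLUG_HIGH_APIC_ID is a
placeholder for whatever capability number the merged patch ends up
with, while KVM_CAP_MAX_VCPU_ID already exists:

#include <linux/kvm.h>
#include <sys/ioctl.h>

#define KVM_CAP_HOTPLUG_HIGH_APIC_ID 0	/* placeholder, not in kvm.h */

static int max_hotpluggable_vcpu_id(int kvm_fd)
{
	int max_id = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);

	/* Without the new capability, VCPUs with an APIC ID over 255
	 * can be created but not hotplugged, so clamp the usable range. */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_HOTPLUG_HIGH_APIC_ID) <= 0)
		max_id = max_id > 255 ? 255 : max_id;

	return max_id;
}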
> 
> Wonder if this is actually a bugfix for allowing KVM_MAX_VCPU_ID to
> be > 255. Currently it is somewhat like

Good point, it is, for guests that want hotplug.  I'll add a Fixes: line;
thanks!

> "yes, I support VCPU ids with > 255, but no, you can't really hotplug
> such CPUs".

My bad, offline/online in Linux worked fine so I didn't think enough
about hotplug.

> (the fix for older kernels would then be limiting the VCPU ID to 255,
> not introducing a new capability).
> 
> But I am no expert on that topic, so feel free to ignore.

I think the agreement is to embrace compatibility, so we pile on new
mistakes to hide known ones.
(Rewriting the past requires far more power than accepting it:
 unless we forced unfixed kernels out of existence, userspace
 couldn't tell whether hotplug up to a high VCPU ID limit is supported.)

> The general idea of this patch makes sense to me (x2apic hack)!

The situation would be a bit better if the xAPIC ID were read-only (we'd
behave more like real hardware then), but no major OS changes the ID,
which makes it a secondary concern with weird corner cases.
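
For reference, the write in question, as a guest-side sketch: the
xAPIC ID sits in bits 31:24 of the APIC ID register at offset 0x20
from the MMIO base (0xfee00000 by default):

#include <stdint.h>

/* Change this CPU's xAPIC ID; apic_base maps the xAPIC MMIO page.
 * This is the rarely exercised operation that a read-only ID would
 * forbid. */
static void set_xapic_id(volatile uint32_t *apic_base, uint8_t id)
{
	apic_base[0x20 / 4] = (uint32_t)id << 24;
}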
