lists.openwall.net — Open Source and information security mailing list archives
Message-ID: <f019cc1f-daa8-869b-6c06-0e2586cdf0a8@redhat.com>
Date:   Tue, 23 Jun 2020 02:52:58 +0200
From:   Paolo Bonzini <pbonzini@...hat.com>
To:     Andy Lutomirski <luto@...nel.org>,
        Tom Lendacky <thomas.lendacky@....com>,
        Mohammed Gamal <mgamal@...hat.com>, kvm@...r.kernel.org
Cc:     linux-kernel@...r.kernel.org, vkuznets@...hat.com,
        sean.j.christopherson@...el.com, wanpengli@...cent.com,
        jmattson@...gle.com, joro@...tes.org, babu.moger@....com
Subject: Re: [PATCH v2 00/11] KVM: Support guest MAXPHYADDR < host MAXPHYADDR

On 23/06/20 01:47, Andy Lutomirski wrote:
> I believe that Xen does this.  Linux does not.)  For a guest to
> actually be functional in this case, the guest needs to make sure
> that it is not setting bits that are not, in fact, reserved on the
> CPU.  This means the guest needs to check MAXPHYADDR and do something
> different on different CPUs.
> 
> Do such guests exist?

I don't know; KVM at least does the same when EPT is disabled.  It
tries to minimize the effect of this issue by preferring bit 51, but
that does not help if the host MAXPHYADDR is 52.
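Concretely, the set of reserved physical-address bits in a PTE shrinks as MAXPHYADDR grows, and vanishes entirely at 52 (bit 51 is the top physical-address bit in a 4-level page-table entry). A rough sketch of the mask computation; the helper name is illustrative, not KVM's actual code:

```c
#include <stdint.h>

/* Physical-address bits of a PTE that are reserved on a CPU with the
 * given MAXPHYADDR: bits [51:maxphyaddr].  With MAXPHYADDR == 52 there
 * are no reserved physical-address bits left, so the trick of marking
 * special PTEs with a reserved bit (e.g. bit 51) no longer works. */
static uint64_t rsvd_pa_mask(unsigned int maxphyaddr)
{
    if (maxphyaddr >= 52)
        return 0;  /* no reserved physical-address bits available */
    return ((1ULL << (52 - maxphyaddr)) - 1) << maxphyaddr;
}
```

E.g. with the common MAXPHYADDR of 40 the mask covers bits 40..51, while a MAXPHYADDR-52 host returns an empty mask.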

> As far as I know, Xen is busted on systems
> with unusually large MAXPHYADDR regardless of any virtualization
> issues, so, at best, this series would make Xen, running as a KVM
> guest, work better on new hardware than it does running bare metal on
> that hardware.  This seems like an insufficient justification for a
> performance-eating series like this.
> 
> And, unless I've misunderstood, this will eat performance quite
> badly.  Linux guests [0] (and probably many other guests), in quite a
> few workloads, are fairly sensitive to the performance of ordinary
> write-protect or not-present faults.  Promoting these to VM exits 
> because you want to check for bits above the guest's MAXPHYADDR is
> going to hurt.

The series indeed needs benchmarking; note, however, that the vmexits do
not occur for not-present faults.  QEMU sets a fixed MAXPHYADDR of 40,
but that is generally a bad idea, and several distros change it to use
the host MAXPHYADDR instead (which would disable the new code).
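For reference, the two configurations being contrasted can be expressed with QEMU's x86 CPU properties (a sketch, assuming current QEMU's `phys-bits`/`host-phys-bits` options; the rest of the command line is elided):

```shell
# Fixed guest MAXPHYADDR (QEMU's default is 40 unless overridden):
qemu-system-x86_64 -cpu Skylake-Server,phys-bits=40 ...

# Pass the host's MAXPHYADDR through to the guest, as several distros
# default to; with this, the new trapping code would stay disabled:
qemu-system-x86_64 -cpu host,host-phys-bits=on ...
```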

> (Also, I'm confused.  Wouldn't faults like this be EPT/NPT
> violations, not page faults?)

Only if the pages are actually accessible.  Otherwise, W/U/F faults
would prevail over the RSVD fault.  Tom is saying that there's no
architectural promise that RSVD faults prevail, either, so that would
remove the need to trap #PF.
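The fault classes being discussed correspond to bits of the x86 #PF error code; a minimal sketch of decoding them (bit layout per the architectural definition; the helper name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* x86 #PF error-code bits */
#define PF_P    (1u << 0)   /* 0 = page not present, 1 = protection violation */
#define PF_W    (1u << 1)   /* fault was on a write access */
#define PF_U    (1u << 2)   /* fault occurred in user mode */
#define PF_RSVD (1u << 3)   /* a reserved bit was set in a paging structure */
#define PF_F    (1u << 4)   /* fault was on an instruction fetch (I/D) */

/* True only when the hardware reported the reserved-bit cause; if
 * W/U/F causes prevail instead, RSVD is not set and a hypervisor
 * trapping #PF to catch reserved-bit faults would not see it here. */
static bool is_rsvd_fault(uint32_t error_code)
{
    return (error_code & PF_RSVD) != 0;
}
```

This is why the ordering matters: if the CPU reports the fault as a plain W/U/F fault rather than RSVD, the reserved-bit information is simply absent from the error code.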

Paolo

> --Andy
> 
> 
> [0] From rather out-of-date memory, Linux doesn't make as much use
> as one might expect of the A bit.  Instead it uses minor faults.
> Ouch.
