Date:   Tue, 12 Apr 2022 17:54:14 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Ben Gardon <bgardon@...gle.com>
Cc:     linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
        Paolo Bonzini <pbonzini@...hat.com>,
        Peter Xu <peterx@...hat.com>, Peter Shier <pshier@...gle.com>,
        David Dunn <daviddunn@...gle.com>,
        Junaid Shahid <junaids@...gle.com>,
        Jim Mattson <jmattson@...gle.com>,
        David Matlack <dmatlack@...gle.com>,
        Mingwei Zhang <mizhang@...gle.com>,
        Jing Zhang <jingzhangos@...gle.com>
Subject: Re: [PATCH v4 07/10] KVM: x86/MMU: Allow NX huge pages to be
 disabled on a per-vm basis

On Mon, Apr 11, 2022, Ben Gardon wrote:
> @@ -6079,6 +6080,11 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
>  		}
>  		mutex_unlock(&kvm->lock);
>  		break;
> +	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
> +		kvm->arch.disable_nx_huge_pages = true;

It's probably worth requiring cap->args[0] to be zero; KVM has been burned too many
times by a lack of extensibility.
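
Something like this (untested, following the error handling of the neighboring
cases in kvm_vm_ioctl_enable_cap()):

	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
		/* Reject non-zero args[0] to leave room for future extension. */
		r = -EINVAL;
		if (cap->args[0])
			break;

		kvm->arch.disable_nx_huge_pages = true;
		kvm_update_nx_huge_pages(kvm);
		r = 0;
		break;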

> +		kvm_update_nx_huge_pages(kvm);

Is there actually a use case for doing this while the VM is running?  Given that this
is a one-way control, i.e. userspace can't re-enable the mitigation, methinks the
answer is no.  And logically, I don't see why userspace would suddenly decide to
trust the guest at some random point in time.

So, require this to be done before vCPUs are created; then there's no need to
zap SPTEs because there can't be any SPTEs to zap, and the previous patch also
goes away.  Or, to be really draconian, disallow the cap if memslots have been
created, though I think created_vcpus will be sufficient now and in the future.
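
E.g. something like this (again untested; kvm->lock guards against a racing vCPU
creation, and the args[0] check from above is folded in):

	case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
		r = -EINVAL;
		if (cap->args[0])
			break;

		mutex_lock(&kvm->lock);
		if (!kvm->created_vcpus) {
			/* No vCPUs => no shadow pages => nothing to zap. */
			kvm->arch.disable_nx_huge_pages = true;
			r = 0;
		}
		mutex_unlock(&kvm->lock);
		break;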

We can always lift the restriction if someone has a use case for toggling this
while the VM is running, but we can't do the reverse.
