Message-ID: <Y1bpSlNGeVkqRYxI@google.com>
Date:   Mon, 24 Oct 2022 19:36:42 +0000
From:   Sean Christopherson <seanjc@...gle.com>
To:     Vipin Sharma <vipinsh@...gle.com>
Cc:     Vitaly Kuznetsov <vkuznets@...hat.com>, pbonzini@...hat.com,
        dmatlack@...gle.com, kvm@...r.kernel.org, shujunxue@...gle.com,
        terrytaehyun@...gle.com, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] Add Hyperv extended hypercall support in KVM

On Mon, Oct 24, 2022, Vipin Sharma wrote:
> On Mon, Oct 24, 2022 at 8:22 AM Sean Christopherson <seanjc@...gle.com> wrote:
> >
> > On Mon, Oct 24, 2022, Vitaly Kuznetsov wrote:
> > > While some 'extended' hypercalls may indeed need to be handled in KVM,
> > > there's no harm done in forwarding all unknown-to-KVM hypercalls to
> > > userspace. The only issue I envision is how userspace would discover
> > > which extended hypercalls are supported by KVM in case it (userspace)
> > > is responsible for handling the HvExtCallQueryCapabilities call, which
> > > returns the list of supported hypercalls. E.g. in case we decide to
> > > implement HvExtCallMemoryHeatHint in KVM, how are we going to
> > > communicate this to userspace?
> > >
> > > Normally, VMM discovers the availability of Hyper-V features through
> > > KVM_GET_SUPPORTED_HV_CPUID but extended hypercalls are not listed in
> > > CPUID. This can always be solved by adding new KVM CAPs, of
> > > course. Alternatively, we can add a single
> > > "KVM_CAP_HYPERV_EXT_CALL_QUERY" which will just return the list of
> > > extended hypercalls supported by KVM (which Vipin's patch adds anyway,
> > > albeit to *set* the list instead).
> >
> > AIUI, the TLFS uses a 64-bit mask to enumerate which extended hypercalls are
> > supported, so a single CAP should be a perfect fit.  And KVM can use the capability
> > to enumerate support for _and_ to allow userspace to enable in-kernel handling.  E.g.
> >
> > check():
> >         case KVM_CAP_HYPERV_EXT_CALL:
> >                 return KVM_SUPPORTED_HYPERV_EXT_CALL;
> >
> >
> > enable():
> >
> >         case KVM_CAP_HYPERV_EXT_CALL:
> >                 r = -EINVAL;
> >                 if (cap->args[0] & ~KVM_SUPPORTED_HYPERV_EXT_CALL)
> >                         break;
> >
> >                 mutex_lock(&kvm->lock);
> >                 if (!kvm->created_vcpus) {
> 
> Any reason for setting the capability only after vCPUs are created?

This only allows setting the capability _before_ vCPUs are created.  Attempting
to set the cap after vCPUs are created gets rejected with -EINVAL.  This
requirement means vCPUs don't need to take a lock to consume per-VM state, as KVM
prevents the state from changing once vCPUs are created.

> Also, in my patch I handled the ioctl in kvm_vcpu_ioctl_enable_cap() as
> all of the Hyper-V related code was there, but since this capability is
> a VM setting, not a per-vCPU setting, would kvm_vm_ioctl() be a better
> choice?

Yep!
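
FWIW, VM-scoped caps on x86 are handled in kvm_vm_ioctl_enable_cap() in
arch/x86/kvm/x86.c, i.e. the enable() sketch above would slot into its
switch statement, something like (illustrative only, the cap doesn't
exist):

        static int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
                                           struct kvm_enable_cap *cap)
        {
                ...
                case KVM_CAP_HYPERV_EXT_CALL:
                        <the enable() snippet from above>
                ...
        }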

> >                         to_kvm_hv(kvm)->ext_call = cap->args[0];
> >                         r = 0;
> >                 }
> >                 mutex_unlock(&kvm->lock);
> >
> > kvm_hv_hypercall()
> >
> >
> >         case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
> >                 if (unlikely(hc.fast)) {
> >                         ret = HV_STATUS_INVALID_PARAMETER;
> >                         break;
> >                 }
> >                 if (!(hc.code & to_kvm_hv(vcpu->kvm)->ext_call))
> 
> It won't be directly this. There will be a mapping of hc.code to the
> corresponding bit, which is then ANDed with ext_call.
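
Ya.  E.g. assuming the extended hypercall codes stay contiguous starting at
HV_EXT_CALL_QUERY_CAPABILITIES, the mapping could be as simple as (again,
just a sketch):

        /* Bit 0 == HV_EXT_CALL_QUERY_CAPABILITIES, bit 1 == the next code, etc. */
        if (!(BIT_ULL(hc.code - HV_EXT_CALL_QUERY_CAPABILITIES) &
              to_kvm_hv(vcpu->kvm)->ext_call))
                goto hypercall_userspace_exit;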
> 
> 
> >                         goto hypercall_userspace_exit;
> >
> >                 ret = kvm_hv_ext_hypercall(...)
> >                 break;
> >
> >
> > That maintains backwards compatibility with "exit on everything" as userspace
> > still needs to opt-in to having KVM handle specific hypercalls in-kernel, and it
> > provides the necessary knob for userspace to tell KVM which hypercalls should be
> > allowed, i.e. ensures KVM doesn't violate HV_EXT_CALL_QUERY_CAPABILITIES.
> 
> So, should I send a version with a KVM capability similar to the above,

No, the above was just a sketch of how we can extend support if necessary.  In
general, we try to avoid creating _any_ APIs before they are strictly required.
For uAPIs, that's pretty much a hard rule.
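
For reference, if/when such a cap does become necessary, the userspace side
would follow the standard query+enable pattern, e.g. (sketch; neither
KVM_CAP_HYPERV_EXT_CALL nor wanted_mask exists today):

        struct kvm_enable_cap cap = {
                .cap = KVM_CAP_HYPERV_EXT_CALL,
        };

        /* Mask of extended hypercalls KVM can handle in-kernel. */
        long supported = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_HYPERV_EXT_CALL);

        /* Opt in to in-kernel handling; must happen before creating vCPUs. */
        cap.args[0] = supported & wanted_mask;
        ioctl(vm_fd, KVM_ENABLE_CAP, &cap);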

> or, for now, just send the version which by default exits to userspace,
> and later, whenever the need arises, a KVM capability can be added?

This one please :-)
