Message-ID: <CAHVum0fZZwsCEL-bMM2phfiukSF0_M_-DiTEhYg562FYSnf6_g@mail.gmail.com>
Date: Thu, 24 Nov 2022 01:03:24 -0800
From: Vipin Sharma <vipinsh@...gle.com>
To: Vitaly Kuznetsov <vkuznets@...hat.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
seanjc@...gle.com, pbonzini@...hat.com, dmatlack@...gle.com
Subject: Re: [PATCH v2 2/6] KVM: x86: hyper-v: Add extended hypercall support
in Hyper-v
On Thu, Nov 24, 2022 at 12:36 AM Vitaly Kuznetsov <vkuznets@...hat.com> wrote:
>
> Vipin Sharma <vipinsh@...gle.com> writes:
>
> > On Tue, Nov 22, 2022 at 8:29 AM Vitaly Kuznetsov <vkuznets@...hat.com> wrote:
> >>
> >> Vipin Sharma <vipinsh@...gle.com> writes:
> >>
> >> > +/*
> >> > + * The TLFS carves out 64 possible extended hypercalls, numbered sequentially
> >> > + * after the base capabilities extended hypercall.
> >> > + */
> >> > +#define HV_EXT_CALL_MAX (HV_EXT_CALL_QUERY_CAPABILITIES + 64)
> >> > +
> >>
> >> First, I thought there was an off-by-one here (and that it should be
> >> '63'), but then I checked the TLFS and figured out that the limit
> >> comes from HvExtCallQueryCapabilities's response, which doesn't
> >> include itself (0x8001) in the mask. This means it can encode
> >>
> >> 0x8002 == bit0
> >> 0x8003 == bit1
> >> ..
> >> 0x8041 == bit63
> >>
> >> so indeed, the last one supported is 0x8041 == 0x8001 + 64
> >>
> >> maybe it's worth extending the comment on where '64' comes from.
> >>
> >
> > Yeah, I will expand comments.
> >
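
Something along these lines, maybe (just a sketch of the expanded
comment, not final wording):

/*
 * The TLFS carves out 64 possible extended hypercalls, numbered
 * sequentially after the base capabilities extended hypercall
 * (HvExtCallQueryCapabilities, 0x8001). The capabilities mask returned
 * by that call doesn't include 0x8001 itself, i.e. bit 0 corresponds
 * to 0x8002 and bit 63 to 0x8041, so the last possible extended
 * hypercall is HV_EXT_CALL_QUERY_CAPABILITIES + 64.
 */
#define HV_EXT_CALL_MAX (HV_EXT_CALL_QUERY_CAPABILITIES + 64)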
> >> > static void stimer_mark_pending(struct kvm_vcpu_hv_stimer *stimer,
> >> > bool vcpu_kick);
> >> >
> >> > @@ -2411,6 +2417,9 @@ static bool hv_check_hypercall_access(struct kvm_vcpu_hv *hv_vcpu, u16 code)
> >> > case HVCALL_SEND_IPI:
> >> > return hv_vcpu->cpuid_cache.enlightenments_eax &
> >> > HV_X64_CLUSTER_IPI_RECOMMENDED;
> >> > + case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
> >> > + return hv_vcpu->cpuid_cache.features_ebx &
> >> > + HV_ENABLE_EXTENDED_HYPERCALLS;
> >> > default:
> >> > break;
> >> > }
> >> > @@ -2564,6 +2573,12 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
> >> > }
> >> > goto hypercall_userspace_exit;
> >> > }
> >> > + case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
> >> > + if (unlikely(hc.fast)) {
> >> > + ret = HV_STATUS_INVALID_PARAMETER;
> >>
> >> I wasn't able to find any statement in the TLFS on whether extended
> >> hypercalls can be 'fast'; I can imagine e.g. MemoryHeatHintAsync
> >> using it. Unfortunately, our userspace exit will have to be modified
> >> to handle such a case. This can stay for the time being, I guess...
> >>
> >
> > I agree the TLFS doesn't state anything about "fast" extended
> > hypercalls, but nothing prevents some future call from being "fast".
> > I think this condition should also be handled by userspace, as it is
> > handling everything else.
> >
> > I will remove it in the next version of the patch. I don't see any
> > value in the verification here.
>
> The problem is that we don't currently pass the 'fast' flag to
> userspace, let alone the XMM registers. This means userspace won't be
> able to handle fast hypercalls anyway; I guess it's better to keep
> your check but add a comment saying that it's an implementation
> shortcoming and not a TLFS requirement.
>
I think "fast" flag gets passed to the userspace via:
vcpu->run->hyperv.u.hcall.input = hc.param;
Yeah, XMM registers won't be passed, that will require userspace API change.
I will keep the check and explain in the comments.
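
Something like this, perhaps (an untested sketch of the next version;
the exact comment wording may change):

	case HV_EXT_CALL_QUERY_CAPABILITIES ... HV_EXT_CALL_MAX:
		/*
		 * The TLFS doesn't forbid 'fast' extended hypercalls,
		 * but KVM's hypercall userspace exit doesn't carry the
		 * XMM register state, so userspace can't handle them.
		 * Reject 'fast' here; this is a KVM implementation
		 * shortcoming, not a TLFS requirement.
		 */
		if (unlikely(hc.fast)) {
			ret = HV_STATUS_INVALID_PARAMETER;
			break;
		}
		goto hypercall_userspace_exit;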
>
> >
> >> > + break;
> >> > + }
> >> > + goto hypercall_userspace_exit;
> >> > default:
> >> > ret = HV_STATUS_INVALID_HYPERCALL_CODE;
> >> > break;
> >> > @@ -2722,6 +2737,7 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
> >> >
> >> > ent->ebx |= HV_POST_MESSAGES;
> >> > ent->ebx |= HV_SIGNAL_EVENTS;
> >> > + ent->ebx |= HV_ENABLE_EXTENDED_HYPERCALLS;
> >> >
> >> > ent->edx |= HV_X64_HYPERCALL_XMM_INPUT_AVAILABLE;
> >> > ent->edx |= HV_FEATURE_FREQUENCY_MSRS_AVAILABLE;
> >>
> >> Reviewed-by: Vitaly Kuznetsov <vkuznets@...hat.com>
> >>
> >> --
> >> Vitaly
> >>
> >
>
> --
> Vitaly
>