Message-ID: <20211006102450.4fkn46yqfbbh7i6y@gator.home>
Date: Wed, 6 Oct 2021 12:24:50 +0200
From: Andrew Jones <drjones@...hat.com>
To: Marc Zyngier <maz@...nel.org>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org, will@...nel.org,
qperret@...gle.com, dbrazdil@...gle.com,
Steven Price <steven.price@....com>,
Fuad Tabba <tabba@...gle.com>,
Srivatsa Vaddagiri <vatsa@...eaurora.org>,
Shanker R Donthineni <sdonthineni@...dia.com>,
James Morse <james.morse@....com>,
Suzuki K Poulose <suzuki.poulose@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
kernel-team@...roid.com
Subject: Re: [PATCH v2 01/16] KVM: arm64: Generalise VM features into a set
of flags
On Mon, Oct 04, 2021 at 06:48:34PM +0100, Marc Zyngier wrote:
> We currently deal with a set of booleans for VM features,
> while they could be better represented as a set of flags
> contained in an unsigned long, similarly to what we are
> doing on the CPU side.
>
> Signed-off-by: Marc Zyngier <maz@...nel.org>
> ---
> arch/arm64/include/asm/kvm_host.h | 12 +++++++-----
> arch/arm64/kvm/arm.c | 5 +++--
> arch/arm64/kvm/mmio.c | 3 ++-
> 3 files changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index f8be56d5342b..f63ca8fb4e58 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -122,7 +122,10 @@ struct kvm_arch {
> * should) opt in to this feature if KVM_CAP_ARM_NISV_TO_USER is
> * supported.
> */
> - bool return_nisv_io_abort_to_user;
> +#define KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER 0
> + /* Memory Tagging Extension enabled for the guest */
> +#define KVM_ARCH_FLAG_MTE_ENABLED 1
> + unsigned long flags;
>
> /*
> * VM-wide PMU filter, implemented as a bitmap and big enough for
> @@ -133,9 +136,6 @@ struct kvm_arch {
>
> u8 pfr0_csv2;
> u8 pfr0_csv3;
> -
> - /* Memory Tagging Extension enabled for the guest */
> - bool mte_enabled;
> };
>
> struct kvm_vcpu_fault_info {
> @@ -786,7 +786,9 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
> #define kvm_arm_vcpu_sve_finalized(vcpu) \
> ((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
>
> -#define kvm_has_mte(kvm) (system_supports_mte() && (kvm)->arch.mte_enabled)
> +#define kvm_has_mte(kvm) \
> + (system_supports_mte() && \
> + test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
> #define kvm_vcpu_has_pmu(vcpu) \
> (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index fe102cd2e518..ed9c89ec0b4f 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -89,7 +89,8 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> switch (cap->cap) {
> case KVM_CAP_ARM_NISV_TO_USER:
> r = 0;
> - kvm->arch.return_nisv_io_abort_to_user = true;
> + set_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
> + &kvm->arch.flags);
> break;
> case KVM_CAP_ARM_MTE:
> mutex_lock(&kvm->lock);
> @@ -97,7 +98,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
> r = -EINVAL;
> } else {
> r = 0;
> - kvm->arch.mte_enabled = true;
> + set_bit(KVM_ARCH_FLAG_MTE_ENABLED, &kvm->arch.flags);
> }
> mutex_unlock(&kvm->lock);
> break;
> diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
> index 3e2d8ba11a02..3dd38a151d2a 100644
> --- a/arch/arm64/kvm/mmio.c
> +++ b/arch/arm64/kvm/mmio.c
> @@ -135,7 +135,8 @@ int io_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
> * volunteered to do so, and bail out otherwise.
> */
> if (!kvm_vcpu_dabt_isvalid(vcpu)) {
> - if (vcpu->kvm->arch.return_nisv_io_abort_to_user) {
> + if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER,
> + &vcpu->kvm->arch.flags)) {
> run->exit_reason = KVM_EXIT_ARM_NISV;
> run->arm_nisv.esr_iss = kvm_vcpu_dabt_iss_nisv_sanitized(vcpu);
> run->arm_nisv.fault_ipa = fault_ipa;
> --
> 2.30.2
>
Maybe a kvm_arm_has_feature(struct kvm *kvm) helper would be nice to avoid
references like &vcpu->kvm->arch.flags getting scattered around.
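Something along these lines is what I had in mind. Just a rough sketch: the
second parameter and the test_bit() body are my assumption of how it could
look, not something taken from the patch:

/* Sketch only: the flag parameter and body are assumptions, not from the patch */
static inline bool kvm_arm_has_feature(struct kvm *kvm, unsigned long flag)
{
	return test_bit(flag, &kvm->arch.flags);
}

Call sites such as the one in io_mem_abort() could then become

	if (kvm_arm_has_feature(vcpu->kvm,
				KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER)) {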
Reviewed-by: Andrew Jones <drjones@...hat.com>
Thanks,
drew