Message-ID: <86o7iidzwb.wl-maz@kernel.org>
Date:   Mon, 04 Sep 2023 10:57:24 +0100
From:   Marc Zyngier <maz@...nel.org>
To:     Xu Zhao <zhaoxu.35@...edance.com>
Cc:     oliver.upton@...ux.dev, james.morse@....com,
        linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.linux.dev,
        linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [RFC v2] KVM: arm/arm64: optimize vSGI injection performance

On Fri, 25 Aug 2023 02:58:11 +0100,
Xu Zhao <zhaoxu.35@...edance.com> wrote:
> 
> In a VM with more than 16 vCPUs (i.e. with multiple aff0 groups), if
> the target vCPU of a vSGI lies beyond the 16th vCPU, KVM has to
> iterate from vCPU0 until the target vCPU is found. However, the
> affinity routing information carried in the ICC_SGI* registers allows
> KVM to bypass the other aff0 groups and iterate only over the aff0
> group in which the target vCPU is located. This reduces the maximum
> number of iterations from the total number of vCPUs to 16, or even 8.
> 
> This patch aims to optimize vSGI injection performance when the
> target lies beyond the 16th vCPU, in VMs with more than 16 vCPUs.

The problem is that you optimise it for the default case, and break it
for *everything* else.

[...]

> The improvement can be observed on a VM with 32 cores. When injecting
> an SGI into the first vCPU of the first aff0 group, performance stays
> the same as before (the number of iterations is still 1), but there
> is an improvement when injecting into the last vCPU of that group.
> When injecting a vSGI into the first or last vCPU of the second aff0
> group, the improvement is significant because, unlike the original
> algorithm, the lookup skips over the first aff0 group entirely.
> 
> BTW, the improvement can also be observed with the micro-bench test
> in kvm-unit-tests, with a small modification: add initialization for
> 32 cores, then change the IPI target CPU in the ipi_exec() function
> (a sketch follows below).
> 
> The more vCPUs a VM has, the greater the improvement when injecting
> a vSGI into a vCPU in the last aff0 group.
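
As a rough illustration of the modification described above, here is a
hypothetical sketch against kvm-unit-tests' arm/micro-bench.c. The
names (ipi_exec, ipi_received, IPI_IRQ, gic_ipi_send_single) follow
the kvm-unit-tests tree at the time of writing and may differ in
yours; treat this as a sketch, not a drop-in patch.

	/* arm/micro-bench.c (sketch): aim the IPI benchmark at a vCPU
	 * in the second aff0 group instead of CPU 1, and run the test
	 * with 32 vCPUs (e.g. "-smp 32") so that CPU exists.
	 */
	static void ipi_exec(void)
	{
		unsigned tries = 1 << 28;

		ipi_received = false;

		/* Was CPU 1 (first aff0 group); CPU 31 sits in the
		 * second aff0 group (Aff1 = 1, Aff0 = 15). */
		gic_ipi_send_single(IPI_IRQ, 31);

		while (!ipi_received && tries--)
			cpu_relax();

		assert_msg(ipi_received, "failed to receive IPI");
	}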
> 
> Signed-off-by: Xu Zhao <zhaoxu.35@...edance.com>
> ---
>  arch/arm64/kvm/vgic/vgic-mmio-v3.c | 152 ++++++++++++++---------------
>  include/linux/kvm_host.h           |   5 +
>  2 files changed, 78 insertions(+), 79 deletions(-)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> index 188d2187eede..af8f2d6b18c3 100644
> --- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> +++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
> @@ -1013,44 +1013,64 @@ int vgic_v3_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr)
>  
>  	return 0;
>  }
> +
>  /*
> - * Compare a given affinity (level 1-3 and a level 0 mask, from the SGI
> - * generation register ICC_SGI1R_EL1) with a given VCPU.
> - * If the VCPU's MPIDR matches, return the level0 affinity, otherwise
> - * return -1.
> + * Get affinity routing index from ICC_SGI_* register
> + * format:
> + *      aff3       aff2       aff1     aff0
> + * |- 8 bits -|- 8 bits -|- 8 bits -|- 4 bits -|
>   */
> -static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu)
> +static unsigned long sgi_to_affinity(unsigned long reg)
>  {
> -	unsigned long affinity;
> -	int level0;
> +	u64 aff;
>  
> -	/*
> -	 * Split the current VCPU's MPIDR into affinity level 0 and the
> -	 * rest as this is what we have to compare against.
> -	 */
> -	affinity = kvm_vcpu_get_mpidr_aff(vcpu);
> -	level0 = MPIDR_AFFINITY_LEVEL(affinity, 0);
> -	affinity &= ~MPIDR_LEVEL_MASK;
> +	/* aff3 - aff1 */
> +	aff = (((reg) & ICC_SGI1R_AFFINITY_3_MASK) >> ICC_SGI1R_AFFINITY_3_SHIFT) << 16 |
> +		(((reg) & ICC_SGI1R_AFFINITY_2_MASK) >> ICC_SGI1R_AFFINITY_2_SHIFT) << 8 |
> +		(((reg) & ICC_SGI1R_AFFINITY_1_MASK) >> ICC_SGI1R_AFFINITY_1_SHIFT);

Here, you assume that you can directly map a vcpu index to an
affinity. It would be awesome if that was the case. However, this is
only valid at reset time, and userspace is perfectly allowed to change
this mapping by writing to the vcpu's MPIDR_EL1.

So this won't work at all if userspace wants to set its own specific
CPU numbering.
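
For reference, the reset-time mapping in question lives in
reset_mpidr() in arch/arm64/kvm/sys_regs.c; the sketch below is
lightly paraphrased from it (check your tree for the authoritative
version). Userspace can overwrite MPIDR_EL1 before the VM runs (via
KVM_SET_ONE_REG), at which point any assumed vcpu_id-to-affinity
identity goes out the window.

	/* Default vcpu_id -> MPIDR_EL1 mapping at vCPU reset (a sketch
	 * of arch/arm64/kvm/sys_regs.c:reset_mpidr()). Only the low 4
	 * bits of vcpu_id land in Aff0 (16 CPUs per aff0 group,
	 * matching the ICC_SGIxR TargetList width); the rest spill
	 * into Aff1/Aff2. This only holds until userspace rewrites
	 * MPIDR_EL1.
	 */
	static u64 reset_mpidr(struct kvm_vcpu *vcpu,
			       const struct sys_reg_desc *r)
	{
		u64 mpidr;

		mpidr = (vcpu->vcpu_id & 0x0f) << MPIDR_LEVEL_SHIFT(0);
		mpidr |= ((vcpu->vcpu_id >> 4) & 0xff) << MPIDR_LEVEL_SHIFT(1);
		mpidr |= ((vcpu->vcpu_id >> 12) & 0xff) << MPIDR_LEVEL_SHIFT(2);
		mpidr |= (1ULL << 31);	/* bit 31 is RES1 */

		vcpu_write_sys_reg(vcpu, mpidr, MPIDR_EL1);
		return mpidr;
	}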

	M.

-- 
Without deviation from the norm, progress is not possible.
