Date:   Wed, 10 Nov 2021 13:20:32 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Sunil Muthuswamy <sunilmut@...ux.microsoft.com>
Cc:     kys@...rosoft.com, haiyangz@...rosoft.com, sthemmin@...rosoft.com,
        wei.liu@...nel.org, decui@...rosoft.com, tglx@...utronix.de,
        mingo@...hat.com, bp@...en8.de, hpa@...or.com,
        lorenzo.pieralisi@....com, robh@...nel.org, kw@...ux.com,
        bhelgaas@...gle.com, arnd@...db.de, sunilmut@...rosoft.com,
        x86@...nel.org, linux-kernel@...r.kernel.org,
        linux-hyperv@...r.kernel.org, linux-pci@...r.kernel.org,
        linux-arch@...r.kernel.org
Subject: Re: [PATCH v4 2/2] arm64: PCI: hv: Add support for Hyper-V vPCI

On Tue, 09 Nov 2021 22:14:20 +0000,
Sunil Muthuswamy <sunilmut@...ux.microsoft.com> wrote:
> 
> From: Sunil Muthuswamy <sunilmut@...rosoft.com>
> 
> Add support for Hyper-V vPCI for arm64 by implementing the arch specific
> interfaces. Introduce an IRQ domain and chip specific to Hyper-v vPCI that
> is based on SPIs. The IRQ domain parents itself to the arch GIC IRQ domain
> for basic vector management.
> 
> Signed-off-by: Sunil Muthuswamy <sunilmut@...rosoft.com>
> ---
> In v2, v3 & v4:
>  Changes are described in the cover letter.
> 
>  arch/arm64/include/asm/hyperv-tlfs.h |   9 ++
>  drivers/pci/Kconfig                  |   2 +-
>  drivers/pci/controller/Kconfig       |   2 +-
>  drivers/pci/controller/pci-hyperv.c  | 207 ++++++++++++++++++++++++++-
>  4 files changed, 217 insertions(+), 3 deletions(-)

[...]

> +static int hv_pci_vec_irq_domain_activate(struct irq_domain *domain,
> +					  struct irq_data *irqd, bool reserve)
> +{
> +	static int cpu;
> +
> +	/*
> +	 * Pick a cpu using round-robin as the irq affinity that can be
> +	 * temporarily used for composing MSI from the hypervisor. GIC
> +	 * will eventually set the right affinity for the irq and the
> +	 * 'unmask' will retarget the interrupt to that cpu.
> +	 */
> +	if (cpu >= cpumask_last(cpu_online_mask))
> +		cpu = 0;
> +	cpu = cpumask_next(cpu, cpu_online_mask);
> +	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));

The mind boggles.

Let's imagine a single-CPU machine. cpu_online_mask only has bit 0 set,
and nr_cpumask_bits is 1. This is the first run, so the static 'cpu' is 0:

	cpu = cpumask_next(cpu, cpu_online_mask);

cpu is now set to 1, which is not a valid CPU number, but is a valid
return value: being equal to nr_cpumask_bits, it signals that there is
no next CPU. cpumask_of(cpu) will then diligently return crap, which
you carefully store into the irq descriptor. The IRQ subsystem thanks
you.
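
For reference, handling the wrap correctly is trivial: cpumask_next()
returns a value >= nr_cpu_ids when it runs off the end of the mask, so
that is the condition to check. A sketch (illustrative only, not the
submitted patch, not even compile-tested):

	cpu = cpumask_next(cpu, cpu_online_mask);
	if (cpu >= nr_cpu_ids)	/* ran past the last online CPU, wrap */
		cpu = cpumask_first(cpu_online_mask);
	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));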

The same reasoning applies to any number of CPUs, and you obviously
never checked what any of this does :-(. As to what the behaviour is
when multiple CPUs run this function in parallel, let's not even
bother (locking is overrated).
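
And if a shared round-robin counter really had to stay, its update
would at least need serialising, since activate can run on several
CPUs at once. Something along these lines (the lock name is made up
for illustration):

	static DEFINE_RAW_SPINLOCK(hv_msi_cpu_lock);	/* hypothetical */
	unsigned long flags;

	raw_spin_lock_irqsave(&hv_msi_cpu_lock, flags);
	/* advance the static 'cpu' counter and program the affinity */
	raw_spin_unlock_irqrestore(&hv_msi_cpu_lock, flags);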

Logic and concurrency issues aside, why do you even bother setting
some round-robin affinity if all you need is to set *something* so
that a hypervisor message can be composed? Why not use the first
online CPU? At least it will be correct.
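
Something like this would be enough (a sketch of what I mean, again
not tested):

	/*
	 * Any online CPU will do for composing the MSI message from
	 * the hypervisor; the GIC retargets the interrupt on unmask
	 * anyway.
	 */
	irq_data_update_effective_affinity(irqd,
				cpumask_of(cpumask_first(cpu_online_mask)));

	return 0;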

	M.

-- 
Without deviation from the norm, progress is not possible.
