Date:   Thu, 19 Mar 2020 10:27:44 +0000
From:   Marc Zyngier <maz@...nel.org>
To:     Auger Eric <eric.auger@...hat.com>
Cc:     linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
        kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        Jason Cooper <jason@...edaemon.net>,
        Robert Richter <rrichter@...vell.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Zenghui Yu <yuzenghui@...wei.com>,
        James Morse <james.morse@....com>,
        Julien Thierry <julien.thierry.kdev@...il.com>,
        Suzuki K Poulose <suzuki.poulose@....com>
Subject: Re: [PATCH v5 11/23] irqchip/gic-v4.1: Plumb get/set_irqchip_state
 SGI callbacks

Hi Eric,

On 2020-03-16 21:43, Auger Eric wrote:
> Hi Marc,
> 
> On 3/4/20 9:33 PM, Marc Zyngier wrote:
>> To implement the get/set_irqchip_state callbacks (limited to the
>> PENDING state), we have to use a particular set of hacks:
>> 
>> - Reading the pending state is done by using a pair of new redistributor
>>   registers (GICR_VSGIR, GICR_VSGIPENDR), which allow the 16 interrupts
>>   state to be retrieved.
>> - Setting the pending state is done by generating it as we'd otherwise do
>>   for a guest (writing to GITS_SGIR).
>> - Clearing the pending state is done by emiting a VSGI command with the
> emitting
>>   "clear" bit set.
>> 
>> This requires some interesting locking though:
>> - When talking to the redistributor, we must make sure that the VPE
>>   affinity doesn't change, hence taking the VPE lock.
>> - At the same time, we must ensure that nobody accesses the same
>>   redistributor's GICR_VSGI*R registers for a different VPE, which
> GICR_VSGIR
>>   would corrupt the reading of the pending bits. We thus take the
>>   per-RD spinlock. Much fun.
>> 
>> Signed-off-by: Marc Zyngier <maz@...nel.org>
>> ---
>>  drivers/irqchip/irq-gic-v3-its.c   | 73 ++++++++++++++++++++++++++++++
>>  include/linux/irqchip/arm-gic-v3.h | 14 ++++++
>>  2 files changed, 87 insertions(+)
>> 
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
>> index c93f178914ee..fb2b836c31ff 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -3962,11 +3962,84 @@ static int its_sgi_set_affinity(struct irq_data *d,
>>  	return -EINVAL;
>>  }
>> 
>> +static int its_sgi_set_irqchip_state(struct irq_data *d,
>> +				     enum irqchip_irq_state which,
>> +				     bool state)
>> +{
>> +	if (which != IRQCHIP_STATE_PENDING)
>> +		return -EINVAL;
>> +
>> +	if (state) {
>> +		struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>> +		struct its_node *its = find_4_1_its();
>> +		u64 val;
>> +
>> +		val  = FIELD_PREP(GITS_SGIR_VPEID, vpe->vpe_id);
>> +		val |= FIELD_PREP(GITS_SGIR_VINTID, d->hwirq);
>> +		writeq_relaxed(val, its->sgir_base + GITS_SGIR - SZ_128K);
>> +	} else {
>> +		its_configure_sgi(d, true);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +static int its_sgi_get_irqchip_state(struct irq_data *d,
>> +				     enum irqchip_irq_state which, bool *val)
>> +{
>> +	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
>> +	void __iomem *base;
>> +	unsigned long flags;
>> +	u32 count = 1000000;	/* 1s! */
> where does it come from? I did not find any info in the spec about this
> delay.

There is no such thing in the spec. However, these timeouts have proven
to be very useful in detecting broken HW (I'm lucky enough to have such
wonders at hand...), as well as SW bugs. The 1 second value is purely
empirical (if a whole second is not enough for things to move, they're
as good as dead).
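
For what it's worth, the 1s figure simply falls out of the loop
parameters: a budget of 1,000,000 iterations with a udelay(1) between
polls is roughly one second, ignoring the MMIO access latency itself.
As a detached sketch of the idiom (poll_until_clear() is purely
illustrative, not something in the tree):

	/* Illustrative only: the ~1s budget is iterations * per-poll delay */
	static bool poll_until_clear(void __iomem *reg, u32 busy_mask)
	{
		u32 count = 1000000;	/* 1,000,000 * udelay(1) ~= 1s */

		do {
			if (!(readl_relaxed(reg) & busy_mask))
				return true;	/* HW completed in time */
			cpu_relax();
			udelay(1);
		} while (--count);

		return false;		/* timed out: broken HW or SW bug */
	}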

>> +	u32 status;
>> +	int cpu;
>> +
>> +	if (which != IRQCHIP_STATE_PENDING)
>> +		return -EINVAL;
>> +
>> +	/*
>> +	 * Locking galore! We can race against two different events:
>> +	 *
>> +	 * - Concurrent vPE affinity change: we must make sure it cannot
>> +	 *   happen, or we'll talk to the wrong redistributor. This is
>> +	 *   identical to what happens with vLPIs.
>> +	 *
>> +	 * - Concurrent VSGIPENDR access: As it involves accessing two
>> +	 *   MMIO registers, this must be made atomic one way or another.
>> +	 */
>> +	cpu = vpe_to_cpuid_lock(vpe, &flags);
>> +	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
>> +	base = gic_data_rdist_cpu(cpu)->rd_base + SZ_128K;
>> +	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
>> +	do {
>> +		status = readl_relaxed(base + GICR_VSGIPENDR);
>> +		if (!(status & GICR_VSGIPENDR_BUSY))
>> +			goto out;
>> +
>> +		count--;
>> +		if (!count) {
>> +			pr_err_ratelimited("Unable to get SGI status\n");
>> +			goto out;
>> +		}
>> +		cpu_relax();
>> +		udelay(1);
>> +	} while (count);
>> +
>> +out:
>> +	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
>> +	vpe_to_cpuid_unlock(vpe, flags);
>> +	*val = !!(status & (1 << d->hwirq));
>> +
>> +	return 0;
> cascade an error on timeout?

Sure, that's a good idea.
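
Something along these lines is what I have in mind (just a sketch,
subject to change when respinning), propagating -ETIMEDOUT instead of
silently handing back a stale status:

	out:
		raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
		vpe_to_cpuid_unlock(vpe, flags);

		if (err)
			return err;	/* -ETIMEDOUT from the poll loop */

		*val = !!(status & (1 << d->hwirq));
		return 0;

with an `err' local set to -ETIMEDOUT next to the pr_err_ratelimited()
in the timeout path.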

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
