Message-ID: <dd9f1224b3b21ad793862406bd8855ba@kernel.org>
Date: Fri, 28 Feb 2020 19:37:48 +0000
From: Marc Zyngier <maz@...nel.org>
To: Zenghui Yu <yuzenghui@...wei.com>
Cc: linux-arm-kernel@...ts.infradead.org, kvmarm@...ts.cs.columbia.edu,
kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Jason Cooper <jason@...edaemon.net>,
Robert Richter <rrichter@...vell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Eric Auger <eric.auger@...hat.com>,
James Morse <james.morse@....com>,
Julien Thierry <julien.thierry.kdev@...il.com>,
Suzuki K Poulose <suzuki.poulose@....com>
Subject: Re: [PATCH v4 08/20] irqchip/gic-v4.1: Plumb get/set_irqchip_state SGI callbacks
On 2020-02-20 03:11, Zenghui Yu wrote:
> Hi Marc,
>
> On 2020/2/18 23:31, Marc Zyngier wrote:
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
>> index 7656b353a95f..0ed286dba827 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -144,7 +144,7 @@ struct event_lpi_map {
>> u16 *col_map;
>> irq_hw_number_t lpi_base;
>> int nr_lpis;
>> - raw_spinlock_t vlpi_lock;
>> + raw_spinlock_t map_lock;
>
> So we use map_lock to protect the mapping and affinity of both a
> device's LPIs and VLPIs, and use vpe_lock to protect the vPE's
> affinity, OK.
>
>> struct its_vm *vm;
>> struct its_vlpi_map *vlpi_maps;
>> int nr_vlpis;
>> @@ -240,15 +240,33 @@ static struct its_vlpi_map *get_vlpi_map(struct irq_data *d)
>> return NULL;
>> }
>>
>> -static int irq_to_cpuid(struct irq_data *d)
>> +static int irq_to_cpuid_lock(struct irq_data *d, unsigned long *flags)
>> {
>> - struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>> struct its_vlpi_map *map = get_vlpi_map(d);
>> + int cpu;
>>
>> - if (map)
>> - return map->vpe->col_idx;
>> + if (map) {
>> + raw_spin_lock_irqsave(&map->vpe->vpe_lock, *flags);
>> + cpu = map->vpe->col_idx;
>> + } else {
>> + struct its_device *its_dev = irq_data_get_irq_chip_data(d);
>> + raw_spin_lock_irqsave(&its_dev->event_map.map_lock, *flags);
>> + cpu = its_dev->event_map.col_map[its_get_event_id(d)];
>> + }
>>
>> - return its_dev->event_map.col_map[its_get_event_id(d)];
>> + return cpu;
>> +}
>
> This helper is correct for normal LPIs and VLPIs, but wrong for per-vPE
> IRQs (doorbells) and vSGIs: irq_data_get_irq_chip_data() gets confused
> by both of them.
Yes, I've fixed that in the current state of the tree last week. Do have
a look if you can, but it seems to survive on both the model with v4.1
and my D05.
[...]
>> - rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
>> + cpu = irq_to_cpuid_lock(d, &flags);
>> + rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
>> gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
>> wait_for_syncr(rdbase);
>> + irq_to_cpuid_unlock(d, flags);
>> } else {
>> its_vpe_send_cmd(vpe, its_send_inv);
>> }
>
> Do we really need to grab the vpe_lock for callbacks that belong to the
> same irqchip as its_vpe_set_affinity()? The IRQ core code should
> already ensure mutual exclusion among them, right?
I've been trying to think about that, but jet-lag keeps getting in the
way. I empirically think that you are right, but I need to go and check
the various code paths to be sure. Hopefully I'll have a bit more brain
space next week.
For sure this patch tries to do too many things at once...
M.
--
Jazz is not dead. It just smells funny...