Message-ID: <b6fa5a94-fce9-dafa-f470-5ffb196ed666@huawei.com>
Date: Mon, 28 Jan 2019 21:59:39 +0800
From: "liwei (GF)" <liwei391@...wei.com>
To: Julien Thierry <julien.thierry@....com>,
<linux-arm-kernel@...ts.infradead.org>, <marc.zyngier@....com>
CC: <linux-kernel@...r.kernel.org>, <daniel.thompson@...aro.org>,
<joel@...lfernandes.org>, <christoffer.dall@....com>,
<james.morse@....com>, <catalin.marinas@....com>,
<will.deacon@....com>, <mark.rutland@....com>,
Thomas Gleixner <tglx@...utronix.de>,
Jason Cooper <jason@...edaemon.net>,
libin 00196512 <huawei.libin@...wei.com>,
guohanjun 00470146 <guohanjun@...wei.com>
Subject: Re: [PATCH v9 22/26] irqchip/gic-v3: Allow interrupts to be set as
pseudo-NMI
Hello Julien & Marc,
Thanks for your reply; I had indeed misunderstood the usage of ready_percpu_nmi() and
teardown_percpu_nmi().
I think it would be nice to add a note about this in the comment of request_percpu_nmi().
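Something along these lines in the comment of request_percpu_nmi() would have helped me (just a
rough suggestion, based on my understanding of this thread):

 * The interrupt line is not enabled by the request. Each CPU that is
 * meant to receive the NMI has to call ready_percpu_nmi() and then
 * enable_percpu_irq() on itself, and tear it down again with
 * disable_percpu_irq() followed by teardown_percpu_nmi(), also on
 * that CPU.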
Regards,
Wei Li
On 2019/1/28 16:57, Julien Thierry wrote:
> Hi,
>
> On 26/01/2019 10:19, liwei (GF) wrote:
>>
>>
>> On 2019/1/21 23:33, Julien Thierry wrote:
>>> Implement NMI callbacks for GICv3 irqchip. Install NMI safe handlers
>>> when setting up interrupt line as NMI.
>>>
>>> Only SPIs and PPIs are allowed to be set up as NMI.
>>>
>>> Signed-off-by: Julien Thierry <julien.thierry@....com>
>>> Cc: Thomas Gleixner <tglx@...utronix.de>
>>> Cc: Jason Cooper <jason@...edaemon.net>
>>> Cc: Marc Zyngier <marc.zyngier@....com>
>>> ---
>>> drivers/irqchip/irq-gic-v3.c | 84 ++++++++++++++++++++++++++++++++++++++++++++
>>> 1 file changed, 84 insertions(+)
>>>
>>> diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
>>> index 4df1e94..447d8ab 100644
>>> --- a/drivers/irqchip/irq-gic-v3.c
>>> +++ b/drivers/irqchip/irq-gic-v3.c
>> (snip)
>>>
>>> +static int gic_irq_nmi_setup(struct irq_data *d)
>>> +{
>>> +	struct irq_desc *desc = irq_to_desc(d->irq);
>>> +
>>> +	if (!gic_supports_nmi())
>>> +		return -EINVAL;
>>> +
>>> +	if (gic_peek_irq(d, GICD_ISENABLER)) {
>>> +		pr_err("Cannot set NMI property of enabled IRQ %u\n", d->irq);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	/*
>>> +	 * A secondary irq_chip should be in charge of LPI request,
>>> +	 * it should not be possible to get there
>>> +	 */
>>> +	if (WARN_ON(gic_irq(d) >= 8192))
>>> +		return -EINVAL;
>>> +
>>> +	/* desc lock should already be held */
>>> +	if (gic_irq(d) < 32) {
>>> +		/* Setting up PPI as NMI, only switch handler for first NMI */
>>> +		if (!refcount_inc_not_zero(&ppi_nmi_refs[gic_irq(d) - 16])) {
>>> +			refcount_set(&ppi_nmi_refs[gic_irq(d) - 16], 1);
>>> +			desc->handle_irq = handle_percpu_devid_fasteoi_nmi;
>>> +		}
>>> +	} else {
>>> +		desc->handle_irq = handle_fasteoi_nmi;
>>> +	}
>>> +
>>> +	gic_set_irq_prio(gic_irq(d), gic_dist_base(d), GICD_INT_NMI_PRI);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static void gic_irq_nmi_teardown(struct irq_data *d)
>>> +{
>>> +	struct irq_desc *desc = irq_to_desc(d->irq);
>>> +
>>> +	if (WARN_ON(!gic_supports_nmi()))
>>> +		return;
>>> +
>>> +	if (gic_peek_irq(d, GICD_ISENABLER)) {
>>> +		pr_err("Cannot set NMI property of enabled IRQ %u\n", d->irq);
>>> +		return;
>>> +	}
>>> +
>>> +	/*
>>> +	 * A secondary irq_chip should be in charge of LPI request,
>>> +	 * it should not be possible to get there
>>> +	 */
>>> +	if (WARN_ON(gic_irq(d) >= 8192))
>>> +		return;
>>> +
>>> +	/* desc lock should already be held */
>>> +	if (gic_irq(d) < 32) {
>>> +		/* Tearing down NMI, only switch handler for last NMI */
>>> +		if (refcount_dec_and_test(&ppi_nmi_refs[gic_irq(d) - 16]))
>>> +			desc->handle_irq = handle_percpu_devid_irq;
>>> +	} else {
>>> +		desc->handle_irq = handle_fasteoi_irq;
>>> +	}
>>> +
>>> +	gic_set_irq_prio(gic_irq(d), gic_dist_base(d), GICD_INT_DEF_PRI);
>>> +}
>>> +
>>
>> Hello Julien,
>> I am afraid the setting of priority is not correct here. If the irq is in redistributor(gic_irq(d) < 32),
>> we should set the priority on each cpu, while we just set the priority on the current cpu here.
>
> As Marc stated, to work with PPIs, the core IRQ API provides a set of
> *_percpu_irq functions (request, enable, disable...).
>
> The current idea is that the driver is in charge of calling
> ready_percpu_nmi() before enabling the interrupt on the correct CPU, in
> the same way that the driver is in charge of calling enable_percpu_irq()
> and disable_percpu_irq() on the correct CPU.
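
For my own reference, here is a minimal sketch of how I now understand the intended flow for a PPI
(function names as used in this series; the driver, handler, hotplug hooks and per-CPU dev_id below
are made-up placeholders, not code from the patches):

#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(int, my_nmi_dev);		/* placeholder per-CPU dev_id */

static irqreturn_t my_nmi_handler(int irq, void *dev_id)
{
	/* NMI context: no locking, no sleeping */
	return IRQ_HANDLED;
}

/* Once, from normal process context */
static int my_driver_probe(unsigned int ppi)
{
	return request_percpu_nmi(ppi, my_nmi_handler, "my-nmi", &my_nmi_dev);
}

/* On each CPU that should receive the NMI, e.g. from a hotplug "starting" callback */
static int my_driver_starting_cpu(unsigned int ppi)
{
	int ret;

	ret = ready_percpu_nmi(ppi);	/* programs the NMI priority on this CPU */
	if (ret)
		return ret;
	enable_percpu_irq(ppi, IRQ_TYPE_NONE);
	return 0;
}

/* On each CPU, before the line is freed */
static void my_driver_dying_cpu(unsigned int ppi)
{
	disable_percpu_irq(ppi);
	teardown_percpu_nmi(ppi);
}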
>
>
>> static inline void __iomem *gic_dist_base(struct irq_data *d)
>> {
>> 	if (gic_irq_in_rdist(d))	/* SGI+PPI -> SGI_base for this CPU */
>> 		return gic_data_rdist_sgi_base();
>>
>> 	if (d->hwirq <= 1023)		/* SPI -> dist_base */
>> 		return gic_data.dist_base;
>>
>> 	return NULL;
>> }
>>
>> I tried to add an smp_call_function() here, but the kernel reported a warning, since interrupts are already
>> disabled by raw_spin_lock_irqsave() in request_nmi() or ready_percpu_nmi().
>> [ 2.137262] Call trace:
>> [ 2.137265] smp_call_function_many+0xf8/0x3a0
>> [ 2.137267] smp_call_function+0x40/0x58
>> [ 2.137271] gic_irq_nmi_setup+0xe8/0x118
>> [ 2.137275] ready_percpu_nmi+0x6c/0xf0
>> [ 2.137279] armpmu_request_irq+0x228/0x250
>
> Your issue lies here: if your PMU IRQ is a PPI, you shouldn't be calling
> ready_percpu_nmi() at the time you request it, but rather somewhere like
> arm_perf_starting_cpu().
>
> And you wouldn't need the smp_call to set the priority.
>
> Hope this helps.
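
If I follow, for a PPI-based PMU that would mean leaving just the request_percpu_nmi() in
armpmu_request_irq() and doing the per-CPU part in the startup path. A rough, hypothetical sketch
of that startup step (not the actual arm_pmu code; it relies on the same headers as the sketch
above, and leaves out the fallback handling a real driver would want):

/* Hypothetical per-CPU startup step for a PPI-based PMU NMI */
static int pmu_starting_cpu_sketch(unsigned int irq)
{
	if (irq_is_percpu_devid(irq)) {
		/*
		 * This runs on the target CPU, so gic_irq_nmi_setup() programs
		 * the NMI priority on the local redistributor - no
		 * smp_call_function() needed.
		 */
		int ret = ready_percpu_nmi(irq);

		if (ret)
			return ret;
		enable_percpu_irq(irq, IRQ_TYPE_NONE);
	} else {
		enable_irq(irq);
	}
	return 0;
}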
>
>> [ 2.137281] arm_pmu_acpi_init+0x150/0x2f0
>> [ 2.137284] do_one_initcall+0x54/0x218
>> [ 2.137289] kernel_init_freeable+0x230/0x354
>> [ 2.137293] kernel_init+0x18/0x118
>> [ 2.137295] ret_from_fork+0x10/0x18
>>
>
> Thanks,
>