Message-ID: <874klmqu2r.fsf@nanos.tec.linutronix.de>
Date: Wed, 18 Nov 2020 21:38:20 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: John Garry <john.garry@...wei.com>, gregkh@...uxfoundation.org,
rafael@...nel.org, martin.petersen@...cle.com, jejb@...ux.ibm.com
Cc: linuxarm@...wei.com, linux-scsi@...r.kernel.org,
linux-kernel@...r.kernel.org, maz@...nel.org
Subject: Re: [PATCH v2 1/3] genirq/affinity: Add irq_update_affinity_desc()
John,
On Wed, Nov 18 2020 at 11:34, John Garry wrote:
>> +/**
>> + * irq_update_affinity_desc - Update affinity management for an interrupt
>> + * @irq: The interrupt number to update
>> + * @affinity: Pointer to the affinity descriptor
>> + *
>> + * This interface can be used to configure the affinity management of
>> + * interrupts which have been allocated already.
>> + */
>> +int irq_update_affinity_desc(unsigned int irq,
>> + struct irq_affinity_desc *affinity)
>
> Just a note on the return value, in the only current callsite -
> platform_get_irqs_affinity() - we don't check the return value and
> propagate the error. This is because we don't want to fail the interrupt
> init just because of problems updating the affinity mask. So I could
> print a message to inform the user of error (at the callsite).
Well, not sure about that. During init on a platform which does not have
the issues with reservation mode, the failure cases are:
1) Interrupt does not exist. Definitely a full fail
2) Interrupt is already started up. Not a good idea on init() and
a clear fail.
3) Interrupt has already been switched to managed. Double init is not
really a good sign either.
>> + /* Requires the interrupt to be shut down */
>> + if (irqd_is_started(&desc->irq_data))
>
> We're missing the unlock here, right?
Duh yes.
>> + return -EBUSY;
>> +
>> + /* Interrupts which are already managed cannot be modified */
>> + if (irqd_is_managed(&desc->irq_data))
>
> And here, and I figure that this should be irqd_affinity_is_managed()
More duh :)
I assume you'll send a fixed variant of this.
Thanks,
tglx