Message-ID: <1947070.CQOukoFCf9@workhorse>
Date: Mon, 05 Jan 2026 14:17:55 +0100
From: Nicolas Frattaroli <nicolas.frattaroli@...labora.com>
To: Boris Brezillon <boris.brezillon@...labora.com>
Cc: Steven Price <steven.price@....com>, Liviu Dudau <liviu.dudau@....com>,
 Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>,
 Maxime Ripard <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>,
 David Airlie <airlied@...il.com>, Simona Vetter <simona@...ll.ch>,
 Chia-I Wu <olvaffe@...il.com>, Karunika Choo <karunika.choo@....com>,
 kernel@...labora.com, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v6 1/3] drm/panthor: Extend IRQ helpers for mask
 modification/restoration

On Monday, 5 January 2026 12:12:20 Central European Standard Time Boris Brezillon wrote:
> On Tue, 23 Dec 2025 17:24:58 +0100
> Nicolas Frattaroli <nicolas.frattaroli@...labora.com> wrote:
> 
> > The current IRQ helpers do not provide mutual exclusion covering the
> > entire transaction of reading the mask member and writing the mask
> > register.
> > 
> > This makes it hard, if not impossible, to implement mask modification
> > helpers that may change one of these outside the normal
> > suspend/resume/isr code paths.
> > 
> > Add a spinlock to struct panthor_irq that protects both the mask member
> > and register. Acquire it in all code paths that access these, but drop
> > it before processing the threaded handler function. Then, add the
> > aforementioned new helpers: mask_enable, mask_disable, and
> > resume_restore. The first two work by ORing bits into the mask and by
> > clearing them (ANDing with the complement), and the latter relies on the
> > new behaviour that panthor_irq::mask is no longer set to 0 on suspend.
> > 
> > panthor_irq::suspended remains an atomic, as it's necessarily written to
> > outside the mask_lock in the suspend path.
> > 
> > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@...labora.com>
> > ---
> >  drivers/gpu/drm/panthor/panthor_device.h | 68 +++++++++++++++++++++++++++-----
> >  1 file changed, 58 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
> > index f35e52b9546a..bf554cf376fb 100644
> > --- a/drivers/gpu/drm/panthor/panthor_device.h
> > +++ b/drivers/gpu/drm/panthor/panthor_device.h
> > @@ -73,11 +73,14 @@ struct panthor_irq {
> >  	/** @irq: IRQ number. */
> >  	int irq;
> >  
> > -	/** @mask: Current mask being applied to xxx_INT_MASK. */
> > +	/** @mask: Values to write to xxx_INT_MASK if active. */
> >  	u32 mask;
> >  
> >  	/** @suspended: Set to true when the IRQ is suspended. */
> >  	atomic_t suspended;
> > +
> > +	/** @mask_lock: protects modifications to _INT_MASK and @mask */
> > +	spinlock_t mask_lock;
> >  };
> >  
> >  /**
> > @@ -410,6 +413,8 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data)
> >  	struct panthor_irq *pirq = data;							\
> >  	struct panthor_device *ptdev = pirq->ptdev;						\
> >  												\
> > +	guard(spinlock_irqsave)(&pirq->mask_lock);						\
> > +												\
> >  	if (atomic_read(&pirq->suspended))							\
> >  		return IRQ_NONE;								\
> >  	if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT))					\
> > @@ -424,9 +429,14 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
> >  	struct panthor_irq *pirq = data;							\
> >  	struct panthor_device *ptdev = pirq->ptdev;						\
> >  	irqreturn_t ret = IRQ_NONE;								\
> > +	u32 mask;										\
> > +												\
> > +	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {					\
> > +		mask = pirq->mask;								\
> > +	}											\
> >  												\
> >  	while (true) {										\
> > -		u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask;	\
> > +		u32 status = (gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & mask);		\
> >  												\
> >  		if (!status)									\
> >  			break;									\
> > @@ -435,26 +445,44 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
> >  		ret = IRQ_HANDLED;								\
> >  	}											\
> >  												\
> > -	if (!atomic_read(&pirq->suspended))							\
> > -		gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask);			\
> > +	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {					\
> > +		if (!atomic_read(&pirq->suspended)) {						\
> > +			/* Only restore the bits that were used and are still enabled */	\
> > +			gpu_write(ptdev, __reg_prefix ## _INT_MASK,				\
> > +				  gpu_read(ptdev, __reg_prefix ## _INT_MASK) |			\
> > +				  (mask & pirq->mask));						\
> > +		}										\
> > +	}											\
> >  												\
> >  	return ret;										\
> >  }												\
> >  												\
> >  static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq)			\
> >  {												\
> > -	pirq->mask = 0;										\
> > -	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0);					\
> > +	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {					\
> > +		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0);				\
> > +	}											\
> >  	synchronize_irq(pirq->irq);								\
> >  	atomic_set(&pirq->suspended, true);							\
> >  }												\
> >  												\
> >  static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask)	\
> 
> If pirq->mask is encoding the user-selected mask, there's no point
> passing it as an argument to _irq_resume().

There is. I don't want to refactor all of panthor_mmu and what it
does with the mask. It needs to set-mask-and-resume in a race-free
manner, and that's not possible unless we keep this API around, or
do some heavy refactoring. Remember that locks in the kernel aren't
reentrant, so we can't just acquire the lock in panthor_mmu, set the
mask, resume the IRQ, and then drop the lock, as resume would
re-acquire the lock we already hold.

> 
> >  {												\
> > -	atomic_set(&pirq->suspended, false);							\
> > +	guard(spinlock_irqsave)(&pirq->mask_lock);						\
> > +												\
> >  	pirq->mask = mask;									\
> > -	gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask);				\
> > -	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask);				\
> > +	atomic_set(&pirq->suspended, false);							\
> > +	gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask);				\
> > +	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);				\
> > +}												\
> > +												\
> > +static inline void panthor_ ## __name ## _irq_resume_restore(struct panthor_irq *pirq)		\
> 
> As mentioned above, I'd just change the semantics of _irq_resume() to
> match _irq_resume_restore() and drop _irq_resume_restore().
> 
> > +{												\
> > +	guard(spinlock_irqsave)(&pirq->mask_lock);						\
> > +												\
> > +	atomic_set(&pirq->suspended, false);							\
> > +	gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask);				\
> > +	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);				\
> >  }												\
> >  												\
> >  static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev,			\
> > @@ -463,13 +491,33 @@ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev,			\
> >  {												\
> >  	pirq->ptdev = ptdev;									\
> >  	pirq->irq = irq;									\
> > -	panthor_ ## __name ## _irq_resume(pirq, mask);						\
> > +	pirq->mask = mask;									\
> > +	spin_lock_init(&pirq->mask_lock);							\
> > +	panthor_ ## __name ## _irq_resume_restore(pirq);					\
> >  												\
> >  	return devm_request_threaded_irq(ptdev->base.dev, irq,					\
> >  					 panthor_ ## __name ## _irq_raw_handler,		\
> >  					 panthor_ ## __name ## _irq_threaded_handler,		\
> >  					 IRQF_SHARED, KBUILD_MODNAME "-" # __name,		\
> >  					 pirq);							\
> > +}												\
> > +												\
> > +static inline void panthor_ ## __name ## _irq_mask_enable(struct panthor_irq *pirq, u32 mask)	\
> 
> nit: I think I prefer _irq_{enable,disable}_events() as a name.

Agreed, that seems clearer.

> 
> > +{												\
> > +	guard(spinlock_irqsave)(&pirq->mask_lock);						\
> > +												\
> > +	pirq->mask |= mask;									\
> > +	if (!atomic_read(&pirq->suspended))							\
> > +		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);			\
> 
> There's still a problem with this solution: you might be re-enabling
> interrupts before the threaded handler is done, causing HW interrupts
> to needlessly fire. You need to repurpose ::suspended into multi-state
> field (ACTIVE, PROCESSING, SUSPENDING, SUSPENDED) to detect the case
> where the interrupt is being processed and only write to _INT_MASK if
> this ::state == ACTIVE.

Hmmm, yeah, I think there's no getting around needing more state
here. I'll do that.

Kind regards,
Nicolas Frattaroli

> > +}												\
> > +												\
> > +static inline void panthor_ ## __name ## _irq_mask_disable(struct panthor_irq *pirq, u32 mask)	\
> > +{												\
> > +	guard(spinlock_irqsave)(&pirq->mask_lock);						\
> > +												\
> > +	pirq->mask &= ~mask;									\
> > +	if (!atomic_read(&pirq->suspended))							\
> > +		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask);			\
> >  }
> >  
> >  extern struct workqueue_struct *panthor_cleanup_wq;
> > 