Message-ID: <20260105142914.73507ee7@fedora>
Date: Mon, 5 Jan 2026 14:29:14 +0100
From: Boris Brezillon <boris.brezillon@...labora.com>
To: Nicolas Frattaroli <nicolas.frattaroli@...labora.com>
Cc: Steven Price <steven.price@....com>, Liviu Dudau <liviu.dudau@....com>,
 Maarten Lankhorst <maarten.lankhorst@...ux.intel.com>, Maxime Ripard
 <mripard@...nel.org>, Thomas Zimmermann <tzimmermann@...e.de>, David Airlie
 <airlied@...il.com>, Simona Vetter <simona@...ll.ch>, Chia-I Wu
 <olvaffe@...il.com>, Karunika Choo <karunika.choo@....com>,
 kernel@...labora.com, linux-kernel@...r.kernel.org,
 dri-devel@...ts.freedesktop.org
Subject: Re: [PATCH v6 1/3] drm/panthor: Extend IRQ helpers for mask
 modification/restoration

On Mon, 05 Jan 2026 14:17:55 +0100
Nicolas Frattaroli <nicolas.frattaroli@...labora.com> wrote:

> On Monday, 5 January 2026 12:12:20 Central European Standard Time Boris Brezillon wrote:
> > On Tue, 23 Dec 2025 17:24:58 +0100
> > Nicolas Frattaroli <nicolas.frattaroli@...labora.com> wrote:
> >   
> > > The current IRQ helpers do not guarantee mutual exclusion covering the
> > > entire transaction of accessing the mask member and modifying the mask
> > > register.
> > > 
> > > This makes it hard, if not impossible, to implement mask modification
> > > helpers that may change one of these outside the normal
> > > suspend/resume/isr code paths.
> > > 
> > > Add a spinlock to struct panthor_irq that protects both the mask member
> > > and register. Acquire it in all code paths that access these, but drop
> > > it before processing the threaded handler function. Then, add the
> > > aforementioned new helpers: mask_enable, mask_disable, and
> > > resume_restore. The first two work by ORing in and clearing (ANDing
> > > with the complement of) the mask bits, and the last relies on the new
> > > behaviour that panthor_irq::mask is no longer set to 0 on suspend.
> > > 
> > > panthor_irq::suspended remains an atomic, as it's necessarily written to
> > > outside the mask_lock in the suspend path.
> > > 
> > > Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@...labora.com>
> > > ---
> > >  drivers/gpu/drm/panthor/panthor_device.h | 68 +++++++++++++++++++++++++++-----
> > >  1 file changed, 58 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
> > > index f35e52b9546a..bf554cf376fb 100644
> > > --- a/drivers/gpu/drm/panthor/panthor_device.h
> > > +++ b/drivers/gpu/drm/panthor/panthor_device.h
> > > @@ -73,11 +73,14 @@ struct panthor_irq {
> > >  	/** @irq: IRQ number. */
> > >  	int irq;
> > >  
> > > -	/** @mask: Current mask being applied to xxx_INT_MASK. */
> > > +	/** @mask: Values to write to xxx_INT_MASK if active. */
> > >  	u32 mask;
> > >  
> > >  	/** @suspended: Set to true when the IRQ is suspended. */
> > >  	atomic_t suspended;
> > > +
> > > +	/** @mask_lock: protects modifications to _INT_MASK and @mask */
> > > +	spinlock_t mask_lock;
> > >  };
> > >  
> > >  /**
> > > @@ -410,6 +413,8 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data)
> > >  	struct panthor_irq *pirq = data;							\
> > >  	struct panthor_device *ptdev = pirq->ptdev;						\
> > >  												\
> > > +	guard(spinlock_irqsave)(&pirq->mask_lock);						\
> > > +												\
> > >  	if (atomic_read(&pirq->suspended))							\
> > >  		return IRQ_NONE;								\
> > >  	if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT))					\
> > > @@ -424,9 +429,14 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
> > >  	struct panthor_irq *pirq = data;							\
> > >  	struct panthor_device *ptdev = pirq->ptdev;						\
> > >  	irqreturn_t ret = IRQ_NONE;								\
> > > +	u32 mask;										\
> > > +												\
> > > +	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {					\
> > > +		mask = pirq->mask;								\
> > > +	}											\
> > >  												\
> > >  	while (true) {										\
> > > -		u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask;	\
> > > +		u32 status = (gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & mask);		\
> > >  												\
> > >  		if (!status)									\
> > >  			break;									\
> > > @@ -435,26 +445,44 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
> > >  		ret = IRQ_HANDLED;								\
> > >  	}											\
> > >  												\
> > > -	if (!atomic_read(&pirq->suspended))							\
> > > -		gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask);			\
> > > +	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {					\
> > > +		if (!atomic_read(&pirq->suspended)) {						\
> > > +			/* Only restore the bits that were used and are still enabled */	\
> > > +			gpu_write(ptdev, __reg_prefix ## _INT_MASK,				\
> > > +				  gpu_read(ptdev, __reg_prefix ## _INT_MASK) |			\
> > > +				  (mask & pirq->mask));						\
> > > +		}										\
> > > +	}											\
> > >  												\
> > >  	return ret;										\
> > >  }												\
> > >  												\
> > >  static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq)			\
> > >  {												\
> > > -	pirq->mask = 0;										\
> > > -	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0);					\
> > > +	scoped_guard(spinlock_irqsave, &pirq->mask_lock) {					\
> > > +		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0);				\
> > > +	}											\
> > >  	synchronize_irq(pirq->irq);								\
> > >  	atomic_set(&pirq->suspended, true);							\
> > >  }												\
> > >  												\
> > >  static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask)	\  
> > 
> > If pirq->mask is encoding the user-selected mask, there's no point
> > passing it as an argument to _irq_resume().  
> 
> There is. I don't want to refactor all of panthor_mmu and the
> stuff it does with the mask. It needs to set-mask-and-resume in a
> race-free manner, and that's not possible unless we keep this API
> around, or we do some heavy refactoring.

That's problematic, I think. It means we now have two different
semantics for panthor_irq::mask: one where it directly reflects the
mask wanted by its user (GPU, JOB, PWR), and one where it doesn't
(MMU).
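
To illustrate (hypothetical, simplified call sites; the exact constants
and helpers used on each side are assumptions, not necessarily what
panthor_gpu.c / panthor_mmu.c do verbatim):

	/* GPU/JOB/PWR style: pirq->mask is exactly the user-requested
	 * mask, so resume just restores it.
	 */
	panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK);

	/* MMU style: the mask is recomputed by the caller and passed in
	 * at resume time, so pirq->mask no longer tracks a single
	 * user-requested value.
	 */
	panthor_mmu_irq_resume(&ptdev->mmu->irq,
			       panthor_mmu_fault_mask(ptdev, ~0));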

> Remember that locks in the
> kernel aren't reentrant, so we can't just acquire the lock in
> panthor_mmu, set the mask, resume the IRQ, and then drop the lock,
> as we'd be re-acquiring the lock in resume.

But you shouldn't have to, because panthor_irq::mask should always
reflect the user-requested mask, so whatever is in panthor_irq::mask at
resume time is what we should push to INT_MASK, and we do that with
::mask_lock held to avoid races. What we need to do, though, is patch
panthor_mmu.c to use _irq_{enable,disable}_events() instead of the
open-coded version we have at the moment.
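
Roughly, the sort of thing I have in mind (untested sketch, written as
plain functions rather than inside the IRQ helper macro for
readability; XXX_INT_MASK stands for __reg_prefix ## _INT_MASK and the
helper names are placeholders):

static inline void
panthor_xxx_irq_enable_events(struct panthor_irq *pirq, u32 events)
{
	guard(spinlock_irqsave)(&pirq->mask_lock);

	pirq->mask |= events;
	if (!atomic_read(&pirq->suspended))
		gpu_write(pirq->ptdev, XXX_INT_MASK, pirq->mask);
}

static inline void
panthor_xxx_irq_disable_events(struct panthor_irq *pirq, u32 events)
{
	guard(spinlock_irqsave)(&pirq->mask_lock);

	pirq->mask &= ~events;
	if (!atomic_read(&pirq->suspended))
		gpu_write(pirq->ptdev, XXX_INT_MASK, pirq->mask);
}

static inline void panthor_xxx_irq_resume(struct panthor_irq *pirq)
{
	atomic_set(&pirq->suspended, false);

	guard(spinlock_irqsave)(&pirq->mask_lock);

	/* pirq->mask kept the user-requested mask across suspend, so
	 * just push it back to the HW (plus whatever INT_CLEAR handling
	 * the current resume path already does).
	 */
	gpu_write(pirq->ptdev, XXX_INT_MASK, pirq->mask);
}

That way the MMU code adjusts pirq->mask exclusively through the
enable/disable helpers, and resume doesn't need a mask argument at all.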

