Open Source and information security mailing list archives
Message-ID: <aYy1bmorEGWhduHU@google.com>
Date: Wed, 11 Feb 2026 16:59:26 +0000
From: Pranjal Shrivastava <praan@...gle.com>
To: Prakash Gupta <prakash.gupta@....qualcomm.com>
Cc: Robin Murphy <robin.murphy@....com>, Will Deacon <will@...nel.org>,
	Joerg Roedel <joro@...tes.org>,
	Rob Clark <robin.clark@....qualcomm.com>,
	Connor Abbott <cwabbott0@...il.com>, linux-arm-msm@...r.kernel.org,
	linux-arm-kernel@...ts.infradead.org, iommu@...ts.linux.dev,
	linux-kernel@...r.kernel.org,
	Akhil P Oommen <akhilpo@....qualcomm.com>,
	Pratyush Brahma <pratyush.brahma@....qualcomm.com>
Subject: Re: [PATCH] iommu/arm-smmu: Use pm_runtime in fault handlers

On Wed, Feb 11, 2026 at 09:40:29PM +0530, Prakash Gupta wrote:
> 
> 
> On 2/10/2026 6:45 PM, Pranjal Shrivastava wrote:
> > On Tue, Feb 10, 2026 at 04:39:56PM +0530, Prakash Gupta wrote:
> >>
> >>
> >> On 2/3/2026 1:44 AM, Pranjal Shrivastava wrote:
> >>> On Wed, Jan 28, 2026 at 06:44:35PM +0000, Robin Murphy wrote:
> >>>> [ +Pranjal as this might matter for v3 too... ]
> >>>>
> >>>
> >>> Hi Robin,
> >>>
> >>> To weigh in from the arm-smmu-v3 side, we’ve attempted to address the
> >>> "can of worms" regarding power races by leaning on these differences:
> >>>
> >>>  - Threaded IRQs for PRI/Events: In the recent series[1], the PRI and
> >>>    event handlers are fully threaded. This allows us to call 
> >>>    arm_smmu_rpm_get() safely, as the handler can sleep while waiting for
> >>>    the hardware to resume.
> >>>
> >>>  - GERROR Handling: Since GERROR remains a hard IRQ, we handle any
> >>>    pending gerrors in the suspend callback before the SMMU actually
> >>>    powers down. Any GERROR interrupts received while the device was
> >>>    suspended are treated as spurious and ignored.
> >>>
> >>> Thanks,
> >>> Praan
> >>
> >> [1] refers to the case where SMMU state is not retained across smmu
> >> device power down; I think this applies equally to both context and
> >> global faults.
> >>
> >> Since the ARM SMMU runtime resume triggers a device reset, any pending
> >> faults would be cleared during resume. The solution here can be to
> >> handle both global and context faults before allowing the SMMU device
> >> to suspend.
> >> With this approach, any hard or threaded IRQ scheduled after the SMMU
> >> device has suspended can be safely ignored.
> >> One concern I see is IOMMU fault reporting to clients while handling
> >> faults during smmu device suspend.
> > 
> > I believe by the time we've reached suspend it's safe to assume that all
> > clients have been suspended. Thus, we could simply not report the error
> > and instead scream by having a dev_warn_ratelimited about the situation.
> > 
> 
> By reporting the error I meant reporting it to the client with
> report_iommu_fault(). I agree that if the smmu device is being suspended,
> the DMA devices should have suspended by now. If so, it should be safe to
> handle the fault in the suspend path, skipping report_iommu_fault(), and
> then complete the smmu device suspend. Will update in the next patchset.
> 

Yes, that's what I meant: since the client is likely suspended, we can't
call report_iommu_fault(), because the client might have registered a
fault handler that touches MMIO (accesses registers), which is unsafe
while the client is suspended. So if we see a fault during suspend, we
could log it at an appropriate level and skip report_iommu_fault().

Thanks,
Praan

> > 
> >>
> >> Thanks,
> >> Prakash
> >>
> >> [1] https://lore.kernel.org/all/20260126151157.3418145-9-praan@google.com/
> >>
> 
