Message-ID: <3d1642fb-79f9-4abe-8856-0ee67da9666c@arm.com>
Date: Mon, 25 Nov 2024 15:03:08 +0000
From: Robin Murphy <robin.murphy@....com>
To: Brian Johannesmeyer <bjohannesmeyer@...il.com>,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
Michael Kelley <mikelley@...rosoft.com>, Christoph Hellwig <hch@....de>,
Marek Szyprowski <m.szyprowski@...sung.com>, iommu@...ts.linux.dev,
linux-kernel@...r.kernel.org
Cc: Raphael Isemann <teemperor@...il.com>,
Cristiano Giuffrida <giuffrida@...vu.nl>, Herbert Bos <h.j.bos@...nl>,
Greg KH <gregkh@...uxfoundation.org>
Subject: Re: [RFC 1/1] swiotlb: Replace BUG_ON() with graceful error handling
On 2024-11-22 7:13 pm, Brian Johannesmeyer wrote:
> Replace the BUG_ON() assertion in swiotlb_release_slots() with a
> conditional check and return. This change prevents a corrupted tlb_addr
> from causing a kernel panic.
Hmm, looking again, how exactly *does* this happen? To get here from
swiotlb_unmap_single(), swiotlb_find_pool() has already determined that
"tlb_addr" is within the range belonging to "mem", so if it is somehow
possible for it to then convert into an out-of-bounds index, maybe that
does actually imply some bug in SWIOTLB itself where "mem" is
misconfigured...
Thanks,
Robin.
> Co-developed-by: Raphael Isemann <teemperor@...il.com>
> Signed-off-by: Raphael Isemann <teemperor@...il.com>
> Signed-off-by: Brian Johannesmeyer <bjohannesmeyer@...il.com>
> ---
> kernel/dma/swiotlb.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index aa0a4a220719..54b4f9665772 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -834,7 +834,11 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
> * While returning the entries to the free list, we merge the entries
> * with slots below and above the pool being returned.
> */
> - BUG_ON(aindex >= mem->nareas);
> + if (unlikely(aindex >= mem->nareas)) {
> + dev_err(dev, "%s: invalid area index (%d >= %d)\n", __func__,
> + aindex, mem->nareas);
> + return;
> + }
>
> spin_lock_irqsave(&area->lock, flags);
> if (index + nslots < ALIGN(index + 1, IO_TLB_SEGSIZE))