Message-ID: <20251015031952.GA2975353@ax162>
Date: Tue, 14 Oct 2025 20:19:52 -0700
From: Nathan Chancellor <nathan@...nel.org>
To: Alan Stern <stern@...land.harvard.edu>
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Ryan Chen <ryan_chen@...eedtech.com>,
Nick Desaulniers <nick.desaulniers+lkml@...il.com>,
Bill Wendling <morbo@...gle.com>,
Justin Stitt <justinstitt@...gle.com>, linux-usb@...r.kernel.org,
linux-kernel@...r.kernel.org, llvm@...ts.linux.dev
Subject: Re: [PATCH] usb: uhci: Work around bogus clang shift overflow
warning from DMA_BIT_MASK(64)
On Tue, Oct 14, 2025 at 11:07:27PM -0400, Alan Stern wrote:
> On Tue, Oct 14, 2025 at 04:38:19PM -0700, Nathan Chancellor wrote:
> > After commit 18a9ec886d32 ("usb: uhci: Add Aspeed AST2700 support"),
> > clang incorrectly warns:
> >
> > In file included from drivers/usb/host/uhci-hcd.c:855:
> > drivers/usb/host/uhci-platform.c:69:32: error: shift count >= width of type [-Werror,-Wshift-count-overflow]
> > 69 | static const u64 dma_mask_64 = DMA_BIT_MASK(64);
> > | ^~~~~~~~~~~~~~~~
> > include/linux/dma-mapping.h:93:54: note: expanded from macro 'DMA_BIT_MASK'
> > 93 | #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL<<(n))-1))
> > | ^ ~~~
> >
> > clang has a long outstanding and complicated problem [1] with generating
> > a proper control flow graph at global scope, resulting in it being
> > unable to understand that this shift can never happen due to the
> > 'n == 64' check.
> >
> > Restructure the code to do the DMA_BIT_MASK() assignments within
> > uhci_hcd_platform_probe() (i.e., function scope) to avoid this global
> > scope issue.
> >
> > Closes: https://github.com/ClangBuiltLinux/linux/issues/2136
> > Link: https://github.com/ClangBuiltLinux/linux/issues/92 [1]
> > Signed-off-by: Nathan Chancellor <nathan@...nel.org>
> > ---
>
> Do you think you could instead copy the approach used in:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git/commit/?id=274f2232a94f6ca626d60288044e13d9a58c7612
>
> IMO it is cleaner, and it also moves the DMA_BIT_MASK() computations
> into a function scope.
Sure, would something like this be what you had in mind?
diff --git a/drivers/usb/host/uhci-platform.c b/drivers/usb/host/uhci-platform.c
index 37607f985cc0..5e02f2ceafb6 100644
--- a/drivers/usb/host/uhci-platform.c
+++ b/drivers/usb/host/uhci-platform.c
@@ -65,13 +65,10 @@ static const struct hc_driver uhci_platform_hc_driver = {
.hub_control = uhci_hub_control,
};
-static const u64 dma_mask_32 = DMA_BIT_MASK(32);
-static const u64 dma_mask_64 = DMA_BIT_MASK(64);
-
static int uhci_hcd_platform_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
- const u64 *dma_mask_ptr;
+ bool dma_mask_64 = false;
struct usb_hcd *hcd;
struct uhci_hcd *uhci;
struct resource *res;
@@ -85,11 +82,11 @@ static int uhci_hcd_platform_probe(struct platform_device *pdev)
* Since shared usb code relies on it, set it here for now.
* Once we have dma capability bindings this can go away.
*/
- dma_mask_ptr = (u64 *)of_device_get_match_data(&pdev->dev);
- if (!dma_mask_ptr)
- dma_mask_ptr = &dma_mask_32;
+ if (of_device_get_match_data(&pdev->dev))
+ dma_mask_64 = true;
- ret = dma_coerce_mask_and_coherent(&pdev->dev, *dma_mask_ptr);
+ ret = dma_coerce_mask_and_coherent(&pdev->dev,
+ dma_mask_64 ? DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
if (ret)
return ret;
@@ -200,7 +197,7 @@ static void uhci_hcd_platform_shutdown(struct platform_device *op)
static const struct of_device_id platform_uhci_ids[] = {
{ .compatible = "generic-uhci", },
{ .compatible = "platform-uhci", },
- { .compatible = "aspeed,ast2700-uhci", .data = &dma_mask_64},
+ { .compatible = "aspeed,ast2700-uhci", .data = (void *)1 },
{}
};
MODULE_DEVICE_TABLE(of, platform_uhci_ids);
The

	const struct of_device_id *match;

	match = of_match_device(dev->dev.driver->of_match_table, &dev->dev);
	if (match && match->data)

part of the change you linked to is equivalent to

	if (of_device_get_match_data(&dev->dev))

if someone wanted to do a further clean up.
Cheers,
Nathan