Date:   Mon, 3 Jun 2019 14:16:43 +0200
From:   Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To:     Geordan Neukum <gneukum1@...il.com>
Cc:     devel@...verdev.osuosl.org, YueHaibing <yuehaibing@...wei.com>,
        Mao Wenan <maowenan@...wei.com>, linux-kernel@...r.kernel.org,
        Nathan Chancellor <natechancellor@...il.com>,
        Dan Carpenter <dan.carpenter@...cle.com>
Subject: Re: [PATCH 5/5] staging: kpc2000: kpc_spi: use devm_* API to manage
 mapped I/O space

On Sun, Jun 02, 2019 at 03:58:37PM +0000, Geordan Neukum wrote:
> The kpc_spi driver does not unmap its I/O space upon error cases in the
> probe() function or upon remove(). Make the driver clean up after itself
> more maintainably by migrating to using the managed resource API.
> 
> Signed-off-by: Geordan Neukum <gneukum1@...il.com>
> ---
>  drivers/staging/kpc2000/kpc2000_spi.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/staging/kpc2000/kpc2000_spi.c b/drivers/staging/kpc2000/kpc2000_spi.c
> index b513432a26ed..32d3ec532e26 100644
> --- a/drivers/staging/kpc2000/kpc2000_spi.c
> +++ b/drivers/staging/kpc2000/kpc2000_spi.c
> @@ -471,7 +471,8 @@ kp_spi_probe(struct platform_device *pldev)
>  		goto free_master;
>  	}
>  
> -	kpspi->phys = (unsigned long)ioremap_nocache(r->start, resource_size(r));
> +	kpspi->phys = (unsigned long)devm_ioremap_nocache(&pldev->dev, r->start,
> +							  resource_size(r));

Why is this being cast?  This should just be an __iomem *, right?

>  	kpspi->base = (u64 __iomem *)kpspi->phys;

Then that cast will go away :)

Anyway, something for a future patch, this one is fine, thanks.

greg k-h
