Message-ID: <alpine.DEB.2.21.1912162331110.168267@chino.kir.corp.google.com>
Date: Mon, 16 Dec 2019 23:32:16 -0800 (PST)
From: David Rientjes <rientjes@...gle.com>
To: Pan Zhang <zhangpan26@...wei.com>
cc: hushiyuan@...wei.com, ulf.hansson@...aro.org, allison@...utok.net,
gregkh@...uxfoundation.org, tglx@...utronix.de,
linux-mmc@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mmc: host: use kzalloc instead of kmalloc and memset
On Tue, 17 Dec 2019, Pan Zhang wrote:
> Signed-off-by: Pan Zhang <zhangpan26@...wei.com>
> ---
> drivers/mmc/host/vub300.c | 12 ++++--------
> 1 file changed, 4 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/mmc/host/vub300.c b/drivers/mmc/host/vub300.c
> index 6ced1b7..e18931d 100644
> --- a/drivers/mmc/host/vub300.c
> +++ b/drivers/mmc/host/vub300.c
> @@ -1227,12 +1227,10 @@ static void __download_offload_pseudocode(struct vub300_mmc_host *vub300,
> size -= 1;
> if (interrupt_size < size) {
> u16 xfer_length = roundup_to_multiple_of_64(interrupt_size);
> - u8 *xfer_buffer = kmalloc(xfer_length, GFP_KERNEL);
> + u8 *xfer_buffer = kzalloc(xfer_length, GFP_KERNEL);
> if (xfer_buffer) {
> int retval;
> memcpy(xfer_buffer, data, interrupt_size);
> - memset(xfer_buffer + interrupt_size, 0,
> - xfer_length - interrupt_size);
> size -= interrupt_size;
> data += interrupt_size;
> retval =
> @@ -1270,12 +1268,10 @@ static void __download_offload_pseudocode(struct vub300_mmc_host *vub300,
> size -= 1;
> if (ts < size) {
> u16 xfer_length = roundup_to_multiple_of_64(ts);
> - u8 *xfer_buffer = kmalloc(xfer_length, GFP_KERNEL);
> + u8 *xfer_buffer = kzalloc(xfer_length, GFP_KERNEL);
> if (xfer_buffer) {
> int retval;
> memcpy(xfer_buffer, data, ts);
> - memset(xfer_buffer + ts, 0,
> - xfer_length - ts);
> size -= ts;
> data += ts;
> retval =
I think the previous code is an optimization: with kmalloc(), the first
interrupt_size (or ts) bytes of xfer_buffer are left uninitialized because
memcpy() immediately overwrites them, and only the tail is zeroed. With
kzalloc(), those leading bytes are zeroed and then copied over, which is
unnecessary work.