Message-ID: <5A6BDE11.9070403@hisilicon.com>
Date: Sat, 27 Jan 2018 10:04:01 +0800
From: Chen Feng <puck.chen@...ilicon.com>
To: Liam Mark <lmark@...eaurora.org>,
Laura Abbott <labbott@...hat.com>,
"Sumit Semwal" <sumit.semwal@...aro.org>
CC: <devel@...verdev.osuosl.org>, Greg KH <gregkh@...uxfoundation.org>,
<linaro-mm-sig@...ts.linaro.org>, <linux-kernel@...r.kernel.org>,
"Dan Carpenter" <dan.carpenter@...cle.com>,
"Xiaqing (A)" <saberlily.xia@...ilicon.com>,
Zhuangluan Su <suzhuangluan@...ilicon.com>
Subject: Re: [Linaro-mm-sig] [PATCH v3] staging: android: ion: Zero CMA
allocated memory
On 2018/1/27 1:48, Liam Mark wrote:
> Since commit 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
> the CMA API is now used directly and therefore the allocated memory is no
> longer automatically zeroed.
>
> Explicitly zero CMA allocated memory to ensure that no data is exposed to
> userspace.
>
> Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
> Signed-off-by: Liam Mark <lmark@...eaurora.org>
> ---
> Changes in v2:
> - Clean up the commit message.
> - Add 'Fixes:'
>
> Changes in v3:
> - Add support for highmem pages
>
> drivers/staging/android/ion/ion_cma_heap.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
> index 86196ffd2faf..fa3e4b7e0c9f 100644
> --- a/drivers/staging/android/ion/ion_cma_heap.c
> +++ b/drivers/staging/android/ion/ion_cma_heap.c
> @@ -21,6 +21,7 @@
> #include <linux/err.h>
> #include <linux/cma.h>
> #include <linux/scatterlist.h>
> +#include <linux/highmem.h>
>
> #include "ion.h"
>
> @@ -51,6 +52,22 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
> if (!pages)
> return -ENOMEM;
>
> + if (PageHighMem(pages)) {
> + unsigned long nr_clear_pages = nr_pages;
> + struct page *page = pages;
> +
> + while (nr_clear_pages > 0) {
> + void *vaddr = kmap_atomic(page);
> +
> + memset(vaddr, 0, PAGE_SIZE);
> + kunmap_atomic(vaddr);
Here: mapping, memsetting and unmapping the pages one by one like this may add noticeable latency.
Take a look at ion_heap_pages_zero, which clears pages in batches; see the sketch at the end of this mail.
Not very critical though, since arm64 always has a linear mapping.
> + page++;
> + nr_clear_pages--;
> + }
> + } else {
> + memset(page_address(pages), 0, size);
> + }
> +
> table = kmalloc(sizeof(*table), GFP_KERNEL);
> if (!table)
> goto err;
>
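For reference, something along these lines (untested sketch, roughly what the ion_heap_pages_zero path does internally via vmap(); the helper name and batch size here are just illustrative):

static int cma_pages_zero_batched(struct page *pages, unsigned long nr_pages)
{
	struct page *batch[32];
	unsigned long done = 0;

	while (done < nr_pages) {
		unsigned int i, n = min_t(unsigned long, nr_pages - done, 32);
		void *vaddr;

		for (i = 0; i < n; i++)
			batch[i] = pages + done + i;

		/* Map up to 32 pages at once and clear them with one memset. */
		vaddr = vmap(batch, n, VM_MAP, PAGE_KERNEL);
		if (!vaddr)
			return -ENOMEM;

		memset(vaddr, 0, (size_t)n * PAGE_SIZE);
		vunmap(vaddr);
		done += n;
	}

	return 0;
}

(needs linux/mm.h, linux/vmalloc.h and linux/highmem.h)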