Message-ID: <b0380b1b-2583-89fb-72c9-b9a606ee6a8a@redhat.com>
Date: Fri, 26 Jan 2018 22:52:28 -0800
From: Laura Abbott <labbott@...hat.com>
To: Chen Feng <puck.chen@...ilicon.com>,
Liam Mark <lmark@...eaurora.org>,
Sumit Semwal <sumit.semwal@...aro.org>
Cc: devel@...verdev.osuosl.org, Greg KH <gregkh@...uxfoundation.org>,
linaro-mm-sig@...ts.linaro.org, linux-kernel@...r.kernel.org,
Dan Carpenter <dan.carpenter@...cle.com>,
"Xiaqing (A)" <saberlily.xia@...ilicon.com>,
Zhuangluan Su <suzhuangluan@...ilicon.com>
Subject: Re: [Linaro-mm-sig] [PATCH v3] staging: android: ion: Zero CMA
allocated memory
On 01/26/2018 06:04 PM, Chen Feng wrote:
>
>
> On 2018/1/27 1:48, Liam Mark wrote:
>> Since commit 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
>> the CMA API is now used directly and therefore the allocated memory is no
>> longer automatically zeroed.
>>
>> Explicitly zero CMA allocated memory to ensure that no data is exposed to
>> userspace.
>>
>> Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
>> Signed-off-by: Liam Mark <lmark@...eaurora.org>
>> ---
>> Changes in v2:
>> - Clean up the commit message.
>> - Add 'Fixes:'
>>
>> Changes in v3:
>> - Add support for highmem pages
>>
>> drivers/staging/android/ion/ion_cma_heap.c | 17 +++++++++++++++++
>> 1 file changed, 17 insertions(+)
>>
>> diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
>> index 86196ffd2faf..fa3e4b7e0c9f 100644
>> --- a/drivers/staging/android/ion/ion_cma_heap.c
>> +++ b/drivers/staging/android/ion/ion_cma_heap.c
>> @@ -21,6 +21,7 @@
>> #include <linux/err.h>
>> #include <linux/cma.h>
>> #include <linux/scatterlist.h>
>> +#include <linux/highmem.h>
>>
>> #include "ion.h"
>>
>> @@ -51,6 +52,22 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
>>         if (!pages)
>>                 return -ENOMEM;
>>
>> +        if (PageHighMem(pages)) {
>> +                unsigned long nr_clear_pages = nr_pages;
>> +                struct page *page = pages;
>> +
>> +                while (nr_clear_pages > 0) {
>> +                        void *vaddr = kmap_atomic(page);
>> +
>> +                        memset(vaddr, 0, PAGE_SIZE);
>> +                        kunmap_atomic(vaddr);
>
> Here, this approach may cause a performance hit from mapping, memsetting and unmapping the pages one by one.
>
> Take a look at ion_heap_pages_zero.
>
> Not very critical, though, since arm64 always has a linear mapping.
>
This is under a PageHighMem() check, so arm64 isn't affected. It's also
the same algorithm arm32's dma-mapping.c uses, so I'd like to see some
data on the performance improvement before we go changing things
too much.
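
For reference, the batching Chen is pointing at (ion_heap_pages_zero())
maps chunks of pages with vm_map_ram() and clears each chunk with a
single memset() instead of one kmap_atomic()/kunmap_atomic() per page.
Here's a rough sketch of that idea for this contiguous CMA allocation;
the function name and chunk size are illustrative, not the exact ion
helper:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Sketch only: zero a physically contiguous allocation in chunks of up
 * to 32 pages, mapping each chunk once with vm_map_ram() rather than
 * mapping and unmapping every page individually.
 */
static int cma_zero_pages_batched(struct page *pages, unsigned long nr_pages)
{
        struct page *chunk[32];
        unsigned long done = 0;

        while (done < nr_pages) {
                unsigned long i, n = min_t(unsigned long, nr_pages - done,
                                           ARRAY_SIZE(chunk));
                void *vaddr;

                for (i = 0; i < n; i++)
                        chunk[i] = pages + done + i;

                /* vm_map_ram() still takes a pgprot argument on v4.15 */
                vaddr = vm_map_ram(chunk, n, -1, PAGE_KERNEL);
                if (!vaddr)
                        return -ENOMEM;

                memset(vaddr, 0, n * PAGE_SIZE);
                vm_unmap_ram(vaddr, n);
                done += n;
        }

        return 0;
}

Whether that is a measurable win over the kmap_atomic() loop on the
32-bit highmem systems this path actually targets is exactly the data
I'd want to see.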
Thanks,
Laura
>
>> +                        page++;
>> +                        nr_clear_pages--;
>> +                }
>> +        } else {
>> +                memset(page_address(pages), 0, size);
>> +        }
>> +
>>         table = kmalloc(sizeof(*table), GFP_KERNEL);
>>         if (!table)
>>                 goto err;
>>
>