Message-ID: <08778603-4f15-0fb7-687d-4cf42c8ddbd3@quicinc.com>
Date: Wed, 1 Jun 2022 14:04:39 +0530
From: Sibi Sankar <quic_sibis@...cinc.com>
To: Arnd Bergmann <arnd@...db.de>
CC: <bjorn.andersson@...aro.org>, <linux-kernel@...r.kernel.org>,
<linux-arm-msm@...r.kernel.org>, <sboyd@...nel.org>,
<agross@...nel.org>, <linux-remoteproc@...r.kernel.org>,
<mathieu.poirier@...aro.org>, <mka@...omium.org>
Subject: Re: [PATCH v2] remoteproc: qcom_q6v5_mss: map/unmap metadata region
before/after use
Hey Arnd,
Thanks for taking the time to review the patch.
On 5/30/22 9:41 PM, Arnd Bergmann wrote:
> On Wed, May 11, 2022 at 7:57 AM Sibi Sankar <quic_sibis@...cinc.com> wrote:
>>
>> The application processor accessing the dynamically assigned metadata
>> region after it has been assigned to the remote Q6 would lead to an
>> XPU violation. Fix this by unmapping the metadata region after the
>> firmware header copy. The metadata region is freed only after the
>> modem Q6 is done with firmware header authentication.
>>
>> Signed-off-by: Sibi Sankar <quic_sibis@...cinc.com>
>
> Acked-by: Arnd Bergmann <arnd@...db.de>
>
> Sorry for the late reply, this looks reasonable overall. Just two
> small comments:
>
>>
>> - memcpy(ptr, metadata, size);
>> + count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>> + pages = kmalloc_array(count, sizeof(struct page *), GFP_KERNEL);
>> + if (!pages) {
>> + ret = -ENOMEM;
>> + goto free_dma_attrs;
>> + }
>
> If you know a fixed upper bound for the array size, it might be easier to
> put it on the stack.
The metadata consists of the 32-bit ELF header and an SoC-dependent,
variable number of program headers. Arriving at an upper bound from the
spec seemed futile, since the maximum number of program headers
supported could be > 0xffff. The best I can do is take the maximum
metadata size across all the Qualcomm SoCs supported upstream as the
bound for putting the pages on the stack, and leave "count" as the
minimum of the dynamic calculation and that upper bound. Would that be
good enough?
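
Something along these lines is what I had in mind (untested sketch;
Q6V5_MAX_METADATA_PAGES is a made-up placeholder for whatever bound we
settle on, and the helper name is illustrative):

/* needs linux/mm.h and linux/vmalloc.h */
#define Q6V5_MAX_METADATA_PAGES	64	/* placeholder upper bound */

static void *q6v5_vmap_metadata(struct page *page, size_t size)
{
	struct page *pages[Q6V5_MAX_METADATA_PAGES];
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned int i;

	/* bail out instead of silently truncating oversized metadata */
	if (count > Q6V5_MAX_METADATA_PAGES)
		return NULL;

	for (i = 0; i < count; i++)
		pages[i] = nth_page(page, i);

	return vmap(pages, count, VM_MAP,
		    pgprot_dmacoherent(PAGE_KERNEL));
}

That would drop the kmalloc_array() and its error path entirely.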
>
>> +
>> + for (i = 0; i < count; i++)
>> + pages[i] = nth_page(page, i);
>> +
>> + vaddr = vmap(pages, count, flags, pgprot_dmacoherent(PAGE_KERNEL));
>
> I was a bit unsure about this part, as I don't know how portable this is.
> If the CPU bypasses the cache with pgprot_dmacoherent(), then the
> other side should not use a cacheable access either, but that is a property
> of the hardware that is normally hidden from the driver interface.
>
> It's probably ok here, since the pages are not mapped anywhere else
> and should have no active cache lines.
Yup, we make sure the other side can access the region only after no
cache lines are active (that's the main problem we are trying to solve
with this patch).
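
For reference, the sequence the patch establishes looks roughly like
this (simplified sketch, not the exact patch; the function name is
illustrative and error paths plus the qcom_scm_assign_mem() call are
elided):

static int mss_copy_metadata(void *metadata, size_t size,
			     struct page **pages, unsigned int count)
{
	void *vaddr;

	/* map the no-map metadata region into the kernel */
	vaddr = vmap(pages, count, VM_MAP,
		     pgprot_dmacoherent(PAGE_KERNEL));
	if (!vaddr)
		return -EBUSY;

	/* copy the fw header while the AP still owns the region */
	memcpy(vaddr, metadata, size);

	/*
	 * Tear down the AP mapping before handing the region to the
	 * Q6, so no kernel mapping (and no live cache lines) remains
	 * once the XPU protection kicks in.
	 */
	vunmap(vaddr);

	/*
	 * Ownership is then transferred to the modem VM; the region is
	 * freed only after the Q6 authenticates the fw header.
	 */
	return 0;
}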
-Sibi
>
> Arnd
>