Message-ID: <c6e1820a-fb57-b213-aa2f-05787dae06ad@oracle.com>
Date: Fri, 8 Jun 2018 15:21:24 -0400
From: Boris Ostrovsky <boris.ostrovsky@...cle.com>
To: Stefano Stabellini <sstabellini@...nel.org>,
Oleksandr Andrushchenko <andr2000@...il.com>
Cc: xen-devel@...ts.xenproject.org, linux-kernel@...r.kernel.org,
dri-devel@...ts.freedesktop.org, linux-media@...r.kernel.org,
jgross@...e.com, konrad.wilk@...cle.com, daniel.vetter@...el.com,
matthew.d.roper@...el.com, dongwon.kim@...el.com,
Oleksandr Andrushchenko <oleksandr_andrushchenko@...m.com>,
julien.grall@....com
Subject: Re: [Xen-devel] [PATCH v2 5/9] xen/gntdev: Allow mappings for DMA buffers
On 06/08/2018 01:59 PM, Stefano Stabellini wrote:
>
>>>>>>>> @@ -325,6 +401,14 @@ static int map_grant_pages(struct grant_map *map)
>>>>>>>> map->unmap_ops[i].handle = map->map_ops[i].handle;
>>>>>>>> if (use_ptemod)
>>>>>>>> map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
>>>>>>>> +#ifdef CONFIG_XEN_GRANT_DMA_ALLOC
>>>>>>>> + else if (map->dma_vaddr) {
>>>>>>>> + unsigned long mfn;
>>>>>>>> +
>>>>>>>> + mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>> Not pfn_to_mfn()?
>>>>>> I'd love to, but pfn_to_mfn is only defined for x86, not ARM: [1]
>>>>>> and [2]
>>>>>> Thus,
>>>>>>
>>>>>> drivers/xen/gntdev.c:408:10: error: implicit declaration of function
>>>>>> ‘pfn_to_mfn’ [-Werror=implicit-function-declaration]
>>>>>> mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>>>>
>>>>>> So, I'll keep __pfn_to_mfn
>>>>> How will this work on non-PV x86?
>>>> So, you mean I need:
>>>> #ifdef CONFIG_X86
>>>> mfn = pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>> #else
>>>> mfn = __pfn_to_mfn(page_to_pfn(map->pages[i]));
>>>> #endif
>>>>
>>> I'd rather fix it in the ARM code. Stefano, why does ARM use the
>>> underscored version?
>> Do you want me to add one more patch to wrap __pfn_to_mfn in a
>> static inline for ARM? e.g.
>> static inline unsigned long pfn_to_mfn(unsigned long pfn)
>> {
>>         return __pfn_to_mfn(pfn);
>> }
>
> A Xen on ARM guest doesn't actually know the mfns behind its own
> pseudo-physical pages. This is why we stopped using pfn_to_mfn and
> started using pfn_to_bfn instead, which will generally return "pfn",
> unless the page is a foreign grant. See include/xen/arm/page.h.
> pfn_to_bfn was also introduced on x86. For example, see the usage of
> pfn_to_bfn in drivers/xen/swiotlb-xen.c. Otherwise, if you don't care
> about other mapped grants, you can just use pfn_to_gfn, that always
> returns pfn.
I think then this code needs to use pfn_to_bfn().
>
> Also, for your information, we support different page granularities in
> Linux as a Xen guest, see the comment at include/xen/arm/page.h:
>
> /*
> * The pseudo-physical frame (pfn) used in all the helpers is always based
> * on Xen page granularity (i.e 4KB).
> *
> * A Linux page may be split across multiple non-contiguous Xen pages, so
> * we have to keep track of frames based on 4KB page granularity.
> *
> * PV drivers should never make direct use of these helpers (particularly
> * pfn_to_gfn and gfn_to_pfn).
> */
>
> A Linux page could be 64K, but a Xen page is always 4K. A granted page
> is also 4K. We have helpers to take into account the offsets to map
> multiple Xen grants in a single Linux page, see for example
> drivers/xen/grant-table.c:gnttab_foreach_grant. Most PV drivers have
> been converted to work correctly with 64K pages, but if I remember
> correctly gntdev.c is the only remaining driver that doesn't support
> 64K pages yet, so you don't have to deal with it if you don't want to.
I believe somewhere in this series there is a test for PAGE_SIZE vs.
XEN_PAGE_SIZE. Right, Oleksandr?
Thanks for the explanation.
-boris