Message-ID: <816da52a-f646-c114-fa6d-9320152a0e79@epam.com>
Date:   Fri, 7 Oct 2022 13:43:45 +0000
From:   Oleksandr Tyshchenko <Oleksandr_Tyshchenko@...m.com>
To:     Xenia Ragiadakou <burzalodowa@...il.com>,
        "xen-devel@...ts.xenproject.org" <xen-devel@...ts.xenproject.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC:     Stefano Stabellini <sstabellini@...nel.org>,
        Juergen Gross <jgross@...e.com>,
        Oleksandr Tyshchenko <olekstysh@...il.com>
Subject: Re: [PATCH] xen/virtio: Convert PAGE_SIZE/PAGE_SHIFT/PFN_UP to Xen
 counterparts


On 07.10.22 10:15, Xenia Ragiadakou wrote:
>
> On 10/7/22 00:13, Oleksandr Tyshchenko wrote:
>
> Hi Oleksandr


Hello Xenia


>
>>
>> On 06.10.22 20:59, Xenia Ragiadakou wrote:
>>
>> Hello Xenia
>>
>>>
>>> On 10/6/22 15:09, Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>
>>>>
>>>> Although XEN_PAGE_SIZE is equal to PAGE_SIZE (4KB) for now, it would
>>>> be more correct to use the Xen-specific #defines, as XEN_PAGE_SIZE
>>>> could be changed at some point in the future.
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@...m.com>
>>>> ---
>>>> Cc: Juergen Gross <jgross@...e.com>
>>>> Cc: Xenia Ragiadakou <burzalodowa@...il.com>
>>>>
>>>> As it was proposed at:
>>>> https://lore.kernel.org/xen-devel/20221005174823.1800761-1-olekstysh@gmail.com/
>>>>
>>>> Should go in only after that series.
>>>> ---
>>>>    drivers/xen/grant-dma-ops.c | 20 ++++++++++----------
>>>>    1 file changed, 10 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>>>> index c66f56d24013..5392fdc25dca 100644
>>>> --- a/drivers/xen/grant-dma-ops.c
>>>> +++ b/drivers/xen/grant-dma-ops.c
>>>> @@ -31,12 +31,12 @@ static DEFINE_XARRAY_FLAGS(xen_grant_dma_devices, XA_FLAGS_LOCK_IRQ);
>>>>      static inline dma_addr_t grant_to_dma(grant_ref_t grant)
>>>>    {
>>>> -    return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << PAGE_SHIFT);
>>>> +    return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << XEN_PAGE_SHIFT);
>>>>    }
>>>
>>> With this change, can the offset added to the dma handle generated by
>>> grant_to_dma() still be the offset within the page? Couldn't it
>>> corrupt the grant ref?
>>
>>
>> Good point, indeed. I think it could corrupt the grant ref if the guest
>> uses a page granularity different from Xen's (e.g. 64KB).
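
Just to make the concern concrete, here is a minimal standalone sketch (not
the kernel code; the XEN_GRANT_DMA_ADDR_OFF value and the 64KB guest page
size are assumed only for illustration):

#include <stdint.h>
#include <stdio.h>

#define XEN_PAGE_SHIFT		12
/* Assumed value, for illustration only */
#define XEN_GRANT_DMA_ADDR_OFF	(1ULL << 63)

typedef uint64_t dma_addr_t;
typedef uint32_t grant_ref_t;

static dma_addr_t grant_to_dma(grant_ref_t grant)
{
	return XEN_GRANT_DMA_ADDR_OFF | ((dma_addr_t)grant << XEN_PAGE_SHIFT);
}

static grant_ref_t dma_to_grant(dma_addr_t dma)
{
	return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> XEN_PAGE_SHIFT);
}

int main(void)
{
	grant_ref_t grant = 42;
	/* With 64KB guest pages, an in-page offset can exceed XEN_PAGE_SIZE */
	unsigned long offset = 0x5000;
	dma_addr_t handle = grant_to_dma(grant) + offset;

	/* Prints 47, not 42: the offset spilled into the grant bits */
	printf("recovered grant = %u\n", dma_to_grant(handle));
	return 0;
}
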
>>
>>
>>>
>>>>      static inline grant_ref_t dma_to_grant(dma_addr_t dma)
>>>>    {
>>>> -    return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> PAGE_SHIFT);
>>>> +    return (grant_ref_t)((dma & ~XEN_GRANT_DMA_ADDR_OFF) >> XEN_PAGE_SHIFT);
>>>>    }
>>>>      static struct xen_grant_dma_data *find_xen_grant_dma_data(struct device *dev)
>>>> @@ -79,7 +79,7 @@ static void *xen_grant_dma_alloc(struct device *dev, size_t size,
>>>>                     unsigned long attrs)
>>>>    {
>>>>        struct xen_grant_dma_data *data;
>>>> -    unsigned int i, n_pages = PFN_UP(size);
>>>> +    unsigned int i, n_pages = XEN_PFN_UP(size);
>>>>        unsigned long pfn;
>>>>        grant_ref_t grant;
>>>>        void *ret;
>>>> @@ -91,14 +91,14 @@ static void *xen_grant_dma_alloc(struct device *dev, size_t size,
>>>>        if (unlikely(data->broken))
>>>>            return NULL;
>>>>    -    ret = alloc_pages_exact(n_pages * PAGE_SIZE, gfp);
>>>> +    ret = alloc_pages_exact(n_pages * XEN_PAGE_SIZE, gfp);
>>>>        if (!ret)
>>>>            return NULL;
>>>>          pfn = virt_to_pfn(ret);
>>>>          if (gnttab_alloc_grant_reference_seq(n_pages, &grant)) {
>>>> -        free_pages_exact(ret, n_pages * PAGE_SIZE);
>>>> +        free_pages_exact(ret, n_pages * XEN_PAGE_SIZE);
>>>>            return NULL;
>>>>        }
>>>>    @@ -116,7 +116,7 @@ static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
>>>>                       dma_addr_t dma_handle, unsigned long attrs)
>>>>    {
>>>>        struct xen_grant_dma_data *data;
>>>> -    unsigned int i, n_pages = PFN_UP(size);
>>>> +    unsigned int i, n_pages = XEN_PFN_UP(size);
>>>>        grant_ref_t grant;
>>>>          data = find_xen_grant_dma_data(dev);
>>>> @@ -138,7 +138,7 @@ static void xen_grant_dma_free(struct device *dev, size_t size, void *vaddr,
>>>>          gnttab_free_grant_reference_seq(grant, n_pages);
>>>>    -    free_pages_exact(vaddr, n_pages * PAGE_SIZE);
>>>> +    free_pages_exact(vaddr, n_pages * XEN_PAGE_SIZE);
>>>>    }
>>>>      static struct page *xen_grant_dma_alloc_pages(struct device *dev, size_t size,
>>>> @@ -168,7 +168,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>>>>                         unsigned long attrs)
>>>>    {
>>>>        struct xen_grant_dma_data *data;
>>>> -    unsigned int i, n_pages = PFN_UP(offset + size);
>>>> +    unsigned int i, n_pages = XEN_PFN_UP(offset + size);
>>>
>>> The offset, here, refers to the offset in the page ...
>>>
>>>>        grant_ref_t grant;
>>>>        dma_addr_t dma_handle;
>>>>    @@ -200,8 +200,8 @@ static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
>>>>                         unsigned long attrs)
>>>>    {
>>>>        struct xen_grant_dma_data *data;
>>>> -    unsigned long offset = dma_handle & (PAGE_SIZE - 1);
>>>> -    unsigned int i, n_pages = PFN_UP(offset + size);
>>>> +    unsigned long offset = dma_handle & ~XEN_PAGE_MASK;
>>>
>>> ... while, here, it refers to the offset in the grant.
>>> So, the calculated number of grants may differ.
>>
>> Good point, I think you are right. So we would additionally need to use
>> the xen_offset_in_page() macro in xen_grant_dma_map_page();
>>
>> something like the following could be squashed into the current patch:
>>
>>
>> diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
>> index 9d5eca6d638a..bb984dc05deb 100644
>> --- a/drivers/xen/grant-dma-ops.c
>> +++ b/drivers/xen/grant-dma-ops.c
>> @@ -169,7 +169,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>>                                            unsigned long attrs)
>>    {
>>           struct xen_grant_dma_data *data;
>> -       unsigned int i, n_pages = XEN_PFN_UP(offset + size);
>> +       unsigned int i, n_pages = XEN_PFN_UP(xen_offset_in_page(offset) + size);
>>           grant_ref_t grant;
>>           dma_addr_t dma_handle;
>>
>> @@ -191,7 +191,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
>>                                   xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
>>           }
>>
>> -       dma_handle = grant_to_dma(grant) + offset;
>> +       dma_handle = grant_to_dma(grant) + xen_offset_in_page(offset);
>>
>>           return dma_handle;
>>    }
>>
>> Did I get your point right?
>>
>
> I think it's more complicated than that.
> Let's say that the offset in the page is > XEN_PAGE_SIZE; then the
> calculation of the number of grants won't take into account the part
> of the offset that is a multiple of XEN_PAGE_SIZE, i.e. it will
> calculate only the strictly necessary number of grants.
> But xen_grant_dma_map_page() grants access to the whole page because,
> as can be observed in the code snippet below, it does not take the
> page offset into account.
>
> for (i = 0; i < n_pages; i++) {
>   gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
>                                   xen_page_to_gfn(page) + i, dir == DMA_TO_DEVICE);
> }


Thanks, valid point. I agree it's indeed more complicated; I will comment
on that later. I have just pushed another fix. It is not related to
XEN_PAGE_SIZE directly, but it also deals with a page offset > PAGE_SIZE,
so it touches the same code and should be a prerequisite:

https://lore.kernel.org/all/20221007132736.2275574-1-olekstysh@gmail.com/
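
To make the scenario you describe concrete, here is a rough userspace sketch
of the arithmetic (assuming 64KB guest pages and 4KB Xen pages; the macros
are redefined locally just for the example):

#include <stdio.h>

#define XEN_PAGE_SHIFT	12
#define XEN_PAGE_SIZE	(1UL << XEN_PAGE_SHIFT)
#define XEN_PAGE_MASK	(~(XEN_PAGE_SIZE - 1))
#define XEN_PFN_UP(x)	(((x) + XEN_PAGE_SIZE - 1) >> XEN_PAGE_SHIFT)
#define xen_offset_in_page(p)	((unsigned long)(p) & ~XEN_PAGE_MASK)

int main(void)
{
	unsigned long offset = 0x5000;	/* in-page offset > XEN_PAGE_SIZE */
	unsigned long size = 0x1000;

	/* Grants reserved with the squash-in proposed earlier */
	unsigned int n_pages = XEN_PFN_UP(xen_offset_in_page(offset) + size);

	/*
	 * The grant loop would cover xen_page_to_gfn(page) + 0 .. n_pages - 1,
	 * i.e. the first 4KB of the 64KB page, while the data actually lives
	 * in the Xen page starting at offset 0x5000 (index 5).
	 */
	printf("n_pages = %u, data starts at Xen page %lu within the guest page\n",
	       n_pages, offset >> XEN_PAGE_SHIFT);
	return 0;
}
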


>
>>>
>>>
>>>> +    unsigned int i, n_pages = XEN_PFN_UP(offset + size);
>>>>        grant_ref_t grant;
>>>>          if (WARN_ON(dir == DMA_NONE))
>>>
>>
>> Thank you.
>>
>>
>
-- 
Regards,

Oleksandr Tyshchenko
