Date:   Tue, 6 Jun 2017 17:17:54 +0100
From:   Julien Grall <julien.grall@....com>
To:     Boris Ostrovsky <boris.ostrovsky@...cle.com>,
        xen-devel@...ts.xen.org
Cc:     sstabellini@...nel.org, jgross@...e.com,
        linux-kernel@...r.kernel.org, stable@...r.kernel.org,
        Feng Kan <fkan@....com>
Subject: Re: [PATCH] xen/privcmd: Support correctly 64KB page granularity when
 mapping memory

Hi,

It has Boris's Reviewed-by, but I don't see the patch queued. Would it
be possible to queue it for 4.12?

Cheers,

On 01/06/17 21:41, Boris Ostrovsky wrote:
> On 06/01/2017 11:38 AM, Julien Grall wrote:
>> Hi Boris,
>>
>> On 01/06/17 16:16, Boris Ostrovsky wrote:
>>> On 06/01/2017 10:01 AM, Julien Grall wrote:
>>>> Hi Boris,
>>>>
>>>> On 01/06/17 14:33, Boris Ostrovsky wrote:
>>>>> On 06/01/2017 08:50 AM, Julien Grall wrote:
>>>>>> Hi Boris,
>>>>>>
>>>>>> On 31/05/17 14:54, Boris Ostrovsky wrote:
>>>>>>> On 05/31/2017 09:03 AM, Julien Grall wrote:
>>>>>>>> Commit 5995a68 "xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity" did not go far enough to support 64KB in
>>>>>>>> mmap_batch_fn.
>>>>>>>>
>>>>>>>> The variable 'nr' is the number of 4KB chunks to map. However,
>>>>>>>> when Linux is using 64KB page granularity, the array of pages
>>>>>>>> (vma->vm_private_data) contains one page per 64KB. Fix it by
>>>>>>>> incrementing st->index correctly.
>>>>>>>>
>>>>>>>> Furthermore, st->va is not correctly incremented, as PAGE_SIZE !=
>>>>>>>> XEN_PAGE_SIZE.
>>>>>>>>
>>>>>>>> Fixes: 5995a68 ("xen/privcmd: Add support for Linux 64KB page
>>>>>>>> granularity")
>>>>>>>> CC: stable@...r.kernel.org
>>>>>>>> Reported-by: Feng Kan <fkan@....com>
>>>>>>>> Signed-off-by: Julien Grall <julien.grall@....com>
>>>>>>>> ---
>>>>>>>>  drivers/xen/privcmd.c | 4 ++--
>>>>>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>>>>> index 7a92a5e1d40c..feca75b07fdd 100644
>>>>>>>> --- a/drivers/xen/privcmd.c
>>>>>>>> +++ b/drivers/xen/privcmd.c
>>>>>>>> @@ -362,8 +362,8 @@ static int mmap_batch_fn(void *data, int nr, void *state)
>>>>>>>>                  st->global_error = 1;
>>>>>>>>          }
>>>>>>>>      }
>>>>>>>> -    st->va += PAGE_SIZE * nr;
>>>>>>>> -    st->index += nr;
>>>>>>>> +    st->va += XEN_PAGE_SIZE * nr;
>>>>>>>> +    st->index += nr / XEN_PFN_PER_PAGE;
>>>>>>>>
>>>>>>>>      return 0;
>>>>>>>>  }
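
A minimal sketch (illustration only, not part of the patch) of the
arithmetic behind the two changed lines: with 64KB kernel pages,
XEN_PAGE_SIZE is 4KB and XEN_PFN_PER_PAGE is 16, so a batch of nr 4KB
frames advances st->va by nr * 4KB but st->index by only nr / 16,
because the page array holds one 64KB page per entry. MY_PAGE_SIZE
below stands in for the kernel's PAGE_SIZE:

    /* Standalone sketch, not the driver code. */
    #include <stdio.h>

    #define XEN_PAGE_SIZE    (1UL << 12)                     /* 4KB Xen frame */
    #define MY_PAGE_SIZE     (1UL << 16)                     /* 64KB kernel page */
    #define XEN_PFN_PER_PAGE (MY_PAGE_SIZE / XEN_PAGE_SIZE)  /* 16 frames per page */

    int main(void)
    {
        unsigned long va = 0x100000, index = 0;
        int nr = 1024;                    /* 4KB frames handled in this batch */

        va    += XEN_PAGE_SIZE * nr;      /* bytes actually mapped: 4MB */
        index += nr / XEN_PFN_PER_PAGE;   /* 64KB array entries consumed: 64 */

        /* The old code advanced va by PAGE_SIZE * nr (16x too far) and
         * index by nr (16x too many entries) on a 64KB kernel. */
        printf("va=%#lx index=%lu\n", va, index);
        return 0;
    }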
>>>>>>>
>>>>>>>
>>>>>>> Are we still using PAGE_MASK for xen_remap_domain_gfn_array()?
>>>>>>
>>>>>> Do you mean in the xen_xlate_remap_gfn_array implementation? If
>>>>>> so, there is no use of PAGE_MASK, as the code has been converted
>>>>>> to support 64KB page granularity.
>>>>>>
>>>>>> If you mean the x86 version of xen_remap_domain_gfn_array, then we
>>>>>> don't really care, as x86 only uses 4KB page granularity.
>>>>>
>>>>>
>>>>> I meant right above the change that you made. Should it also be
>>>>> replaced with XEN_PAGE_MASK? (Sorry for being unclear.)
>>>>
>>>> Oh. The code in xen_remap_domain_gfn_array is relying on st->va to be
>>>> page aligned. So I think we want to keep PAGE_MASK here.
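
An illustration (sketch only, with MY_PAGE_SIZE standing in for the
kernel's PAGE_SIZE) of why the kernel-page mask, not XEN_PAGE_MASK, is
what keeps st->va aligned on a 64KB kernel:

    #include <stdio.h>

    #define XEN_PAGE_SIZE (1UL << 12)
    #define MY_PAGE_SIZE  (1UL << 16)
    #define XEN_PAGE_MASK (~(XEN_PAGE_SIZE - 1))   /* clears a 4KB offset  */
    #define MY_PAGE_MASK  (~(MY_PAGE_SIZE - 1))    /* clears a 64KB offset */

    int main(void)
    {
        unsigned long va = 0x107000;   /* 4KB aligned, but mid-64KB-page */

        printf("%#lx\n", va & XEN_PAGE_MASK);  /* 0x107000: unchanged    */
        printf("%#lx\n", va & MY_PAGE_MASK);   /* 0x100000: page aligned */
        return 0;
    }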
>>>
>>> Does this imply then that 'nr' 4K pages is an integral number of
>>> PAGE_SIZEs (i.e. (nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0), and if yes,
>>> do we test this somewhere? I don't see it.
>>
>
> I now see that this should (obviously) stay as PAGE_MASK, so
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@...cle.com>
>
> but
>
>> nr might be smaller for the last batch. But all the intermediate
>> batches should have ((nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0).
>
> how can we have nr not covering full PAGE_SIZEs? If you are using 64K
> pages, how can you map, say, only 4K (if nr==1)?
>
> -boris
>
>>
>> I think the BUILD_BUG_ON in privcmd_ioctl_mmap_batch ensures that all
>> the intermediate batches will always be an integral number of PAGE_SIZEs.
>>
>> Cheers,
>>
>
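
The invariant from the exchange above (every batch except possibly the
last covers whole kernel pages) can be pinned down at compile time. A
sketch with a hypothetical BATCH_NR constant, not the actual check in
privcmd_ioctl_mmap_batch:

    #define XEN_PAGE_SIZE    (1UL << 12)
    #define MY_PAGE_SIZE     (1UL << 16)   /* stands in for PAGE_SIZE */
    #define XEN_PFN_PER_PAGE (MY_PAGE_SIZE / XEN_PAGE_SIZE)
    #define BATCH_NR         1024          /* hypothetical frames per full batch */

    /* If every full batch is a multiple of XEN_PFN_PER_PAGE frames, then
     * (nr * XEN_PAGE_SIZE) % PAGE_SIZE == 0 for all but the last batch,
     * and st->va stays kernel-page aligned. */
    _Static_assert(BATCH_NR % XEN_PFN_PER_PAGE == 0,
                   "a full batch must cover whole kernel pages");

    int main(void) { return 0; }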

-- 
Julien Grall
