Message-ID: <56FBB709.5050802@arm.com>
Date:	Wed, 30 Mar 2016 12:22:49 +0100
From:	Julien Grall <julien.grall@....com>
To:	Shannon Zhao <zhaoshenglong@...wei.com>,
	Shannon Zhao <shannon.zhao@...aro.org>,
	linux-arm-kernel@...ts.infradead.org,
	stefano.stabellini@...rix.com, david.vrabel@...rix.com
Cc:	devicetree@...r.kernel.org, linux-efi@...r.kernel.org,
	linux-kernel@...r.kernel.org, catalin.marinas@....com,
	will.deacon@....com, peter.huangpeng@...wei.com,
	xen-devel@...ts.xen.org
Subject: Re: [PATCH v7 03/17] Xen: xlate: Use page_to_xen_pfn instead of
 page_to_pfn

Hi Shannon,

On 30/03/16 08:38, Shannon Zhao wrote:
>
>
> On 2016/3/30 0:28, Julien Grall wrote:
>> On 24/03/16 14:44, Shannon Zhao wrote:
>>> Make xen_xlate_map_ballooned_pages work with 64K pages. In that case
>>> Kernel pages are 64K in size but Xen pages remain 4K in size. Xen pfns
>>> refer to 4K pages.
>>>
>>> Signed-off-by: Shannon Zhao <shannon.zhao@...aro.org>
>>> Reviewed-by: Stefano Stabellini <stefano.stabellini@...citrix.com>
>>> ---
>>>    drivers/xen/xlate_mmu.c | 26 ++++++++++++++++----------
>>>    1 file changed, 16 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
>>> index 9692656..28f728b 100644
>>> --- a/drivers/xen/xlate_mmu.c
>>> +++ b/drivers/xen/xlate_mmu.c
>>> @@ -207,9 +207,12 @@ int __init
>>> xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>>>        void *vaddr;
>>>        int rc;
>>>        unsigned int i;
>>> +    unsigned long nr_pages;
>>> +    xen_pfn_t xen_pfn = 0;
>>>
>>>        BUG_ON(nr_grant_frames == 0);
>>> -    pages = kcalloc(nr_grant_frames, sizeof(pages[0]), GFP_KERNEL);
>>> +    nr_pages = DIV_ROUND_UP(nr_grant_frames, XEN_PFN_PER_PAGE);
>>> +    pages = kcalloc(nr_pages, sizeof(pages[0]), GFP_KERNEL);
>>>        if (!pages)
>>>            return -ENOMEM;
>>>
>>> @@ -218,22 +221,25 @@ int __init
>>> xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>>>            kfree(pages);
>>>            return -ENOMEM;
>>>        }
>>> -    rc = alloc_xenballooned_pages(nr_grant_frames, pages);
>>> +    rc = alloc_xenballooned_pages(nr_pages, pages);
>>>        if (rc) {
>>> -        pr_warn("%s Couldn't balloon alloc %ld pfns rc:%d\n", __func__,
>>> -            nr_grant_frames, rc);
>>> +        pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
>>> +            nr_pages, rc);
>>>            kfree(pages);
>>>            kfree(pfns);
>>>            return rc;
>>>        }
>>> -    for (i = 0; i < nr_grant_frames; i++)
>>> -        pfns[i] = page_to_pfn(pages[i]);
>>> +    for (i = 0; i < nr_grant_frames; i++) {
>>> +        if ((i % XEN_PFN_PER_PAGE) == 0)
>>> +            xen_pfn = page_to_xen_pfn(pages[i / XEN_PFN_PER_PAGE]);
>>> +        pfns[i] = pfn_to_gfn(xen_pfn++);
>>> +    }
>>
>> Would it be possible to re-use xen_for_each_gfn? This will avoid
>> open-coding the loop to break down the Linux page.
> I don't think so. Using xen_for_each_gfn would require factoring
> "pfns[i] = pfn_to_gfn(xen_pfn++)" out into a function that takes
> pfns[i] as a parameter. How can we pass pfns[i]?

By using the variable data. Something along those lines:

struct map_balloon_pages
{
       xen_pfn_t *pfns;
       unsigned int idx;
};

static void setup_balloon_gfn(unsigned long gfn, void *data)
{
       struct map_balloon_pages *info = data;

       info->pfns[info->idx] = gfn;
       info->idx++;
}

And then in xen_xlate_map_ballooned_pages, with a local struct
map_balloon_pages data whose pfns field points to the pfns array and
whose idx starts at 0:

xen_for_each_gfn(pages, nr_grant_frames, setup_balloon_gfn, &data);

Regards,

-- 
Julien Grall
