Message-ID: <caa178fa-e183-48ba-bdf4-2ea001f4b566@huawei.com>
Date: Thu, 23 Mar 2023 11:44:48 +0800
From: "zhangpeng (AS)" <zhangpeng362@...wei.com>
To: Matthew Wilcox <willy@...radead.org>
CC: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
<akpm@...ux-foundation.org>, <mike.kravetz@...cle.com>,
<vishal.moola@...il.com>, <sidhartha.kumar@...cle.com>,
<wangkefeng.wang@...wei.com>, <sunnanyong@...wei.com>
Subject: Re: [PATCH v2 0/3] userfaultfd: convert userfaultfd functions to use
folios
On 2023/3/14 21:23, Matthew Wilcox wrote:
> On Tue, Mar 14, 2023 at 01:13:47PM +0000, Peng Zhang wrote:
>> From: ZhangPeng <zhangpeng362@...wei.com>
>>
>> This patch series converts several userfaultfd functions to use folios.
>> This series passes the userfaultfd selftests and the LTP userfaultfd
>> test cases.
> That's what you said about the earlier patchset too. Assuming you
> ran the tests, they need to be improved to find the bug that was in the
> earlier version of the patches.
I did run the tests both times before sending the patches. However, the
bug in the earlier version of the patches[1] is a corner case[2] that is
hard to trigger. Triggering it requires calling copy_large_folio_from_user()
with allow_pagefault == true, which only happens when
hugetlb_mcopy_atomic_pte() returns -ENOENT. That in turn means the earlier
call to copy_large_folio_from_user() with allow_pagefault == false failed,
i.e. copy_from_user() itself failed. Building a selftest in which
copy_from_user() fails could be difficult; one possible approach is
sketched after the call chain below.
__mcopy_atomic()
  __mcopy_atomic_hugetlb()
    hugetlb_mcopy_atomic_pte()
      copy_large_folio_from_user(..., ..., false)
        copy_from_user()          /* needs to fail here */
      /* if ret_val > 0, hugetlb_mcopy_atomic_pte() returns -ENOENT */
    if (err == -ENOENT)
      copy_large_folio_from_user(..., ..., true);
[1] https://lore.kernel.org/all/20230314033734.481904-3-zhangpeng362@huawei.com/
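For what it's worth, here is a rough userspace sketch of one way a selftest
might force that failure: make the UFFDIO_COPY source pages non-resident
(for example with MADV_DONTNEED), so that the first, pagefault-disabled
copy_from_user() faults and the kernel has to retry with
allow_pagefault == true. This is only an illustration, not a tested
program; the 2MB hugetlb size and the assumption that MADV_DONTNEED
reliably leaves the source non-resident are mine, not from this thread:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

#define HPAGE_SIZE	(2UL * 1024 * 1024)	/* assumes 2MB hugetlb pages */

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API };
	int uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
		return 1;

	/* Destination: a hugetlb mapping registered for missing faults. */
	char *dst = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = HPAGE_SIZE },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	if (dst == MAP_FAILED || ioctl(uffd, UFFDIO_REGISTER, &reg))
		return 1;

	/* Source: populate it, then drop the pages so they are not resident. */
	char *src = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED)
		return 1;
	memset(src, 0xaa, HPAGE_SIZE);
	madvise(src, HPAGE_SIZE, MADV_DONTNEED);

	/*
	 * With the source non-resident, the pagefault-disabled copy inside
	 * hugetlb_mcopy_atomic_pte() should fail, propagate -ENOENT, and be
	 * retried via copy_large_folio_from_user(..., ..., true).
	 */
	struct uffdio_copy copy = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = HPAGE_SIZE,
	};
	if (ioctl(uffd, UFFDIO_COPY, &copy))
		perror("UFFDIO_COPY");
	return 0;
}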
> -long copy_huge_page_from_user(struct page *dst_page,
> +long copy_large_folio_from_user(struct folio *dst_folio,
> const void __user *usr_src,
> - unsigned int pages_per_huge_page,
> bool allow_pagefault)
> {
> void *page_kaddr;
> unsigned long i, rc = 0;
> - unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
> + unsigned int nr_pages = folio_nr_pages(dst_folio);
> + unsigned long ret_val = nr_pages * PAGE_SIZE;
> struct page *subpage;
> + struct folio *inner_folio;
>
> - for (i = 0; i < pages_per_huge_page; i++) {
> - subpage = nth_page(dst_page, i);
> + for (i = 0; i < nr_pages; i++) {
> + subpage = folio_page(dst_folio, i);
> + inner_folio = page_folio(subpage);
> if (allow_pagefault)
> - page_kaddr = kmap(subpage);
> + page_kaddr = kmap_local_folio(inner_folio, 0);
> else
> page_kaddr = kmap_atomic(subpage);
> rc = copy_from_user(page_kaddr,
> usr_src + i * PAGE_SIZE, PAGE_SIZE);
> if (allow_pagefault)
> - kunmap(subpage);
> + kunmap_local(page_kaddr);
> else
> kunmap_atomic(page_kaddr);
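For reference, the problem in the hunk above: subpage belongs to dst_folio,
so page_folio(subpage) is dst_folio itself, and kmap_local_folio(inner_folio, 0)
maps the first page of the folio on every iteration instead of page i. A
minimal sketch of the corrected mapping loop (my illustration, not the
actual follow-up patch):

	for (i = 0; i < nr_pages; i++) {
		if (allow_pagefault)
			/* byte offset selects the i-th page of the folio */
			page_kaddr = kmap_local_folio(dst_folio, i * PAGE_SIZE);
		else
			page_kaddr = kmap_atomic(folio_page(dst_folio, i));
		rc = copy_from_user(page_kaddr,
				    usr_src + i * PAGE_SIZE, PAGE_SIZE);
		if (allow_pagefault)
			kunmap_local(page_kaddr);
		else
			kunmap_atomic(page_kaddr);
		/* check rc, break out with -ENOENT semantics as before */
	}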
Thanks,
Peng.