Message-ID: <dcf5d655-b8e5-581c-a4fc-b4c7c7865106@collabora.com>
Date: Thu, 6 Jul 2023 10:17:19 +0500
From: Muhammad Usama Anjum <usama.anjum@...labora.com>
To: Andrei Vagin <avagin@...il.com>
Cc: Muhammad Usama Anjum <usama.anjum@...labora.com>,
Peter Xu <peterx@...hat.com>,
David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michał Mirosław <emmir@...gle.com>, Danylo Mocherniuk <mdanylo@...gle.com>,
Paul Gofman <pgofman@...eweavers.com>,
Cyrill Gorcunov <gorcunov@...il.com>,
Mike Rapoport <rppt@...nel.org>, Nadav Amit <namit@...are.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Shuah Khan <shuah@...nel.org>,
Christian Brauner <brauner@...nel.org>,
Yang Shi <shy828301@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Yun Zhou <yun.zhou@...driver.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Alex Sierra <alex.sierra@....com>,
Matthew Wilcox <willy@...radead.org>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
"Gustavo A . R . Silva" <gustavoars@...nel.org>,
Dan Williams <dan.j.williams@...el.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kselftest@...r.kernel.org,
Greg KH <gregkh@...uxfoundation.org>, kernel@...labora.com
Subject: Re: [PATCH v22 2/5] fs/proc/task_mmu: Implement IOCTL to get and optionally clear info about PTEs

On 7/3/23 8:07 PM, Andrei Vagin wrote:
> On Mon, Jul 03, 2023 at 11:47:37AM +0500, Muhammad Usama Anjum wrote:
>> On 6/30/23 8:01 PM, Andrei Vagin wrote:
>>> On Wed, Jun 28, 2023 at 02:54:23PM +0500, Muhammad Usama Anjum wrote:
>>>> This IOCTL, PAGEMAP_SCAN, on the pagemap file can be used to get and/or
>>>> clear info about page table entries. The following operations are
>>>> supported in this ioctl:
>>>> - Get information about whether pages have been written to
>>>>   (PAGE_IS_WRITTEN), are file-mapped (PAGE_IS_FILE), present
>>>>   (PAGE_IS_PRESENT), swapped (PAGE_IS_SWAPPED), or map the zero pfn
>>>>   (PAGE_IS_PFNZERO).
>>>> - Find pages which have been written to and/or write-protect those pages
>>>>   (atomic PM_SCAN_OP_GET + PM_SCAN_OP_WP).
>>>>
>>>> This IOCTL can be extended to get information about more PTE bits. The
>>>> entire address range passed by the user, [start, end), is scanned until
>>>> either the user-provided buffer is full or max_pages pages have been found.
>>>>
>>>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@...labora.com>
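
For readers following along, here is a minimal userspace sketch of
driving this ioctl. Field and flag names are taken from the quoted
code; the uapi header name and the exact v22 struct layout (e.g.
whether a size member must also be set) are assumptions, not confirmed
by the patch text:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>	/* assumed home of pm_scan_arg/PAGEMAP_SCAN */

#define VEC_LEN 256

static int scan_written(void *from, void *to)
{
	struct page_region vec[VEC_LEN];
	struct pm_scan_arg arg = {
		.start = (uintptr_t)from,
		.end = (uintptr_t)to,
		.vec = (uintptr_t)vec,
		.vec_len = VEC_LEN,
		.max_pages = 0,			/* 0 means no limit */
		.required_mask = PAGE_IS_WRITTEN,
		.return_mask = PAGE_IS_WRITTEN,
		.flags = PM_SCAN_OP_GET,
	};
	int fd = open("/proc/self/pagemap", O_RDONLY);
	long n, i;

	if (fd < 0)
		return -1;
	/* A positive return is the number of filled vec entries; the
	 * kernel also writes the resume address back to arg.start, so
	 * a caller can loop until arg.start reaches arg.end. */
	n = ioctl(fd, PAGEMAP_SCAN, &arg);
	for (i = 0; i < n; i++)
		printf("%llx: %llx pages, flags %llx\n",
		       (unsigned long long)vec[i].start,
		       (unsigned long long)vec[i].len,
		       (unsigned long long)vec[i].flags);
	close(fd);
	return n < 0 ? -1 : 0;
}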
>>>
>>> <snip>
>>>
>>>> +
>>>> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
>>>> +{
>>>> + struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
>>>> + unsigned long long start, end, walk_start, walk_end;
>>>> + unsigned long empty_slots, vec_index = 0;
>>>> + struct mmu_notifier_range range;
>>>> + struct page_region __user *vec;
>>>> + struct pagemap_scan_private p;
>>>> + struct pm_scan_arg arg;
>>>> + int ret = 0;
>>>> +
>>>> + if (copy_from_user(&arg, uarg, sizeof(arg)))
>>>> + return -EFAULT;
>>>> +
>>>> + start = untagged_addr((unsigned long)arg.start);
>>>> + end = untagged_addr((unsigned long)arg.end);
>>>> + vec = (struct page_region __user *)untagged_addr((unsigned long)arg.vec);
>>>> +
>>>> + ret = pagemap_scan_args_valid(&arg, start, end, vec);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + p.max_pages = (arg.max_pages) ? arg.max_pages : ULONG_MAX;
>>>> + p.found_pages = 0;
>>>> + p.required_mask = arg.required_mask;
>>>> + p.anyof_mask = arg.anyof_mask;
>>>> + p.excluded_mask = arg.excluded_mask;
>>>> + p.return_mask = arg.return_mask;
>>>> + p.flags = arg.flags;
>>>> + p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
>>>> + PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
>>>> + p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
>>>> + p.vec_buf = NULL;
>>>> + p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
>>>> +
>>>> + /*
>>>> + * Allocate a smaller buffer to get output from inside the page
>>>> + * walk functions and walk the page range in PAGEMAP_WALK_SIZE
>>>> + * chunks. We want to return the output to the user in compact
>>>> + * form, where no two consecutive entries are contiguous and
>>>> + * share the same flags (such runs get merged). So keep the
>>>> + * latest element in p.cur_buf between walks, and copy p.cur_buf
>>>> + * out to the user buffer once at the end of the walk.
>>>> + */
>>>> + if (IS_PM_SCAN_GET(p.flags)) {
>>>> + p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
>>>> + GFP_KERNEL);
>>>> + if (!p.vec_buf)
>>>> + return -ENOMEM;
>>>> + }
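
For reviewers trying to follow the compaction described in the comment
above: the merge into cur_buf amounts to something like the sketch
below. The helper name is invented for illustration (the patch's real
output routine differs in details), and cur_buf.len being counted in
pages is inferred from context:

static int pagemap_scan_push(struct pagemap_scan_private *p,
			     unsigned long addr, unsigned long n_pages,
			     unsigned long flags)
{
	struct page_region *cur = &p->cur_buf;

	/* Extend the current run when the new range is adjacent and
	 * the flags match; this is what keeps the output compact. */
	if (cur->len && cur->flags == flags &&
	    cur->start + cur->len * PAGE_SIZE == addr) {
		cur->len += n_pages;
		return 0;
	}

	/* Otherwise flush the finished run to vec_buf and start a new
	 * run in cur_buf. */
	if (cur->len) {
		if (p->vec_buf_index >= p->vec_buf_len)
			return PM_SCAN_END_WALK;
		p->vec_buf[p->vec_buf_index++] = *cur;
	}
	cur->start = addr;
	cur->len = n_pages;
	cur->flags = flags;
	return 0;
}

Note also that vec_buf is flushed to the user's vec only after
mmap_read_unlock(): copy_to_user() may fault, and the user buffer can
live in the very mm being walked, so writing it under the mmap lock
would not be safe.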
>>>> +
>>>> + if (IS_PM_SCAN_WP(p.flags)) {
>>>> + mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
>>>> + mm, start, end);
>>>> + mmu_notifier_invalidate_range_start(&range);
>>>> + }
>>>> +
>>>> + walk_start = walk_end = start;
>>>> + while (walk_end < end && !ret) {
>>>> + if (IS_PM_SCAN_GET(p.flags)) {
>>>> + p.vec_buf_index = 0;
>>>> +
>>>> + /*
>>>> + * All data is copied to cur_buf first. When more data
>>>> + * is found, we push cur_buf into vec_buf and copy the
>>>> + * new data into cur_buf. Subtract 1 from the empty
>>>> + * slots, as one user slot must stay reserved for the
>>>> + * final flush of cur_buf, which isn't counted here.
>>>> + */
>>>> + empty_slots = arg.vec_len - vec_index;
>>>> + p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
>>>> + }
>>>> +
>>>> + ret = mmap_read_lock_killable(mm);
>>>> + if (ret)
>>>> + goto return_status;
>>>> +
>>>> + walk_end = min((walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK, end);
>>>> +
>>>> + ret = walk_page_range(mm, walk_start, walk_end,
>>>> + &pagemap_scan_ops, &p);
>>>> + mmap_read_unlock(mm);
>>>> +
>>>> + if (ret && ret != PM_SCAN_FOUND_MAX_PAGES &&
>>>> + ret != PM_SCAN_END_WALK)
>>>> + goto return_status;
>>>> +
>>>> + walk_start = walk_end;
>>>> + if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
>>>> + if (copy_to_user(&vec[vec_index], p.vec_buf,
>>>> + p.vec_buf_index * sizeof(*p.vec_buf))) {
>>>> + /*
>>>> + * Return error even though the OP succeeded
>>>> + */
>>>> + ret = -EFAULT;
>>>> + goto return_status;
>>>> + }
>>>> + vec_index += p.vec_buf_index;
>>>> + }
>>>> + }
>>>> +
>>>> + if (p.cur_buf.len) {
>>>> + if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
>>>> + ret = -EFAULT;
>>>> + goto return_status;
>>>> + }
>>>> + vec_index++;
>>>> + }
>>>> +
>>>> + ret = vec_index;
>>>> +
>>>> +return_status:
>>>> + arg.start = (unsigned long)walk_end;
>>>
>>> This doesn't look right. pagemap_scan_pmd_entry can stop early; for
>>> example, when it hits the max_pages limit. Am I missing something?
>> walk_page_range() calls pagemap_scan_pmd_entry(), so whatever status
>> pagemap_scan_pmd_entry() returns is propagated by walk_page_range() back to
>> this function, where we handle the status code. Once the while loop starts,
>> there is only one return path, so there is no path on which setting
>> arg.start could be missed.
>
> I mean that walk_end isn't actually the end address. The end address
> should be the next page after the last page that was examined, and here
> we don't know whether all pages in [walk_start, walk_end) have been
> examined.
Sorry, understood. Let me post the new patch series.
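
For illustration, one possible shape of the fix (placeholder names,
not the final patch): let pagemap_scan_pmd_entry() record the address
it actually stopped at, and report that back instead of the chunk
boundary.

/* Sketch only: add a field to pagemap_scan_private ... */
struct pagemap_scan_private {
	/* ... existing members ... */
	unsigned long walk_end_addr;	/* first address not yet examined */
};

/* ... have pagemap_scan_pmd_entry() update it on every exit path,
 * including the early ones (max_pages hit, output buffer full):
 *
 *	p->walk_end_addr = addr;
 *
 * ... and have do_pagemap_scan() report the recorded address instead
 * of the chunk boundary:
 *
 *	arg.start = (unsigned long)p.walk_end_addr;
 */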
>
>>
>>>
>>>> + if (copy_to_user(&uarg->start, &arg.start, sizeof(arg.start)))
>>>> + ret = -EFAULT;
>>>> +
>>>> + if (IS_PM_SCAN_WP(p.flags))
>>>> + mmu_notifier_invalidate_range_end(&range);
>>>> +
>>>> + kfree(p.vec_buf);
>>>> + return ret;
>>>> +}
>>>> +
--
BR,
Muhammad Usama Anjum