Message-ID: <CANaxB-zZFq7VD7tBBUmACUJPE9iVuTyQKfg4Jw82-U_1qw6ALg@mail.gmail.com>
Date: Fri, 4 Aug 2023 14:53:49 -0700
From: Andrei Vagin <avagin@...il.com>
To: Muhammad Usama Anjum <usama.anjum@...labora.com>
Cc: Peter Xu <peterx@...hat.com>, David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michał Mirosław <emmir@...gle.com>,
Danylo Mocherniuk <mdanylo@...gle.com>,
Paul Gofman <pgofman@...eweavers.com>,
Cyrill Gorcunov <gorcunov@...il.com>,
Mike Rapoport <rppt@...nel.org>, Nadav Amit <namit@...are.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Shuah Khan <shuah@...nel.org>,
Christian Brauner <brauner@...nel.org>,
Yang Shi <shy828301@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Yun Zhou <yun.zhou@...driver.com>,
Suren Baghdasaryan <surenb@...gle.com>,
Alex Sierra <alex.sierra@....com>,
Matthew Wilcox <willy@...radead.org>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
"Gustavo A . R . Silva" <gustavoars@...nel.org>,
Dan Williams <dan.j.williams@...el.com>,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org, linux-kselftest@...r.kernel.org,
Greg KH <gregkh@...uxfoundation.org>, kernel@...labora.com,
Michał Mirosław <mirq-linux@...e.qmqm.pl>
Subject: Re: [PATCH v26 2/5] fs/proc/task_mmu: Implement IOCTL to get and
optionally clear info about PTEs
On Thu, Jul 27, 2023 at 2:37 AM Muhammad Usama Anjum
<usama.anjum@...labora.com> wrote:
>
<snip>
> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
> +{
> + unsigned long walk_start, walk_end;
> + struct mmu_notifier_range range;
> + struct pagemap_scan_private p;
> + size_t n_ranges_out = 0;
> + int ret;
> +
> + memset(&p, 0, sizeof(p));
> + ret = pagemap_scan_get_args(&p.arg, uarg);
> + if (ret)
> + return ret;
> +
> + ret = pagemap_scan_init_bounce_buffer(&p);
> + if (ret)
> + return ret;
> +
> + /* Protection change for the range is going to happen. */
> + if (p.arg.flags & PM_SCAN_WP_MATCHING) {
> + mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
> + mm, p.arg.start, p.arg.end);
> + mmu_notifier_invalidate_range_start(&range);
> + }
> +
> + walk_start = walk_end = p.arg.start;
> + for (; walk_end != p.arg.end; walk_start = walk_end) {
> + int n_out;
> +
> + walk_end = min_t(unsigned long,
> + (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK,
> + p.arg.end);
This approach has performance implications. A basic test program that scans its
own address space takes around 20-30 seconds, even though it has only a few
small mappings, because the loop takes the mmap lock and walks the requested
range in PAGEMAP_WALK_SIZE chunks regardless of what is actually mapped. The
first optimization that comes to mind is to drop the PAGEMAP_WALK_SIZE limit
and instead stop walk_page_range() when the bounce buffer is full; after
draining the buffer, walk_page_range() can be restarted from where it left off
(see the sketch below).
The test program and perf data can be found here:
https://gist.github.com/avagin/c5a22f3c78f8cb34281602dfe9c43d10
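Something along these lines is what I have in mind. It is a completely
untested sketch and assumes the pagemap_scan_ops callbacks are taught to
return -ENOSPC once the bounce buffer is full and to always record the first
not-yet-processed address in a new, hypothetical p.walk_end field:

	walk_start = p.arg.start;
	while (walk_start < p.arg.end) {
		int n_out;

		ret = mmap_read_lock_killable(mm);
		if (ret)
			break;
		/* Walk the whole remaining range in one go. */
		ret = walk_page_range(mm, walk_start, p.arg.end,
				      &pagemap_scan_ops, &p);
		mmap_read_unlock(mm);

		/* "Buffer full" is a pause point, not an error. */
		if (ret == -ENOSPC)
			ret = 0;

		n_out = pagemap_scan_flush_buffer(&p);
		if (n_out < 0)
			ret = n_out;
		else
			n_ranges_out += n_out;

		if (ret)
			break;

		/* Hypothetical field set by the callbacks: first
		 * address that has not been processed yet. */
		walk_start = p.walk_end;
	}

With that, the mmap lock is taken once per full bounce buffer rather than once
per PAGEMAP_WALK_SIZE chunk, which should remove most of the overhead for
sparse address spaces.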
> +
> + ret = mmap_read_lock_killable(mm);
> + if (ret)
> + break;
> + ret = walk_page_range(mm, walk_start, walk_end,
> + &pagemap_scan_ops, &p);
> + mmap_read_unlock(mm);
> +
> + n_out = pagemap_scan_flush_buffer(&p);
> + if (n_out < 0)
> + ret = n_out;
> + else
> + n_ranges_out += n_out;
> +
> + if (ret)
> + break;
> + }
> +
Thanks,
Andrei