Message-ID: <d7185563-3a7a-d69e-d3d1-1a2b071aa85d@collabora.com>
Date: Tue, 13 Dec 2022 18:04:04 +0500
From: Muhammad Usama Anjum <usama.anjum@...labora.com>
To: Cyrill Gorcunov <gorcunov@...il.com>
Cc: Muhammad Usama Anjum <usama.anjum@...labora.com>,
Michał Mirosław <emmir@...gle.com>,
Andrei Vagin <avagin@...il.com>,
Danylo Mocherniuk <mdanylo@...gle.com>,
Alexander Viro <viro@...iv.linux.org.uk>,
Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Greg KH <gregkh@...uxfoundation.org>,
Christian Brauner <brauner@...nel.org>,
Peter Xu <peterx@...hat.com>, Yang Shi <shy828301@...il.com>,
Vlastimil Babka <vbabka@...e.cz>,
Zach O'Keefe <zokeefe@...gle.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
"Gustavo A. R. Silva" <gustavoars@...nel.org>,
Dan Williams <dan.j.williams@...el.com>, kernel@...labora.com,
Gabriel Krisman Bertazi <krisman@...labora.com>,
David Hildenbrand <david@...hat.com>,
Peter Enderborg <peter.enderborg@...y.com>,
"open list : KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>, Shuah Khan <shuah@...nel.org>,
open list <linux-kernel@...r.kernel.org>,
"open list : PROC FILESYSTEM" <linux-fsdevel@...r.kernel.org>,
"open list : MEMORY MANAGEMENT" <linux-mm@...ck.org>,
Paul Gofman <pgofman@...eweavers.com>
Subject: Re: [PATCH v6 2/3] fs/proc/task_mmu: Implement IOCTL to get and/or
 clear info about PTEs

On 12/13/22 1:42 AM, Cyrill Gorcunov wrote:
> On Wed, Nov 09, 2022 at 03:23:02PM +0500, Muhammad Usama Anjum wrote:
> ...
>> +
>> +static long do_pagemap_sd_cmd(struct mm_struct *mm, struct pagemap_scan_arg *arg)
>> +{
>> + struct mmu_notifier_range range;
>> + unsigned long __user start, end;
>> + struct pagemap_scan_private p;
>> + int ret;
>> +
>> + start = (unsigned long)untagged_addr(arg->start);
>> + if ((!IS_ALIGNED(start, PAGE_SIZE)) || (!access_ok((void __user *)start, arg->len)))
>> + return -EINVAL;
>> +
>> + if (IS_GET_OP(arg) &&
>> + ((arg->vec_len == 0) || (!access_ok((struct page_region *)arg->vec, arg->vec_len))))
>> + return -ENOMEM;
>> +
>> + if (IS_SD_OP(arg) && ((arg->required_mask & PAGEMAP_NONSD_OP_MASK) ||
>> + (arg->anyof_mask & PAGEMAP_NONSD_OP_MASK)))
>> + return -EINVAL;
>> +
>> + end = start + arg->len;
>> + p.max_pages = arg->max_pages;
>> + p.found_pages = 0;
>> + p.flags = arg->flags;
>> + p.required_mask = arg->required_mask;
>> + p.anyof_mask = arg->anyof_mask;
>> + p.excluded_mask = arg->excluded_mask;
>> + p.return_mask = arg->return_mask;
>> + p.vec_index = 0;
>> + p.vec_len = arg->vec_len;
>> +
>> + if (IS_GET_OP(arg)) {
>> + p.vec = vzalloc(arg->vec_len * sizeof(struct page_region));
>> + if (!p.vec)
>> + return -ENOMEM;
>> + } else {
>> + p.vec = NULL;
>> + }
>
> Hi Muhammad! I'm really sorry for diving in so late (unfortunately too busy
> to step in earlier). Anyway, while in general such an interface looks
> reasonable, here are a few points which really bother me: as far as I
> understand you don't need vzalloc here, plain vmalloc should work as well
> since you copy only the filled results back to userspace.

Thank you for reviewing. Correct, I'll update to use vmalloc.
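
I.e. something like this against the hunk above:

-		p.vec = vzalloc(arg->vec_len * sizeof(struct page_region));
+		p.vec = vmalloc(arg->vec_len * sizeof(struct page_region));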
> Next -- there is no restriction on the vec_len parameter; isn't this a door
> for DoS from userspace? Say I could start a number of ioctls on the same
> pagemap and try to allocate a very big amount via vec_len in sum, causing
> big pressure on the kernel's memory. Or do I miss something obvious here?
Yes, there is a chance that a large chunk of kernel memory can get allocated
here, as vec_len can be very large. We need to limit this buffer in the
current implementation; any reasonable limit should work, but I'm not sure
what it should be. Maybe a couple of hundred MBs? I'll think about it. Or I
could update the implementation so that a smaller intermediate buffer is
used, like mincore does. But that would complicate the implementation
further, as we are already returning page ranges instead of keeping just
per-page flags. I'll see what can be done.
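
For illustration, a rough sketch of the mincore-like direction (untested;
PAGEMAP_VEC_MAX, PAGEMAP_CHUNK and pagemap_scan_ops are placeholder names
here, and error/cleanup paths are elided):

/* Hypothetical cap: one page worth of page_region entries. */
#define PAGEMAP_VEC_MAX		(PAGE_SIZE / sizeof(struct page_region))
/* Hypothetical span of address space walked per iteration. */
#define PAGEMAP_CHUNK		(512 * PAGE_SIZE)

	struct page_region __user *uvec = (struct page_region __user *)arg->vec;
	unsigned long copied = 0;

	/* Allocate a bounded kernel buffer instead of vec_len entries. */
	p.vec_len = min_t(unsigned long, arg->vec_len, PAGEMAP_VEC_MAX);
	p.vec = vmalloc(p.vec_len * sizeof(struct page_region));
	if (!p.vec)
		return -ENOMEM;

	while (start < end && copied < arg->vec_len) {
		unsigned long chunk_end = min(start + PAGEMAP_CHUNK, end);

		p.vec_index = 0;
		ret = walk_page_range(mm, start, chunk_end,
				      &pagemap_scan_ops, &p);
		if (ret)
			break;
		/* Flush only the entries filled for this chunk. */
		if (copy_to_user(uvec + copied, p.vec,
				 p.vec_index * sizeof(*p.vec))) {
			ret = -EFAULT;
			break;
		}
		copied += p.vec_index;
		start = chunk_end;
	}
	vfree(p.vec);

That would keep the kernel-side allocation bounded regardless of vec_len,
which should address the DoS concern.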
--
BR,
Muhammad Usama Anjum