Message-ID: <b2814a36-4021-b2a4-52db-6ac707d32835@quicinc.com>
Date: Wed, 27 Jul 2022 15:44:42 +0800
From: Kassey Li <quic_yingangl@...cinc.com>
To: "Vlastimil Babka (SUSE)" <vbabka@...nel.org>,
Matthew Wilcox <willy@...radead.org>
CC: <akpm@...ux-foundation.org>, <minchan@...nel.org>,
<iamjoonsoo.kim@....com>, <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>, <quic_guptap@...cinc.com>
Subject: Re: [PATCH] mm/page_owner.c: allow page_owner with given
start_pfn/count
On 7/26/2022 10:03 PM, Vlastimil Babka (SUSE) wrote:
> On 7/25/22 10:39, Kassey Li wrote:
>> Hi, Matthew:
>> Sorry for the delay, I just started to learn how to upstream a patch,
>> and set up my Thunderbird for plain text only.
>> You are right, two users will cause a problem here.
>> The use case is to dump a CMA area to understand the page usage in a
>> given CMA pool. Second, dumping the page owner for the whole of memory
>> is very time-consuming; most of our Android devices have 8 GB of
>> memory now.
>> I will research and check again; if you have more ideas on this,
>> please kindly share.
>
> You could try employing lseek() to specify the start pfn, and as for end
> pfn, the process can just stop reading and close when it has seen enough?
lseek is a good idea.
read_page_owner() starts with the following:
    pfn = min_low_pfn + *ppos;
so we need to export min_low_pfn to userspace, and then we can decide the
ppos to seek to: (my_cma.base_pfn - min_low_pfn) is the ppos we want to set.
Is there any concern with exporting min_low_pfn?
Or should we use a mutex lock for my previous debugfs version of the patch?
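
For illustration, a minimal userspace sketch of the lseek() approach. The
page_owner path is the real debugfs file, but the pfn values are made up,
min_low_pfn would still need some kernel export, and it assumes the file
supports seeking with *ppos interpreted as a pfn offset, which is exactly
what is being discussed here:

/*
 * Sketch only: the pfn values below are hypothetical placeholders,
 * and exporting min_low_pfn is still an open question.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical values; min_low_pfn would come from a kernel
	 * export, cma_base_pfn from the CMA region of interest. */
	unsigned long min_low_pfn  = 0x40000;
	unsigned long cma_base_pfn = 0x80000;

	int fd = open("/sys/kernel/debug/page_owner", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* read_page_owner() computes pfn = min_low_pfn + *ppos, so
	 * seeking to (cma_base_pfn - min_low_pfn) starts the dump at
	 * the CMA base pfn. */
	if (lseek(fd, (off_t)(cma_base_pfn - min_low_pfn), SEEK_SET) < 0) {
		perror("lseek");
		return 1;
	}

	/* No end pfn is needed: the reader simply stops and closes
	 * the file once it has seen enough. */
	char buf[4096];
	ssize_t n;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, (size_t)n, stdout);

	close(fd);
	return 0;
}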
>
>> BR
>> Kassey
>>
>> On 7/22/2022 11:38 PM, Matthew Wilcox wrote:
>>> On Fri, Jul 22, 2022 at 11:08:10PM +0800, Kassey Li wrote:
>>>> By default, page_owner iterates over all pages from min_low_pfn to
>>>> max_pfn; this costs too much time if we only want a specific pfn
>>>> range.
>>>>
>>>> With this patch, the user is allowed to set a pfn range to dump the
>>>> page_owner.
>>>
>>> This is a really bad UI. If two users try to do different ranges at the
>>> same time, it'll go wrong. What use cases are you actually trying to
>>> solve?
>>
>