Message-ID: <20190107102152.d3infdyw3zupu2xj@wfg-t540p.sh.intel.com>
Date: Mon, 7 Jan 2019 18:21:52 +0800
From: Fengguang Wu <fengguang.wu@...el.com>
To: Dave Hansen <dave.hansen@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Linux Memory Management List <linux-mm@...ck.org>,
Yao Yuan <yuan.yao@...el.com>, kvm@...r.kernel.org,
LKML <linux-kernel@...r.kernel.org>, Fan Du <fan.du@...el.com>,
Peng Dong <dongx.peng@...el.com>,
Huang Ying <ying.huang@...el.com>,
Liu Jingqi <jingqi.liu@...el.com>,
Dong Eddie <eddie.dong@...el.com>,
Zhang Yi <yi.z.zhang@...ux.intel.com>,
Dan Williams <dan.j.williams@...el.com>
Subject: Re: [RFC][PATCH v2 11/21] kvm: allocate page table pages from DRAM
On Wed, Jan 02, 2019 at 08:47:25AM -0800, Dave Hansen wrote:
>On 12/26/18 5:14 AM, Fengguang Wu wrote:
>> +static unsigned long __get_dram_free_pages(gfp_t gfp_mask)
>> +{
>> + struct page *page;
>> +
>> + page = __alloc_pages(GFP_KERNEL_ACCOUNT, 0, numa_node_id());
>> + if (!page)
>> + return 0;
>> + return (unsigned long) page_address(page);
>> +}
>
>There seems to be a ton of *policy* baked into these patches. For
>instance: thou shalt not allocate page tables pages from PMEM. That's
>surely not a policy we want to inflict on every Linux user until the end
>of time.
Right. It's a straightforward policy for users that care about performance.
The project is planned in 3 steps; at this moment we are in phase (1):
1) core functionalities, easy to backport
2) upstream-able total solution
3) upstream when API stabilized
The dumb kernel interface /proc/PID/idle_pages enables implementing the
majority of policies in user space. However, for the other smaller
parts, it looks easier to implement an obvious policy first, then
consider more possibilities later.
>I think the more important question is how we can have the specific
>policy that this patch implements, but also leave open room for other
>policies, such as: "I don't care how slow this VM runs, minimize the
>amount of fast memory it eats."
Agreed. I'm open to more approaches. We can treat these patches as a
soliciting version. If anyone sends reasonable improvements, or even a
totally different way of doing it, I'd be happy to incorporate them.
Thanks,
Fengguang