Message-ID: <ce96c9e9-1082-df68-010e-b759d2ede69a@oracle.com>
Date: Tue, 10 Mar 2020 10:38:24 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Michal Hocko <mhocko@...nel.org>, Roman Gushchin <guro@...com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>, linux-mm@...ck.org,
kernel-team@...com, linux-kernel@...r.kernel.org,
Rik van Riel <riel@...riel.com>
Subject: Re: [PATCH v2] mm: hugetlb: optionally allocate gigantic hugepages using cma

On 3/10/20 1:45 AM, Michal Hocko wrote:
> On Mon 09-03-20 17:25:24, Roman Gushchin wrote:
<snip>
>> +early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
>> +
>> +void __init hugetlb_cma_reserve(void)
>> +{
>> +        unsigned long totalpages = 0;
>> +        unsigned long start_pfn, end_pfn;
>> +        phys_addr_t size;
>> +        int nid, i, res;
>> +
>> +        if (!hugetlb_cma_size && !hugetlb_cma_percent)
>> +                return;
>> +
>> +        if (hugetlb_cma_percent) {
>> +                for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn,
>> +                                       NULL)
>> +                        totalpages += end_pfn - start_pfn;
>> +
>> +                size = PAGE_SIZE * (hugetlb_cma_percent * 100 * totalpages) /
>> +                        10000UL;
>> +        } else {
>> +                size = hugetlb_cma_size;
>> +        }
>> +
>> +        pr_info("hugetlb_cma: reserve %llu, %llu per node\n", size,
>> +                size / nr_online_nodes);
>> +
>> +        size /= nr_online_nodes;
>> +
>> +        for_each_node_state(nid, N_ONLINE) {
>> +                unsigned long min_pfn = 0, max_pfn = 0;
>> +
>> +                for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
>> +                        if (!min_pfn)
>> +                                min_pfn = start_pfn;
>> +                        max_pfn = end_pfn;
>> +                }
>
> Do you want to compare the range to the size? Besides that, I believe
> this really needs to be much more careful. I believe you do not want to
> eat a considerable part of the kernel memory, because the resulting
> configuration will really struggle (yes, all the lowmem/highmem
> problems all over again).
Will it struggle any worse than if we allocated the same amount of memory
for gigantic pages as is done today?  Of course, sysadmins may think
reserving memory for CMA is better than pre-allocating it, and end up
reserving a greater amount.
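
For illustration, the range-vs-size comparison suggested above might look
roughly like the helper below.  This is a minimal, untested sketch, not
part of Roman's patch: clamp_hugetlb_cma_size() and the at-most-half-of-
node-memory cap are made up for the example, while for_each_mem_pfn_range()
is the real memblock iterator the patch already uses.

/*
 * Untested sketch: clamp a per-node CMA request against the memory the
 * node actually spans, so the reservation cannot consume an outsized
 * share of kernel memory.  The divisor 2 (at most half of node memory)
 * is an arbitrary placeholder, not an established limit.
 */
static phys_addr_t __init clamp_hugetlb_cma_size(int nid, phys_addr_t size)
{
        unsigned long start_pfn, end_pfn, node_pages = 0;
        phys_addr_t limit;
        int i;

        /* Sum the memory actually present on this node. */
        for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL)
                node_pages += end_pfn - start_pfn;

        limit = (phys_addr_t)node_pages * PAGE_SIZE / 2;
        if (size > limit) {
                pr_warn("hugetlb_cma: node %d: clamping %pa down to %pa\n",
                        nid, &size, &limit);
                size = limit;
        }
        return size;
}

The per-node size computed in hugetlb_cma_reserve() would then be passed
through a check like this before the CMA area is declared; whether half of
node memory is the right cap is of course a policy question.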
--
Mike Kravetz