Message-ID: <652bb498-8393-4738-a987-9bed31786261@oracle.com>
Date: Tue, 22 May 2018 13:35:49 -0700
From: Mike Kravetz <mike.kravetz@...cle.com>
To: Reinette Chatre <reinette.chatre@...el.com>,
Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, linux-api@...r.kernel.org
Cc: Michal Hocko <mhocko@...nel.org>,
Christopher Lameter <cl@...ux.com>,
Guy Shattah <sguy@...lanox.com>,
Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
Michal Nazarewicz <mina86@...a86.com>,
David Nellans <dnellans@...dia.com>,
Laura Abbott <labbott@...hat.com>, Pavel Machek <pavel@....cz>,
Dave Hansen <dave.hansen@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH v2 3/4] mm: add find_alloc_contig_pages() interface
On 05/22/2018 09:41 AM, Reinette Chatre wrote:
> On 5/21/2018 4:48 PM, Mike Kravetz wrote:
>> On 05/21/2018 01:54 AM, Vlastimil Babka wrote:
>>> On 05/04/2018 01:29 AM, Mike Kravetz wrote:
>>>> +/**
>>>> + * find_alloc_contig_pages() -- attempt to find and allocate a contiguous
>>>> + * range of pages
>>>> + * @nr_pages: number of pages to find/allocate
>>>> + * @gfp: gfp mask used to limit search as well as during compaction
>>>> + * @nid: target node
>>>> + * @nodemask: mask of other possible nodes
>>>> + *
>>>> + * Pages can be freed with a call to free_contig_pages(), or by manually
>>>> + * calling __free_page() for each page allocated.
>>>> + *
>>>> + * Return: pointer to 'order' pages on success, or NULL if not successful.
>>>> + */
>>>> +struct page *find_alloc_contig_pages(unsigned long nr_pages, gfp_t gfp,
>>>> + int nid, nodemask_t *nodemask)
>>>> +{
>>>> + unsigned long i, alloc_order, order_pages;
>>>> + struct page *pages;
>>>> +
>>>> + /*
>>>> + * Underlying allocators perform page order sized allocations.
>>>> + */
>>>> + alloc_order = get_count_order(nr_pages);
>>>
>>> So it takes an arbitrary nr_pages but converts it to an order anyway? I
>>> think that's rather suboptimal and wasteful... e.g. a range could be skipped
>>> because some of the pages added by rounding cannot be migrated away.
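(To put a number on that rounding: the 2816KB region size that comes up later
in this mail is 704 4KB pages, and get_count_order(704) is 10, so we would go
looking for a 1024 page (4MB) range. All 320 pages added by the rounding must
be migratable as well, or the range is skipped.)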
>>
>> Yes. My idea with this series was to use the existing allocators, which are
>> all order based. Let me think about how to do allocations for an arbitrary
>> number of pages.
>> - For sizes less than MAX_ORDER we rely on the buddy allocator, so we are
>>   pretty much stuck with order sized allocations. However, allocations of
>>   this size are not really interesting as you can call the existing routines
>>   directly.
>> - For sizes greater than MAX_ORDER, we know that the allocation size will
>>   be at least pageblock sized. So, the isolate/migrate scheme can still
>>   be used for full pageblocks. We can then use direct migration for the
>>   remaining pages. This does complicate things a bit.
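(As a concrete example of the second case: with 4KB pages and 2MB pageblocks,
as on x86_64, a 5632KB request would be two full pageblocks handled by the
isolate/migrate scheme plus a 1536KB remainder handled by direct migration.)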
>>
>> I'm guessing that most (?all?) allocations will be order based. The use
>> cases I am aware of (hugetlbfs, Intel Cache Pseudo-Locking, RDMA) are all
>> order based. However, as commented on the previous version, taking an
>> arbitrary nr_pages makes the interface more future proof.
>>
>
> I noticed this Cache Pseudo-Locking statement and would like to clarify.
> I have not been following this thread in detail so I would like to
> apologize first if my comments are out of context.
>
> Currently the Cache Pseudo-Locking allocations are order based because I
> assumed it was required by the allocator. The contiguous regions needed
> by Cache Pseudo-Locking will not always be order based - instead their
> sizes are based on the granularity of the cache allocation. One example is
> a platform with a 55MB L3 cache that can be divided into 20 equal portions.
> To support Cache Pseudo-Locking on this platform we need to be able to
> allocate contiguous regions at increments of 2816KB (the size of each
> portion). On this example platform the regions needed would thus be
> 2816KB, 5632KB, 8448KB, etc.
Thank you, Reinette. I was not aware of these details. Yours is the most
concrete new use case.
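In page terms those sizes are 704, 1408, and 2112 4KB pages. None of them is
a power of two, so with the current rounding they would become 1024, 2048,
and 4096 page allocations; the 8448KB request would tie up 16MB.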
This certainly makes more of a case for arbitrarily sized allocations.
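To sketch what an arbitrarily sized interface could do for the tail (untested,
and the helper name is made up), we could keep the order based search and then
hand back the pages we rounded up to, much like alloc_pages_exact() does for
the buddy allocator:

/*
 * Rough sketch only: trim an order sized contiguous allocation down
 * to nr_pages by freeing the tail pages one at a time.  Relies on
 * the pages being individually freeable with __free_page(), as noted
 * in the find_alloc_contig_pages() comment above.
 */
static struct page *contig_pages_trim(struct page *pages,
				      unsigned long nr_pages,
				      unsigned long alloc_order)
{
	unsigned long order_pages = 1UL << alloc_order;
	unsigned long pfn = page_to_pfn(pages);
	unsigned long i;

	/* Give back the pages added by rounding up to alloc_order. */
	for (i = nr_pages; i < order_pages; i++)
		__free_page(pfn_to_page(pfn + i));

	return pages;
}

That still pays the transient cost of migrating the full covering order during
the search, so for the larger sizes the pageblock plus direct migration
approach above would waste less.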
--
Mike Kravetz