Message-ID: <4824D141.9060609@freescale.com>
Date: Fri, 09 May 2008 17:33:37 -0500
From: Timur Tabi <timur@...escale.com>
To: Andi Kleen <andi@...stfloor.org>
CC: lkml <linux-kernel@...r.kernel.org>
Subject: Re: Calling free_pages on part of the memory returned by get_free_pages?
Andi Kleen wrote:
> Timur Tabi <timur@...escale.com> writes:
>
>> According to LDD3, if I call get_free_pages() to allocate X bytes, I have to
>> free all of those pages with free_pages(). The VM internals are a little bit
>> over my head, but I looked at the code and I didn't see why that is a requirement.
>>
>> For example, let's say I want to allocate 6MB of physically-contiguous memory.
>> Say I call x = get_free_pages(11) to get 8MB. What happens if I then do
>> "free_pages(x + 6 * 1024 * 1024, 9)" to release the unused 2MB at the end?
>>
>> I remember doing this on the 2.4 kernel, and it never gave me any problems.
>
> It is ok, as long as you don't use compound pages (__GFP_COMP) and you call
> split_page() first to fix up the reference counts.
>
> Also, you're doing this to save memory, right? The large system hash code does
> it too, and I used to do it in a 2.4 change with an alloc_pages_exact() helper
> which never made it into 2.6.
>
> If it's reasonably common, we should re-add an alloc/get_pages_exact() helper to
> make this pattern clear and easier to use.
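
So if I understand correctly, the pattern would be roughly this?  Untested
sketch, assuming 4KB pages (so order 11 = 8MB); the function name is just for
illustration:

#include <linux/mm.h>
#include <linux/gfp.h>

/* Allocate 8MB without __GFP_COMP, keep the first 6MB, free the rest. */
static void *alloc_6mb_contig(void)
{
	unsigned int order = 11;		/* 2^11 pages = 8MB with 4KB pages */
	unsigned long keep = 6 * 1024 * 1024;	/* we only need 6MB */
	unsigned long addr, p;

	addr = __get_free_pages(GFP_KERNEL, order);
	if (!addr)
		return NULL;

	/* Give each constituent page its own reference count. */
	split_page(virt_to_page(addr), order);

	/* Hand the unused 2MB tail back to the buddy allocator, page by page. */
	for (p = addr + keep; p < addr + (PAGE_SIZE << order); p += PAGE_SIZE)
		free_page(p);

	return (void *)addr;
}

If that's right, I assume the caller also has to free the 6MB it keeps one page
at a time, since after split_page() each page carries its own reference count
and the original order-11 block no longer exists as a single unit.
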
Follow-up question:
Say I allocate 8MB with __get_free_pages() and then use multiple free_pages() /
split_page() calls to free the last 3MB. Will the de-allocated blocks be merged
into larger-order chunks, if they're contiguous? That is, if I repeat this process:
1. Allocate 8MB
2. Free 3MB
3. Free 5MB
over and over again, will I totally fragment memory to the point that 8MB
allocations will never work again?
--
Timur Tabi
Linux kernel developer at Freescale