Message-ID: <464F5FE4.2010607@cosmosbay.com>
Date: Sat, 19 May 2007 22:36:52 +0200
From: Eric Dumazet <dada1@...mosbay.com>
To: David Miller <davem@...emloft.net>, akpm@...ux-foundation.org
CC: dhowells@...hat.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for
non power-of-two bucketsize
David Miller wrote:
> From: Eric Dumazet <dada1@...mosbay.com>
> Date: Sat, 19 May 2007 20:07:11 +0200
>
>> Maybe David has an idea how this can be done properly ?
>>
>> ref : http://marc.info/?l=linux-netdev&m=117706074825048&w=2
>
> You need to use __GFP_COMP or similar to make this splitting+freeing
> thing work.
>
> Otherwise the individual pages don't have page references, only
> the head page of the high-order page will.
>
Oh thanks David for the hint.
I added a split_page() call and it seems to work now.
[PATCH] MM : alloc_large_system_hash() can free some memory for non
power-of-two bucketsize
alloc_large_system_hash() is called at boot time to allocate space for several
large hash tables.
The TCP hash table was recently changed, and its bucketsize is no longer a
power of two.
On most setups, alloc_large_system_hash() allocates one big page (order > 0)
with __get_free_pages(GFP_ATOMIC, order). This single high-order page has a
power-of-two size, bigger than the needed size.
After splitting the high-order page with split_page(), we can free all the
pages that won't be used by the hash table.
On a 1GB i386 machine, this patch saves 128 KB of LOWMEM.
TCP established hash table entries: 32768 (order: 6, 393216 bytes)
Signed-off-by: Eric Dumazet <dada1@...mosbay.com>
Attachment: "alloc_large.patch" of type "text/plain" (824 bytes)