Message-ID: <472A5A7A.6020508@cosmosbay.com>
Date: Fri, 02 Nov 2007 00:00:10 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Christoph Lameter <clameter@....com>
CC: David Miller <davem@...emloft.net>, akpm@...ux-foundation.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
mathieu.desnoyers@...ymtl.ca, penberg@...helsinki.fi
Subject: Re: [patch 0/7] [RFC] SLUB: Improve allocpercpu to reduce per cpu
access overhead
Christoph Lameter wrote:
> On Thu, 1 Nov 2007, David Miller wrote:
>
>> From: Christoph Lameter <clameter@....com>
>> Date: Thu, 1 Nov 2007 15:15:39 -0700 (PDT)
>>
>>> After boot is complete we allow the reduction of the size of the per-cpu
>>> areas. Let's say we only need 128k per cpu. Then the remaining pages will
>>> be returned to the page allocator.
>> You don't know how much you will need. I exhausted the limit on
>> sparc64 very late in the boot process when the last few userland
>> services were starting up.
>
> Well, you would be able to specify how much will remain. If not, it will
> just keep the 2M reserve around.
>
>> And if I subsequently bring up 100,000 IP tunnels, it will exhaust the
>> per-cpu allocation area.
>
> Each tunnel needs 4 bytes per cpu?
Well, if we move last_rx to a percpu var, we need 8 bytes of per-cpu space per
net_device :)
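
For illustration only, here is a minimal sketch (not from any posted patch) of
what such a per-cpu last_rx could look like with the existing
alloc_percpu()/per_cpu_ptr() interface; the my_netdev_stats structure and the
helper names are made up for this example:

/*
 * Sketch: a per-cpu last_rx timestamp, costing sizeof(unsigned long)
 * (8 bytes on 64-bit) of per-cpu space per net_device.
 */
#include <linux/percpu.h>
#include <linux/jiffies.h>
#include <linux/errno.h>

struct my_netdev_stats {		/* hypothetical container */
	unsigned long *last_rx;		/* per-cpu copies */
};

static int my_stats_init(struct my_netdev_stats *stats)
{
	stats->last_rx = alloc_percpu(unsigned long);
	if (!stats->last_rx)
		return -ENOMEM;
	return 0;
}

/*
 * RX path: touch only this CPU's copy, so there is no cache line
 * bouncing between CPUs.  Runs in softirq context, so preemption
 * is already disabled and smp_processor_id() is safe.
 */
static void my_note_rx(struct my_netdev_stats *stats)
{
	*per_cpu_ptr(stats->last_rx, smp_processor_id()) = jiffies;
}

static void my_stats_free(struct my_netdev_stats *stats)
{
	free_percpu(stats->last_rx);
}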
>
>> You have to make it fully dynamic, there is no way around it.
>
> Na. Some reasonable upper limit needs to be set. If we set that to, say,
> 32 megabytes and do the virtual mapping, then we can just populate the first
> 2M and only allocate the remainder if we need it. Then we need to rely on
> Mel's defrag stuff to defrag memory if we need it.
If a 2MB page is not available, could we revert to using 4KB pages (like the
vmalloc stuff), paying an extra runtime overhead of course?
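
A rough sketch of that fallback idea follows (not from the posted patches: the
pcpu_populate_chunk() helper and the fixed 2MB chunk size are made up here, and
vmap() merely stands in for mapping into the reserved per-cpu virtual area;
teardown is omitted):

/*
 * Try to back a chunk of the reserved per-cpu area with one physically
 * contiguous 2MB allocation; if that fails, fall back to stitching
 * individual 4KB pages together with a virtual mapping, vmalloc-style.
 */
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/gfp.h>
#include <linux/slab.h>

#define PCPU_CHUNK_SIZE		(2UL << 20)			/* 2MB */
#define PCPU_CHUNK_PAGES	(PCPU_CHUNK_SIZE >> PAGE_SHIFT)

static void *pcpu_populate_chunk(void)
{
	struct page *huge, **pages;
	void *addr;
	int i;

	/* First choice: one contiguous 2MB block, mapped by the huge page. */
	huge = alloc_pages(GFP_KERNEL | __GFP_NOWARN,
			   get_order(PCPU_CHUNK_SIZE));
	if (huge)
		return page_address(huge);

	/* Fallback: 4KB pages, at the cost of extra TLB pressure. */
	pages = kmalloc(PCPU_CHUNK_PAGES * sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < PCPU_CHUNK_PAGES; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto fail;
	}

	addr = vmap(pages, PCPU_CHUNK_PAGES, VM_MAP, PAGE_KERNEL);
	if (!addr)
		goto fail;
	kfree(pages);	/* the mapping now lives in the page tables */
	return addr;

fail:
	while (--i >= 0)
		__free_page(pages[i]);
	kfree(pages);
	return NULL;
}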