Message-ID: <4738B437.1000204@cosmosbay.com>
Date: Mon, 12 Nov 2007 21:14:47 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: "Luck, Tony" <tony.luck@...el.com>
CC: Christoph Lameter <clameter@....com>,
Herbert Xu <herbert@...dor.apana.org.au>,
David Miller <davem@...emloft.net>, akpm@...ux-foundation.org,
linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
mathieu.desnoyers@...ymtl.ca, penberg@...helsinki.fi
Subject: Re: [patch 0/7] [RFC] SLUB: Improve allocpercpu to reduce per cpu
access overhead
Luck, Tony wrote:
>>> Ahh so the need to be able to expand per cpu memory storage on demand
>>> is not as critical as we thought.
>>>
>> Yes, but still desirable for future optimizations.
>>
>> For example, I do think using per-cpu memory storage for the net_device
>> refcnt & last_rx could give us some speedups.
>
> We do want to keep a very tight handle on bloat in per-cpu
> allocations. By definition the total allocation is multiplied
> by the number of cpus. Only ia64 has outrageous numbers of
> cpus in a single system image today ... but the trend in
> multi-core chips looks to have a Moore's law arc to it, so
> everyone is going to be looking at lots of cpus before long.
>
I don't think this is a problem. CPU counts and RAM sizes are related, even if
Moore didn't predict it;
nobody wants to build/ship a 4096-cpu machine with only 256 MB of RAM inside.
Or call it a GPU and don't expect it to run linux :)
99.9% of linux machines running on earth have fewer than 8 cpus and fewer than
1000 ethernet/network devices.
If we do increase the number of cpus on a machine, the limiting factor is that
cpus have to continually exchange, over the memory bus, the highly touched
cache lines that contain refcounters or stats.
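
For illustration, here is a minimal user-space sketch of why per-cpu counters
sidestep that bus traffic (hypothetical names, NR_CPUS assumed; in-kernel code
would go through alloc_percpu()/per_cpu_ptr() instead): each cpu only ever
dirties a cache line it owns, and an occasional reader pays the cost of
summing the slots.

/*
 * Hypothetical user-space sketch (not the kernel percpu API) of a
 * per-cpu counter: each cpu increments a slot in its own cache line,
 * so that line never bounces between cpus; a (rare) reader sums
 * all the slots.
 */
#define CACHE_LINE_SIZE	64
#define NR_CPUS		8	/* assumed for the sketch */

struct counter_slot {
	long count;
} __attribute__((aligned(CACHE_LINE_SIZE)));	/* one line per cpu */

static struct counter_slot slots[NR_CPUS];

/* Fast path: writes only the calling cpu's cache line. */
static inline void counter_inc(int cpu)
{
	slots[cpu].count++;
}

/* Slow path: walk every slot; fine as long as reads are rare. */
static long counter_read(void)
{
	long sum = 0;
	int i;

	for (i = 0; i < NR_CPUS; i++)
		sum += slots[i].count;
	return sum;
}

The bloat Tony worries about is visible even in this toy: each counter costs
NR_CPUS * CACHE_LINE_SIZE bytes, so the footprint multiplies with the cpu
count.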