Message-ID: <41b516cb0608040834o1d433f23v2f2ba1a1b05ccbc6@mail.gmail.com>
Date: Fri, 4 Aug 2006 08:34:46 -0700
From: "Chris Leech" <chris.leech@...il.com>
To: "Evgeniy Polyakov" <johnpol@....mipt.ru>
Cc: "Herbert Xu" <herbert@...dor.apana.org.au>, arnd@...dnet.de,
olel@....pl, linux-kernel@...r.kernel.org, netdev@...r.kernel.org
Subject: Re: problems with e1000 and jumboframes
On 8/3/06, Evgeniy Polyakov <johnpol@....mipt.ru> wrote:
> On Fri, Aug 04, 2006 at 03:59:37PM +1000, Herbert Xu (herbert@...dor.apana.org.au) wrote:
> > Interesting. Could you guys post figures on alloc_page speed vs. kmalloc?
>
> They probably measured kmalloc cache access, which only falls back to
> alloc_pages when the cache is refilled, so it will be faster for some
> short period of time, but in general (especially for such big-sized
> allocations) it is essentially the same.
I think you're right about that. In particular, I think Jesse was
looking at the impact that changing the driver's buffer allocation
method would have on 1500-byte MTU users. With a running network
driver you should see lots of fixed-size allocations hitting the slab
cache, and only occasionally falling through to alloc_pages. If you
replace that with a call to alloc_pages for every packet received,
it's a performance hit.
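
To make the contrast concrete, here is a rough sketch of the two
receive-buffer allocation paths being compared (just an illustration,
not the actual e1000 code; bufsz stands in for whatever buffer length
the driver computes):

        /* slab path: one linear buffer per packet; dev_alloc_skb()
         * kmallocs skb->data, so most allocations are slab cache hits
         * and only a cache refill falls through to alloc_pages(). */
        struct sk_buff *skb = dev_alloc_skb(bufsz + NET_IP_ALIGN);

        /* page path: one alloc_page() per receive buffer, attached to
         * the skb as a paged fragment instead of linear data. */
        struct page *page = alloc_page(GFP_ATOMIC);
        if (skb && page)
                skb_fill_page_desc(skb, 0, page, 0, PAGE_SIZE);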
So how many skb allocation schemes do you code into a single driver?
Kmalloc everything, page-alloc everything, or a combination of kmalloc
and page buffers for hardware that does header split? That's three
versions of the driver's receive processing and skb allocation that
need to be maintained.
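
For what it's worth, the combination case would look roughly like this
(a hedged sketch only; HDR_LEN and alloc_split_rx_skb() are made-up
names, not anything in the current driver):

        #include <linux/skbuff.h>
        #include <linux/netdevice.h>

        #define HDR_LEN 256     /* illustrative header-buffer size */

        /* "combination" scheme for header-split hardware: a small
         * kmalloc'd linear area for protocol headers plus a whole
         * page for the payload. */
        static struct sk_buff *alloc_split_rx_skb(struct net_device *dev)
        {
                struct sk_buff *skb;
                struct page *page;

                /* linear part comes from the slab via dev_alloc_skb() */
                skb = dev_alloc_skb(HDR_LEN + NET_IP_ALIGN);
                if (!skb)
                        return NULL;
                skb_reserve(skb, NET_IP_ALIGN);
                skb->dev = dev;

                /* payload part is a whole page attached as a fragment;
                 * the driver would fold the DMA'd length into skb->len
                 * and skb->data_len once the hardware reports it. */
                page = alloc_page(GFP_ATOMIC);
                if (!page) {
                        dev_kfree_skb(skb);
                        return NULL;
                }
                skb_fill_page_desc(skb, 0, page, 0, PAGE_SIZE);

                return skb;
        }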
> > Also, getting memory slower is better than not getting it at all :)
Yep.
- Chris