Message-ID: <1366925495.2594.18.camel@bwh-desktop.uk.solarflarecom.com>
Date: Thu, 25 Apr 2013 22:31:35 +0100
From: Ben Hutchings <bhutchings@...arflare.com>
To: Javier Domingo <javierdo1@...il.com>
CC: <netdev@...r.kernel.org>
Subject: Re: DMA allocation driver defined limit?
On Thu, 2013-04-25 at 23:15 +0200, Javier Domingo wrote:
> > I don't see what the network card's memory has got to do with it. I
> > would expect that you can allocate and DMA-map much more memory (number
> > of rings * number of descriptors per ring * maximum length of
> > descriptor).
>
> So the limit is on the number of skbuffs, and it is self-imposed by the card.
>
> > Before you deliver an skb up the stack, you must DMA-unmap the header
> > area. The stack then takes care of limiting the amount of memory in
> > use.
>
> I have found that the limit on rings is consistent, as it only
> allocates another area when the previous one is unmapped.
>
> One last question: if netdev_budget is 300, why is the DMA ring size
> 256 by default in e1000? Shouldn't it be greater than 300, so that if
> there are enough packets in one softirq, it could capture them all?
There is no connection between these numbers.  Firstly, NAPI doesn't
re-enable interrupts on net devices after its budget is exhausted
(indeed, it doesn't know *how* to); it merely allows other soft IRQ
handlers to run.  Secondly, drivers should refill their RX rings well
before they are completely empty, and they may therefore handle more
than an entire ring-full of completions before breaking out of NAPI
polling.
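
To make the interplay concrete, a skeleton NAPI poll handler looks
roughly like the sketch below.  This is not the e1000 code: struct
my_adapter, struct my_rx_ring and the my_*() helpers stand in for the
driver-specific parts, but dma_unmap_single(), napi_gro_receive() and
napi_complete() are the standard kernel calls.  Note how each buffer is
unmapped before its skb is handed up the stack, the ring is refilled
before it runs dry, and interrupts are only re-enabled when the handler
finishes under budget.

/* Sketch only; my_*() helpers and structures are hypothetical. */
static int my_napi_poll(struct napi_struct *napi, int budget)
{
	struct my_adapter *adap = container_of(napi, struct my_adapter, napi);
	struct my_rx_ring *ring = &adap->rx_ring;
	int done = 0;

	while (done < budget && my_rx_desc_ready(ring)) {
		struct my_rx_buffer *buf = &ring->buf[ring->next_to_clean];
		struct sk_buff *skb = buf->skb;

		/* Unmap before the skb goes up the stack. */
		dma_unmap_single(&adap->pdev->dev, buf->dma,
				 MY_RX_BUF_LEN, DMA_FROM_DEVICE);
		skb_put(skb, my_rx_desc_len(ring));
		skb->protocol = eth_type_trans(skb, adap->netdev);
		napi_gro_receive(napi, skb);

		ring->next_to_clean = (ring->next_to_clean + 1) % ring->count;
		done++;

		/* Refill in batches so the ring never runs completely
		 * dry, even if more than a ring-full of completions is
		 * handled in one polling run. */
		if (my_rx_ring_unused(ring) >= MY_RX_REFILL_THRESH)
			my_refill_rx_ring(ring);
	}

	if (done < budget) {
		/* Finished under budget: leave polling mode and let the
		 * driver re-enable the device's RX interrupt.  NAPI
		 * itself never does this. */
		napi_complete(napi);
		my_enable_rx_irq(adap);
	}

	/* done == budget: just return; we stay scheduled for polling
	 * and interrupts remain off. */
	return done;
}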
Ben.
--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.