Message-ID: <470AA48B.4050005@opengridcomputing.com>
Date: Mon, 08 Oct 2007 16:43:39 -0500
From: Steve Wise <swise@...ngridcomputing.com>
To: Ben Greear <greearb@...delatech.com>
CC: Rick Jones <rick.jones2@...com>, hadi@...erus.ca,
Evgeniy Polyakov <johnpol@....mipt.ru>, netdev@...r.kernel.org,
Robert Olsson <Robert.Olsson@...a.slu.se>
Subject: Re: pktgen question
Ben Greear wrote:
> Rick Jones wrote:
>>>> Perf-wise, you could clone the skbs up front, then deliver them to
>>>> the nic in a tight loop. This would mitigate the added overhead
>>>> introduced by calling skb_clone() in the loop doing transmits...
>>>
>>> That only works if you are sending a small number of skbs. You can't
>>> pre-clone several minutes worth of 10Gbe traffic
>>> with any normal amount of RAM.
>>
>> Does pktgen really need to allocate anything more than some smallish
>> fraction more than the depth of the driver's transmit queue?
>
> If you want to send sustained high rates of traffic, for more than
> just a trivial amount of time, then you either have to play the current
> trick with the skb_get(), or you have to allocate a real packet each time
> (maybe with skb_clone() or similar, but it's still more overhead than
> the skb_get
> which only bumps a reference count.)
>
> I see no other way, but if you can think of one, please let me know.
>
You can keep freed skbs that were cloned on a free list, then reuse
them once freed. You can detect when the driver frees them by adding a
destructor function to the skb. So what will happen is the set of cloned
skbs needed will eventually settle down to a constant amount, and that
amount will be based on the latency involved in transmitting a single
skb. And it should be bounded by the max txq depth. Yes? (or am I all
wet :)
So you would pay the overhead of cloning only until you hit this steady
state.
Whatchathink?
> Thanks,
> Ben
>