Message-ID: <20070511090138.GA24695@2ka.mipt.ru>
Date: Fri, 11 May 2007 13:01:38 +0400
From: Evgeniy Polyakov <johnpol@....mipt.ru>
To: Krishna Kumar2 <krkumar2@...ibm.com>
Cc: Ian McDonald <ian.mcdonald@...di.co.nz>, netdev@...r.kernel.org,
Rick Jones <rick.jones2@...com>,
Vlad Yasevich <vladislav.yasevich@...com>
Subject: Re: [RFC] New driver API to speed up small packets xmits
On Fri, May 11, 2007 at 10:34:22AM +0530, Krishna Kumar2 (krkumar2@...ibm.com) wrote:
> Not combining packets; I am sending them out in the same sequence they
> were queued. If the xmit failed, the driver's new API returns the skb
> that failed to be sent. This skb and all other linked skbs are requeued
> in reverse order (LIFO?) until the next time transmission is tried. I
> see that sometimes I can send tx_queue_len packets in one shot and all
> succeed. But the downside is that in the failure case, the packets have
> to be requeued.
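For concreteness, a rough sketch of how such a batched xmit API might
look; every name below is hypothetical, this is not the actual patch:

	/*
	 * Hypothetical batched xmit: the driver walks a chain of linked
	 * skbs and returns the first one it failed to place on the tx
	 * ring; the caller then requeues that skb and everything linked
	 * after it. Returns NULL if the whole batch was accepted.
	 */
	struct sk_buff *hard_start_xmit_batch(struct sk_buff *skb_list,
					      struct net_device *dev)
	{
		struct sk_buff *skb = skb_list;

		while (skb) {
			if (place_on_tx_ring(dev, skb)) /* made-up helper */
				return skb;
			skb = skb->next;
		}
		return NULL;
	}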
And what if you have thousands of packets queued and the first one
fails? Requeuing all the rest one by one is not a solution. If it is
done under a heavy lock (especially with IRQs disabled), it becomes a
disaster.
I thought of a somewhat different approach: the driver maintains its own
queue (or has access to the stack's) and only takes the lock to dequeue
a packet. If a transmit fails, nothing is requeued; the same packet is
retried until the transmit completes. If a number of transmits fail in a
row, the driver declares itself broken and/or stops the queue. Thus one
can set up several descriptors in one go and do it without any locks.
The stack calls the driver's xmit function, which essentially just sets
a bit saying that new packets are available, but the driver must have
access to the qdisc.
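A minimal sketch of what I mean; all names here are made up, and it
assumes a single tx worker per device:

	#define TX_MAX_RETRIES	16

	/* Tx worker: the lock is held only to dequeue a packet. */
	static void my_tx_work(struct my_dev *mdev)
	{
		for (;;) {
			struct sk_buff *skb = mdev->pending;

			if (!skb) {
				spin_lock(&mdev->tx_lock);
				skb = __skb_dequeue(&mdev->tx_queue);
				spin_unlock(&mdev->tx_lock);
				if (!skb)
					return;
				mdev->pending = skb;
				mdev->fail_count = 0;
			}

			/* Nothing is requeued: retry the same packet. */
			if (place_on_tx_ring(mdev, skb)) { /* made-up helper */
				if (++mdev->fail_count >= TX_MAX_RETRIES)
					netif_stop_queue(mdev->netdev);
				return;
			}
			mdev->pending = NULL;
		}
	}

	/* Stack entry point: just queue the skb and kick the worker. */
	static int my_hard_start_xmit(struct sk_buff *skb,
				      struct net_device *dev)
	{
		struct my_dev *mdev = netdev_priv(dev);

		spin_lock(&mdev->tx_lock);
		__skb_queue_tail(&mdev->tx_queue, skb);
		spin_unlock(&mdev->tx_lock);

		set_bit(TX_PENDING, &mdev->flags);
		my_kick_tx(mdev);	/* made-up: wake the tx worker */
		return NETDEV_TX_OK;
	}

The point is that the only serialized step is the dequeue: descriptor
setup happens outside the lock, and a failure never walks back through
the queue.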
If the e1000 driver were not so... so 'uncommon' compared to the other
small GigE drivers, I would try to cook up a patch, but I will not with
e1000 :)
--
Evgeniy Polyakov