Message-Id: <20080620.162029.63979841.davem@davemloft.net>
Date: Fri, 20 Jun 2008 16:20:29 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: krkumar2@...ibm.com
Cc: mchan@...adcom.com, netdev@...r.kernel.org,
vinay@...ux.vnet.ibm.com
Subject: Re: [PATCH 3/3]: tg3: Manage TX backlog in-driver.
From: Krishna Kumar2 <krkumar2@...ibm.com>
Date: Fri, 20 Jun 2008 16:18:35 +0530
> Couple of comments:
>
> 1. The modified driver has a backlog of up to tx_queue_len skbs,
>    compared to the unmodified driver, which had tx_queue_len + q->limit.
>    Won't this result in a performance hit, since packet drops will
>    take place earlier?
I doubt it matters when tx_queue_len is on the order of 1000
packets, as it currently is for Ethernet devices. For single-stream
TCP tests I've never seen the backlog climb past 128 packets or so.
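For reference, the enqueue check being discussed looks roughly like
this (a sketch only, assuming the patch caps the in-driver tx_backlog
list at dev->tx_queue_len; tg3_queue_skb is a hypothetical name, not
code from the patch):

	/* With the in-driver backlog, the only buffering beyond the TX
	 * ring is tx_backlog, capped at tx_queue_len; previously the
	 * qdisc added q->limit of buffering on top of the ring. */
	static int tg3_queue_skb(struct tg3 *tp, struct sk_buff *skb)
	{
		if (skb_queue_len(&tp->tx_backlog) >= tp->dev->tx_queue_len) {
			/* This is the point where drops now happen earlier. */
			dev_kfree_skb_any(skb);
			tp->dev->stats.tx_dropped++;
			return NET_XMIT_DROP;
		}
		__skb_queue_tail(&tp->tx_backlog, skb);
		return NET_XMIT_SUCCESS;
	}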
> 2. __tg3_xmit_backlog() should check that it does not run too long.
>    This also means calling netif_schedule() if tx_backlog is !empty,
>    to avoid packets rotting in the backlog queue.
I'm not so sure this is an issue in practice. We can measure
it later to make sure.
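One way to bound it, as a sketch (the __tg3_xmit_backlog name comes
from the patch under discussion, but this body, the quota, and the
tg3_tx_ring_has_room helper are illustrative assumptions, not the
patch's actual code):

	/* Drain at most a fixed quota per run, then reschedule if
	 * anything is left so packets cannot rot in tx_backlog. */
	static void __tg3_xmit_backlog(struct tg3 *tp)
	{
		int quota = 64;		/* arbitrary illustrative budget */
		struct sk_buff *skb;

		while (quota-- && (skb = __skb_dequeue(&tp->tx_backlog))) {
			if (!tg3_tx_ring_has_room(tp, skb)) {	/* hypothetical helper */
				/* No ring space; put it back and stop. */
				__skb_queue_head(&tp->tx_backlog, skb);
				break;
			}
			/* ... place skb on the hardware TX ring ... */
		}

		if (!skb_queue_empty(&tp->tx_backlog))
			netif_schedule(tp->dev);	/* finish the rest later */
	}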
The heuristics in the driver only wake things up once the wakeup
threshold number of slots becomes available. So I don't think we
ever batch up more than a bounded amount of work.
But I agree it is important to consider this.
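(For context, the wake heuristic referred to here is the
TG3_TX_WAKEUP_THRESH check in tg3's TX completion path, a quarter of
the ring by default; abbreviated sketch, locking omitted:)

	/* In tg3_tx(): only wake the queue once at least the wakeup
	 * threshold (tx_pending / 4 by default) of descriptors is
	 * free, so each wakeup has a bounded batch of slots to fill. */
	if (netif_queue_stopped(tp->dev) &&
	    tg3_tx_avail(tp) > TG3_TX_WAKEUP_THRESH(tp))
		netif_wake_queue(tp->dev);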