Message-ID: <C27F8246C663564A84BB7AB343977242019B6FEC8F@IRVEXCHCCR01.corp.ad.broadcom.com>
Date: Sat, 21 Jun 2008 10:02:03 -0700
From: "Michael Chan" <mchan@...adcom.com>
To: "'David Miller'" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
cc: "vinay@...ux.vnet.ibm.com" <vinay@...ux.vnet.ibm.com>,
"krkumar2@...ibm.com" <krkumar2@...ibm.com>
Subject: Re: [PATCH 3/3]: tg3: Manage TX backlog in-driver.
David Miller wrote:
> + /* This is a deadlock breaker. tg3_tx() updates the consumer
> + * index, then checks the tx_backlog for emptiness. It also
> + * tries to mitigate work by only flushing the backlog when at
> + * least a certain percentage of space is available. Those
> + * tests in tg3_tx() run lockless.
> + *
> + * Here, we make the two primary memory operations in the
> + * reverse order. The idea is to make sure that one of these
> + * two code paths will process the backlog no matter what the
> + * order of their relative execution might be.
> + *
> + * In short:
> + *
> + * tg3_tx() --> tp->tx_cons = foo; test skb_queue_empty()
> + * tg3_start_xmit() --> __skb_queue_tail(); test tp->tx_cons
> + */
> + if (tg3_tx_avail(tp) > TG3_TX_WAKEUP_THRESH(tp))
> + __tg3_xmit_backlog(tp);
> +}
>
I'm thinking that we should use a different wakeup threshold here
than the one in tg3_tx(). If the two threads test against the same
threshold, it increases the likelihood that they will find the
condition to be true at the same time. That contention will
cause tg3_tx() to spin longer on the tx_lock, because
tg3_start_xmit() will call __tg3_xmit_backlog() and hold the
tx_lock longer.

Since 99% of the wakeups should be done in tg3_tx(), we should
increase the threshold here to perhaps 3/4 of the ring.