Message-ID: <1401465492.3645.122.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Fri, 30 May 2014 08:58:12 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: David Laight <David.Laight@...LAB.COM>
Cc: "'fugang.duan@...escale.com'" <fugang.duan@...escale.com>,
"ezequiel.garcia@...e-electrons.com"
<ezequiel.garcia@...e-electrons.com>,
"Frank.Li@...escale.com" <Frank.Li@...escale.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"shawn.guo@...aro.org" <shawn.guo@...aro.org>,
"bhutchings@...arflare.com" <bhutchings@...arflare.com>,
"stephen@...workplumber.org" <stephen@...workplumber.org>
Subject: RE: [PATCH v1 4/6] net: fec: Increase buffer descriptor entry number
On Fri, 2014-05-30 at 15:34 +0000, David Laight wrote:
> Software TSO generates lots of separate ethernet frames, there is no
> absolute requirement to be able to put all of them into the tx ring at once.
Yes, there is an absolute requirement. It is the driver's responsibility to
stop the queue if the available slots in the TX ring would not allow the
following packet to be sent.
That's why a TSO emulation needs to set gso_max_segs to some sane value
(sfc sets it to 100).
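A minimal sketch of that cap (the FEC_MAX_TSO_SEGS name and the helper are
only illustrative; sfc does the equivalent with EFX_TSO_MAX_SEGS = 100):

	#include <linux/netdevice.h>

	/* Cap the number of segments the stack may put into one GSO packet,
	 * so a single software-TSO skb can never need more TX descriptors
	 * than the ring provides.
	 */
	#define FEC_MAX_TSO_SEGS	100

	static void fec_enet_limit_tso(struct net_device *ndev)
	{
		ndev->gso_max_segs = FEC_MAX_TSO_SEGS;
	}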
In Fugang's patch, this part was the wrong way to deal with it:
+ if (tso_count_descs(skb) >= fec_enet_txdesc_entry_free(fep)) {
+ if (net_ratelimit())
+ netdev_err(ndev, "tx queue full!\n");
+ return NETDEV_TX_BUSY;
+ }
+
This was copy/pasted from the buggy
drivers/net/ethernet/marvell/mv643xx_eth.c.
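The usual way to handle a nearly full ring is to stop the queue from
ndo_start_xmit() once the remaining descriptors cannot hold a worst-case
packet, and to wake it from the TX completion path. Rough sketch
(fec_enet_txdesc_entry_free() comes from the patch above; the
MAX_SKB_FRAGS + 1 estimate and the elided bodies are only illustrative):

	static netdev_tx_t fec_enet_start_xmit(struct sk_buff *skb,
					       struct net_device *ndev)
	{
		struct fec_enet_private *fep = netdev_priv(ndev);

		/* ... map and queue this skb on the TX ring ... */

		/* Worst case for the next packet: one descriptor per
		 * fragment plus one for the linear part / headers.
		 */
		if (fec_enet_txdesc_entry_free(fep) < MAX_SKB_FRAGS + 1)
			netif_stop_queue(ndev);

		return NETDEV_TX_OK;
	}

	static void fec_enet_tx_reclaim(struct net_device *ndev)
	{
		struct fec_enet_private *fep = netdev_priv(ndev);

		/* ... free completed descriptors ... */

		if (netif_queue_stopped(ndev) &&
		    fec_enet_txdesc_entry_free(fep) >= MAX_SKB_FRAGS + 1)
			netif_wake_queue(ndev);
	}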
>
> The required size for the tx ring is much more likely to be related
> to any interrupt mitigation that delays the refilling of ring entries.
> 512 sounds like a lot of tx ring entries.
512 slots -> 256 frames (considering SG: headers/payload use 2 descriptors)
-> ~3 ms on a Gbit NIC. It's about the right size.
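(Back-of-the-envelope, assuming full-size ~1538-byte frames on the wire:
256 * 1538 * 8 bits ~= 3.1 Mbit, i.e. about 3 ms at 1 Gbit/s.)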