Message-ID: <9bf91034-a860-4144-858b-c9000964ea1d@jasiiieee>
Date: Tue, 06 Dec 2011 14:44:18 -0500 (EST)
From: "John A. Sullivan III" <jsullivan@...nsourcedevel.com>
To: Dave Taht <dave.taht@...il.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>, netdev@...r.kernel.org,
Rick Jones <rick.jones2@...com>
Subject: Re: Latency difference between fifo and pfifo_fast
----- Original Message -----
> From: "Dave Taht" <dave.taht@...il.com>
> To: "Rick Jones" <rick.jones2@...com>
> Cc: "John A. Sullivan III" <jsullivan@...nsourcedevel.com>, "Eric Dumazet" <eric.dumazet@...il.com>,
> netdev@...r.kernel.org
> Sent: Tuesday, December 6, 2011 1:39:13 PM
> Subject: Re: Latency difference between fifo and pfifo_fast
>
> On Tue, Dec 6, 2011 at 7:20 PM, Rick Jones <rick.jones2@...com> wrote:
> > On 12/06/2011 12:51 AM, Eric Dumazet wrote:
> >>
> >> On Tuesday, 06 December 2011 at 03:39 -0500, John A. Sullivan III
> >> wrote:
> >>
> >>>> ifconfig eth2 txqueuelen 0
> >>>> tc qdisc add dev eth2 root pfifo
> >>>> tc qdisc del dev eth2 root
> >>>>
> >>> Really? I didn't know one could do that. Thanks. However, with no
> >>> queue length, do I have a significant risk of dropping packets? To
> >>> answer your other response's question, these are Intel quad-port
> >>> e1000 cards. We are frequently pushing them to near line speed, so
> >>> 1,000,000,000 / 1534 / 8 = 81,486 pps - John
> >>
> >>
> >> You can remove the qdisc layer, since the NIC itself has a TX ring queue
> >>
> >> (check exact value with ethtool -g ethX)
> >>
> >> # ethtool -g eth2
> >> Ring parameters for eth2:
> >> Pre-set maximums:
> >> RX: 4078
> >> RX Mini: 0
> >> RX Jumbo: 0
> >> TX: 4078
> >> Current hardware settings:
> >> RX: 254
> >> RX Mini: 0
> >> RX Jumbo: 0
> >> TX: 4078 ---- HERE ----
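> >>
> >> If the ring itself is too deep for your latency target, it can be
> >> shrunk as well; a rough, untested sketch (eth2 and the value 256
> >> are just examples):
> >>
> >> # ethtool -G eth2 tx 256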
> >
> >
> > And while you are down at the NIC, if every microsecond is precious
> > (no matter how close to epsilon compared to the latencies of
> > spinning rust :) you might consider disabling interrupt coalescing
> > via ethtool -C.
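> >
> > For example, something along these lines (untested; exact coalescing
> > parameter names vary by driver, so check "ethtool -c eth2" first):
> >
> > # ethtool -C eth2 rx-usecs 0 tx-usecs 0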
> >
> > rick jones
>
> Ya know, me being me, and if latency is your real problem, I can't
> help but think you'd do better by reducing those tx queues
> enormously and applying QFQ and maybe something like RED on top;
> that would balance out the differences between flows and result in
> a net benefit.
>
> I realize that you are struggling to achieve line rate in the first
> place...
>
> but from where I sit (with asbestos suit on), it would be an
> interesting experiment. (I have no data on how much CPU this stuff
> uses at these sorts of speeds.)
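>
> A rough sketch of that experiment (untested; a single catch-all class
> is shown here, and the RED parameters are guesses that would need
> tuning for GigE):
>
> # ifconfig eth2 txqueuelen 100
> # tc qdisc add dev eth2 root handle 1: qfq
> # tc class add dev eth2 parent 1: classid 1:1 qfq weight 1
> # tc qdisc add dev eth2 parent 1:1 red limit 60000 min 5000 \
>       max 15000 avpkt 1500 burst 6 bandwidth 1000mbit probability 0.02
> # tc filter add dev eth2 parent 1: protocol ip prio 1 u32 \
>       match u32 0 0 flowid 1:1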
> <snip>
>
Interesting. Would that still be true if all the traffic is the same, i.e., nothing but iSCSI packets on the network? Or would just dumping packets with minimal processing be fastest? Thanks - John
--