Message-ID: <1330158160.2462.37.camel@edumazet-laptop>
Date: Sat, 25 Feb 2012 09:22:40 +0100
From: Eric Dumazet <eric.dumazet@...il.com>
To: Bill Fink <billfink@...dspring.com>
Cc: Yevgeny Petrilin <yevgenyp@...lanox.com>,
David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [PATCH net-next 1/3] mlx4_en: TX ring size default to 1024
On Saturday, 25 February 2012 at 01:51 -0500, Bill Fink wrote:
> For a GigE NIC with a typical ring size of 256, the serialization delay
> for 256 1500 byte packets is:
>
> 1500*8*256/10^9 = ~3.1 msec
>
> For a 10-GigE NIC with a ring size of 1024, the serialization delay
> for 1024 1500 byte packets is:
>
> 1500*8*1024/10^10 = ~1.2 msec
>
> So it's not immediately clear that a ring size of 1024 is unreasonable
> for 10-GigE.
>
It's clear once you take 64-Kbyte (TSO) packets into account.
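Redoing the arithmetic above for full-size 64KB TSO frames (a
back-of-the-envelope sketch, assuming every slot holds a full segment):

65536*8*1024/10^10 = ~53.7 msec

so a full 1024-slot ring can buffer tens of milliseconds of traffic,
more than 40 times the 1500-byte figure.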
With current hardware and the current state of Linux software, you no
longer need very big NIC queues, since they bring known drawbacks.
That was true in the past, on UP systems with timer handlers that could
hog the CPU for long periods, and before TSO existed. Fortunately, none
of those CPU hogs run in softirq handlers anymore.
If your workload needs more than ~500 slots, then something is wrong
elsewhere and should be fixed. No more workarounds, please.
Now that BQL (Byte Queue Limits) is available, a driver should
implement it first, before considering big TX rings. That's a
20-minute change.
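For reference, the change amounts to a couple of calls in the driver's
TX path (a minimal sketch against the 3.3-era BQL API, not mlx4_en's
actual patch; the foo_* names and reclaim bookkeeping are made up for
illustration):

	#include <linux/netdevice.h>

	/* xmit path: after posting the skb's descriptors to the
	 * hardware ring, report the queued bytes to BQL.
	 */
	static netdev_tx_t foo_xmit(struct sk_buff *skb,
				    struct net_device *dev)
	{
		struct netdev_queue *txq;

		txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

		/* ... build and post TX descriptors for skb ... */

		netdev_tx_sent_queue(txq, skb->len);
		return NETDEV_TX_OK;
	}

	/* TX completion (NAPI poll): after reclaiming descriptors,
	 * report completed packet/byte counts so BQL can adjust the
	 * queue's in-flight byte limit.
	 */
	static void foo_tx_clean(struct net_device *dev, int ring_index,
				 unsigned int pkts_done,
				 unsigned int bytes_done)
	{
		struct netdev_queue *txq;

		txq = netdev_get_tx_queue(dev, ring_index);
		netdev_tx_completed_queue(txq, pkts_done, bytes_done);
	}

	/* On ring teardown/reset, netdev_tx_reset_queue(txq) clears
	 * the BQL state so stale counts don't stall the queue.
	 */

With those hooks in place, BQL dynamically caps the bytes in flight on
each ring, so a large ring no longer implies a large standing queue.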