Message-ID: <1486933066.8227.6.camel@edumazet-glaptop3.roam.corp.google.com>
Date: Sun, 12 Feb 2017 12:57:46 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Tariq Toukan <ttoukan.linux@...il.com>
Cc: Eric Dumazet <edumazet@...gle.com>,
"David S . Miller" <davem@...emloft.net>,
netdev <netdev@...r.kernel.org>,
Tariq Toukan <tariqt@...lanox.com>,
Martin KaFai Lau <kafai@...com>,
Willem de Bruijn <willemb@...gle.com>,
Jesper Dangaard Brouer <brouer@...hat.com>,
Brenden Blanco <bblanco@...mgrid.com>,
Alexei Starovoitov <ast@...nel.org>
Subject: Re: [PATCH v2 net-next 00/14] mlx4: order-0 allocations and page
recycling
On Sun, 2017-02-12 at 18:31 +0200, Tariq Toukan wrote:
> On 09/02/2017 6:56 PM, Eric Dumazet wrote:
> >> Default, out of box.
> > Well. Please report :
> >
> > ethtool -l eth0
> > ethtool -g eth0
> $ ethtool -g p1p1
> Ring parameters for p1p1:
> Pre-set maximums:
> RX: 8192
> RX Mini: 0
> RX Jumbo: 0
> TX: 8192
> Current hardware settings:
> RX: 1024
> RX Mini: 0
> RX Jumbo: 0
> TX: 512
We are using 4096 slots per RX queue, which is why I could not
reproduce your results.
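For reference, the ring size can be raised at runtime with ethtool.
This is only an illustration; p1p1 is the interface name from your
report, and 4096 is the value we run with:

  ethtool -G p1p1 rx 4096
  ethtool -g p1p1        # verify the new setting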
A single TCP flow can easily have more than 1024 MSS waiting in its
receive queue (the typical receive window on Linux is 6MB/2).
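A rough back-of-the-envelope, assuming a ~1500-byte MSS and the
default tcp_rmem max of 6MB (only about half of which is usable as
window because of overhead accounting):

  window   ~= 6 MB / 2       =  3 MB
  segments ~= 3 MB / 1500 B  ~= 2000 MSS
  2000 MSS  >  1024 RX ring slots

So a single flow can park roughly twice the current ring size in its
receive queue.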
I mentioned that a slightly inflated skb->truesize might have an
impact in some workloads (charging 2048 bytes per MSS instead of
1536), but this is not related to mlx4 and should be tweaked in the
TCP stack instead, since this 2048-byte (half a page on x86) strategy
is now widespread.
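To make the truesize point concrete (numbers are illustrative, and
ignore the fixed struct sk_buff overhead charged in both cases): the
receive buffer is charged skb->truesize, not payload, so per
~1500-byte MSS:

  1536-byte frags: ~1536 B charged per MSS
  2048-byte frags: ~2048 B charged per MSS
  -> ~33% more memory charged per segment, i.e. ~25% fewer segments
     fit in a given rcvbuf before the advertised window closes

which is why any compensation belongs in TCP's rcvbuf/window
autotuning rather than in the driver.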