Date:   Sun, 12 Feb 2017 23:38:53 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     Tariq Toukan <ttoukan.linux@...il.com>,
        Eric Dumazet <edumazet@...gle.com>,
        "David S . Miller" <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        Tariq Toukan <tariqt@...lanox.com>,
        Martin KaFai Lau <kafai@...com>,
        Willem de Bruijn <willemb@...gle.com>,
        Brenden Blanco <bblanco@...mgrid.com>,
        Alexei Starovoitov <ast@...nel.org>, brouer@...hat.com
Subject: Re: [PATCH v2 net-next 00/14] mlx4: order-0 allocations and page
 recycling

On Sun, 12 Feb 2017 12:57:46 -0800
Eric Dumazet <eric.dumazet@...il.com> wrote:

> On Sun, 2017-02-12 at 18:31 +0200, Tariq Toukan wrote:
> > On 09/02/2017 6:56 PM, Eric Dumazet wrote:  
> > >> Default, out of box.  
> > > Well. Please report :
> > >
> > > ethtool  -l eth0
> > > ethtool -g eth0  
> > $ ethtool -g p1p1
> > Ring parameters for p1p1:
> > Pre-set maximums:
> > RX:             8192
> > RX Mini:        0
> > RX Jumbo:       0
> > TX:             8192
> > Current hardware settings:
> > RX:             1024
> > RX Mini:        0
> > RX Jumbo:       0
> > TX:             512  
> 
> We are using 4096 slots per RX queue, which is why I could not
> reproduce your results.

Just so others understand this: the number of RX queue slots is
indirectly the size of the page-recycle "cache" in this scheme, which
depends on refcount tricks to see whether a page can be reused.
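
As a minimal sketch of that refcount trick (not the actual mlx4 code;
the helper name is made up for illustration):

#include <linux/mm.h>

/* A page can only be recycled when the driver holds the sole
 * remaining reference, i.e. no SKB still points into it. */
static bool rx_page_reusable(struct page *page)
{
        /* page_count() == 1 means we are the last user */
        return page_count(page) == 1;
}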


> A single TCP flow can easily have more than 1024 MSS waiting in its
> receive queue (the typical receive window on Linux is 6MB/2).

So you do need to increase the page-"cache" size, and you need it for
real-life cases. Interesting.
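
As a back-of-envelope check (assuming the ~3MB effective window from
Eric's 6MB/2 figure and a 1500-byte MSS; the ring itself can be grown
with e.g. ethtool -G p1p1 rx 4096):

#include <stdio.h>

int main(void)
{
        const long rwin = 6 * 1024 * 1024 / 2; /* ~3MB effective window */
        const long mss  = 1500;                /* typical Ethernet MSS */

        /* ~2097 segments can wait in one receive queue -- more than
         * the 1024 RX ring slots reported above */
        printf("segments in window: %ld\n", rwin / mss);
        return 0;
}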


> I mentioned that having a slightly inflated skb->truesize might have an
> impact in some workloads (charging for 2048 bytes per MSS instead of
> 1536), but this is not related to mlx4 and should be tweaked in the TCP
> stack instead, since this 2048-byte (half a page on x86) strategy is
> now well spread.
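
To make the per-MSS accounting concrete (numbers are illustrative,
not taken from the driver):

#include <stdio.h>

int main(void)
{
        const double mss       = 1500.0; /* payload per segment */
        const double half_page = 2048.0; /* half a 4K page on x86 */
        const double frag      = 1536.0; /* previous per-frag charge */

        /* rcvbuf overhead per segment: ~37% vs ~2% */
        printf("2048B/frag overhead: %.0f%%\n", (half_page / mss - 1) * 100);
        printf("1536B/frag overhead: %.0f%%\n", (frag / mss - 1) * 100);
        return 0;
}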


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
