Date:   Thu, 4 Jul 2019 16:06:35 +0300
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Jose Abreu <Jose.Abreu@...opsys.com>
Cc:     Jesper Dangaard Brouer <brouer@...hat.com>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-stm32@...md-mailman.stormreply.com" 
        <linux-stm32@...md-mailman.stormreply.com>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        Joao Pinto <Joao.Pinto@...opsys.com>,
        "David S . Miller" <davem@...emloft.net>,
        Giuseppe Cavallaro <peppe.cavallaro@...com>,
        Alexandre Torgue <alexandre.torgue@...com>,
        Maxime Coquelin <mcoquelin.stm32@...il.com>,
        Maxime Ripard <maxime.ripard@...tlin.com>,
        Chen-Yu Tsai <wens@...e.org>
Subject: Re: [PATCH net-next 3/3] net: stmmac: Introducing support for Page
 Pool

Hi Jose, 

> Thank you all for your review comments!
> 
> From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
> 
> > That's why I was concerned about what will happen with >1000B frames and
> > what the memory pressure is going to be.
> > The trade-off here is copying vs mapping/unmapping.
> 
> Well, the performance numbers I mentioned are for TSO with the default MTU
> (1500), using iperf3 with zero-copy. The netperf numbers follow:
> 

OK, I guess this should be fine. Here's why.
You'll allocate some extra memory from the page_pool API, equal to
the number of descriptors * 1 page.
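For example, with an RX ring of 512 descriptors and 4KB pages (illustrative
numbers, not necessarily the driver's defaults), that works out to
512 * 4KB = 2MB of extra memory.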
You also allocate SKBs to copy the data into, and recycle the page_pool
buffers.
So page_pool won't add any significant memory pressure, since we expect *all*
of its buffers to be recycled.
The SKBs are allocated anyway in the current driver, so the bottom line is
that you trade some memory (the page_pool buffers) plus a memcpy per packet
for skipping the DMA map/unmap, which is the bottleneck on your hardware.
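To put that in rough code, the per-packet RX path would look something like
the sketch below. This is only an illustration, not the actual stmmac patch;
the function name and parameters (pool, dma_dev, napi, len, etc.) are
placeholders, and it assumes the page_pool API from <net/page_pool.h>:

/* Sketch of the copy + recycle RX path (illustrative, not the real patch). */
static void rx_copy_and_recycle(struct page_pool *pool,
				struct net_device *dev,
				struct device *dma_dev,
				struct napi_struct *napi,
				struct page *page,
				dma_addr_t dma, unsigned int len)
{
	struct sk_buff *skb;

	/* Make the freshly DMA'd data visible to the CPU before copying. */
	dma_sync_single_for_cpu(dma_dev, dma, len, DMA_FROM_DEVICE);

	/* Per-packet cost: one small SKB allocation plus one memcpy. */
	skb = netdev_alloc_skb_ip_align(dev, len);
	if (skb) {
		skb_put_data(skb, page_address(page), len);
		skb->protocol = eth_type_trans(skb, dev);
		napi_gro_receive(napi, skb);
	}

	/* The page goes straight back to the pool still DMA-mapped, so
	 * there is no dma_unmap/dma_map on the per-packet path. */
	page_pool_recycle_direct(pool, page);
}

The only per-packet DMA operation left is the cache sync, which is what makes
the trade-off attractive when map/unmap is the bottleneck.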
I think it's fine.

Cheers
/Ilias
